A Python-based containerized application that automatically collects Jira sprint data every 15 minutes and provides comprehensive sprint metrics analysis including velocity, predictability, churn, and spillover tracking.
- Automated Data Collection: Collects burndown data every 15 minutes from Jira
- Sprint Metrics Analysis: Calculates velocity, predictability, churn, and spillover metrics
- Database Storage: SQLite database with optimized schema and query performance
- Containerized Deployment: Docker and Docker Compose support for easy deployment
- REST API: FastAPI-based endpoints for data access and visualization
- Error Resilience: Comprehensive error handling with automatic retry logic
- Performance Monitoring: Built-in calculation performance tracking
- Comprehensive Testing: 80%+ test coverage with extensive edge case testing
- Entry Point: `src/main.py` - Main application with graceful shutdown
- Configuration: `src/config/settings.py` - Environment-based configuration
- Database Layer: `src/database/models.py` - SQLite operations with enhanced schema
- Query Optimization: `src/database/query_optimizer.py` - Performance optimization and caching
- Jira Integration: `src/jira/client_with_db.py` - API client with database integration
- Sprint Metrics Engine: `src/metrics/sprint_calculator.py` - Comprehensive metrics calculation
- Error Management: `src/metrics/error_handling.py` - Advanced error handling and retry logic
- Performance Monitoring: `src/metrics/monitoring.py` - Real-time calculation performance tracking
- Task Scheduling: `src/scheduler/task_scheduler.py` - 15-minute periodic data collection
- REST API: `src/api/` - FastAPI endpoints for data access
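The 15-minute collection loop in `task_scheduler.py` is built on the `schedule` library (see Dependencies). The core pattern can be sketched in plain Python; `collect_burndown_data` is a hypothetical stand-in for the real collection job:

```python
import time

def make_runner(job, interval_seconds):
    """Simplified stand-in for the schedule-library loop in
    src/scheduler/task_scheduler.py: call `job` once per interval."""
    def run(iterations):
        results = []
        for _ in range(iterations):
            results.append(job())
            time.sleep(interval_seconds)
        return results
    return run

# Hypothetical collection job; the real one calls the Jira client
# and writes a snapshot to the database.
def collect_burndown_data():
    return "snapshot"

# Production would use interval_seconds=15 * 60; a tiny interval
# is used here only to illustrate the loop.
run = make_runner(collect_burndown_data, interval_seconds=0.01)
snapshots = run(iterations=3)
```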
Four interconnected tables with foreign key relationships:
- `sprints` - Sprint metadata and configuration
- `sprint_snapshots` - Point-in-time burndown data with cumulative tracking
- `work_items` - Individual story/task details
- `sprint_metrics` - Calculated velocity, predictability, churn, and spillover metrics
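The four-table layout and its foreign-key relationships can be sketched as follows. The column names here are assumptions for illustration only; the real schema lives in `src/database/models.py` and may differ:

```python
import sqlite3

# Hypothetical column set -- illustrates only the four tables and the
# foreign keys from the three child tables back to `sprints`.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sprints (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL,
    start_date TEXT,
    end_date TEXT
);
CREATE TABLE sprint_snapshots (
    id INTEGER PRIMARY KEY,
    sprint_id INTEGER NOT NULL REFERENCES sprints(id),
    captured_at TEXT NOT NULL,
    remaining_points REAL,
    completed_points REAL
);
CREATE TABLE work_items (
    id INTEGER PRIMARY KEY,
    sprint_id INTEGER NOT NULL REFERENCES sprints(id),
    issue_key TEXT NOT NULL,
    story_points REAL,
    status TEXT
);
CREATE TABLE sprint_metrics (
    id INTEGER PRIMARY KEY,
    sprint_id INTEGER NOT NULL REFERENCES sprints(id),
    velocity REAL,
    predictability REAL,
    churn REAL,
    spillover REAL
);
""")
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
```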
- Velocity: 6-sprint rolling average with LRU caching
- Predictability: Delivered vs planned work percentage
- Churn: Scope change tracking and measurement
- Spillover: Incomplete work percentage calculation
- Performance Monitoring: Built-in calculation duration and success rate tracking
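The formulas behind these definitions can be illustrated with a small sketch. The helper signatures and the sprint history below are hypothetical; the real calculator in `sprint_calculator.py` works from the database:

```python
def velocity(completed_points, window=6):
    """Rolling average of completed points over the last `window` sprints."""
    recent = completed_points[-window:]
    return sum(recent) / len(recent)

def predictability(delivered, planned):
    """Delivered vs planned work, as a percentage."""
    return 100.0 * delivered / planned

def churn(added, removed, planned):
    """Mid-sprint scope change relative to the plan, as a percentage."""
    return 100.0 * (added + removed) / planned

def spillover(incomplete, committed):
    """Share of committed work left incomplete at sprint end, as a percentage."""
    return 100.0 * incomplete / committed

# Hypothetical sprint history, oldest first
history = [20, 24, 18, 22, 26, 21, 25]
six_sprint_velocity = velocity(history)
sprint_predictability = predictability(delivered=21, planned=25)
```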
- Python 3.8+
- Docker and Docker Compose (for containerized deployment)
- Jira access token with appropriate permissions
- Clone the repository

  ```bash
  git clone <repository-url>
  cd burndown
  ```

- Set up a virtual environment

  ```bash
  python -m venv venv
  source venv/bin/activate  # On Windows: venv\Scripts\activate
  ```

- Install dependencies

  ```bash
  pip install -r requirements.txt
  ```

- Configure Jira credentials: see the instructions in `./secrets_example`

- Set environment variables

  ```bash
  export JIRA_BOARD_ID=your_board_id
  export JIRA_USERNAME=your_username
  export JIRA_INSTANCE_URL=https://your-instance.atlassian.net
  ```
```bash
# Activate virtual environment
source venv/bin/activate

# Run the main application
python src/main.py
```

```bash
# Build and run with Docker Compose
docker-compose up --build

# Run in background
docker-compose up -d

# View logs
docker-compose logs -f
```

```bash
# Run all tests with coverage
python -m pytest tests/ --cov=src --cov-report=term-missing

# Run a specific test file
python run_tests.py test_jira_client.py

# Run all tests using the custom test runner
python run_tests.py

# Run sprint metrics tests specifically
python -m pytest tests/test_sprint_metrics.py -v
```

```bash
# Calculate current sprint metrics
python -c "from src.metrics.sprint_calculator import SprintMetricsCalculator; calc = SprintMetricsCalculator(); print(calc.calculate_current_sprint_metrics())"

# Calculate velocity for the last 6 sprints
python -c "from src.metrics.sprint_calculator import SprintMetricsCalculator; calc = SprintMetricsCalculator(); print(f'Velocity: {calc.calculate_velocity()}')"

# Generate a comprehensive metrics report
python -c "from src.metrics.sprint_calculator import SprintMetricsCalculator; calc = SprintMetricsCalculator(); metrics = calc.calculate_all_metrics(); print(f'Velocity: {metrics[\"velocity\"]}, Predictability: {metrics[\"predictability\"]}%, Churn: {metrics[\"churn\"]}%, Spillover: {metrics[\"spillover\"]}%')"
```

The application includes a FastAPI-based REST API for accessing data:
- `GET /burndown/{sprint_id}` - Get burndown data for a specific sprint
- `GET /metrics/{sprint_id}` - Get calculated metrics for a specific sprint
- `GET /metrics/velocity` - Get velocity trends
- `POST /sprints` - Create a new sprint tracking entry
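Once the server is running, these endpoints can be called with any HTTP client. A small sketch of building the request URLs; the base address is an assumption (uvicorn's default), adjust to your deployment:

```python
from urllib.parse import urljoin

BASE_URL = "http://localhost:8000"  # assumed default uvicorn address

def burndown_url(sprint_id):
    """URL for the burndown endpoint of a given sprint."""
    return urljoin(BASE_URL, f"/burndown/{sprint_id}")

def metrics_url(sprint_id):
    """URL for the calculated-metrics endpoint of a given sprint."""
    return urljoin(BASE_URL, f"/metrics/{sprint_id}")

# With the server up, fetch with any HTTP client, e.g.:
#   curl http://localhost:8000/burndown/42
url = burndown_url(42)
```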
Start the API server:
```bash
uvicorn src.api.main:app --reload
```

Environment variables:

- `JIRA_BOARD_ID` - Your Jira board ID
- `JIRA_USERNAME` - Your Jira username
- `JIRA_INSTANCE_URL` - Your Jira instance URL
- `DATABASE_PATH` - Path to the SQLite database file (default: `data/burndown.db`)
- `LOG_LEVEL` - Logging level (default: `INFO`)

Configuration files:

- `secrets/jira_access_token.txt` - Jira API access token
- `docker-compose.yml` - Docker deployment configuration
- `pytest.ini` - Test configuration with coverage settings
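The environment-based configuration can be sketched as follows. The dictionary keys and the `load_settings` helper are hypothetical; the real implementation lives in `src/config/settings.py`:

```python
import os

def load_settings(env):
    """Read configuration from an environment mapping, applying the
    documented defaults for DATABASE_PATH and LOG_LEVEL."""
    return {
        "board_id": env["JIRA_BOARD_ID"],
        "username": env["JIRA_USERNAME"],
        "instance_url": env["JIRA_INSTANCE_URL"],
        "database_path": env.get("DATABASE_PATH", "data/burndown.db"),
        "log_level": env.get("LOG_LEVEL", "INFO"),
    }

# In the application this would be load_settings(os.environ);
# a literal mapping is used here for illustration.
settings = load_settings({
    "JIRA_BOARD_ID": "123",
    "JIRA_USERNAME": "alice",
    "JIRA_INSTANCE_URL": "https://example.atlassian.net",
})
```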
burndown/
├── src/
│ ├── api/ # FastAPI REST endpoints
│ ├── config/ # Configuration management
│ ├── database/ # Database models and optimization
│ ├── jira/ # Jira API integration
│ ├── metrics/ # Sprint metrics calculation
│ └── scheduler/ # Task scheduling
├── tests/ # Test suite
├── secrets/ # Configuration files
├── data/ # Database storage
├── docker-compose.yml # Container orchestration
├── requirements.txt # Python dependencies
└── README.md # This file
- atlassian-python-api (3.41.14) - Jira API integration
- loguru (0.7.2) - Advanced logging
- tenacity (8.2.3) - Retry logic for error handling
- schedule (1.2.0) - Task scheduling
- fastapi (0.104.1) - REST API framework
- pytest (7.4.3) - Testing framework
- pytest-cov (4.1.0) - Test coverage reporting
The project maintains 80%+ test coverage with comprehensive testing:
- Unit Tests: Individual function and method testing
- Integration Tests: Database operations and cross-module functionality
- Error Handling Tests: Exception scenarios and retry logic
- Performance Tests: Calculation speed and caching effectiveness
Test files:
- `tests/test_jira_client.py` - Jira integration tests
- `tests/test_database.py` - Database operation tests
- `tests/test_scheduler.py` - Scheduling functionality tests
- `tests/test_sprint_metrics.py` - Comprehensive metrics calculation tests (20+ test cases)
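A unit test in this suite might look like the following sketch. The `predictability` helper is a hypothetical stand-in; the real tests exercise `SprintMetricsCalculator` and its edge cases:

```python
def predictability(delivered, planned):
    """Hypothetical helper: delivered vs planned work as a percentage,
    degrading gracefully when nothing was planned."""
    if planned == 0:
        return 0.0
    return 100.0 * delivered / planned

def test_full_delivery_is_100_percent():
    assert predictability(delivered=30, planned=30) == 100.0

def test_empty_sprint_degrades_gracefully():
    # Edge case: an empty sprint should not divide by zero.
    assert predictability(delivered=0, planned=0) == 0.0

# pytest would discover these automatically; call them directly here.
test_full_delivery_is_100_percent()
test_empty_sprint_degrades_gracefully()
```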
Error Handling System:
- Custom exception hierarchy for different error types
- Automatic retry logic for transient failures
- Data validation and integrity checks
- Graceful degradation for partial data scenarios
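The retry behaviour itself is handled by the `tenacity` library (see Dependencies); the equivalent logic can be sketched in plain Python. The exception name and helper below are hypothetical illustrations of the pattern, not the project's actual API:

```python
import time

class TransientJiraError(Exception):
    """Hypothetical transient-failure type; the real code defines a
    custom exception hierarchy in src/metrics/error_handling.py."""

def with_retry(func, attempts=3, delay=0.01):
    """Retry `func` on transient errors, re-raising after the last attempt."""
    for attempt in range(1, attempts + 1):
        try:
            return func()
        except TransientJiraError:
            if attempt == attempts:
                raise
            time.sleep(delay)

calls = []
def flaky_fetch():
    # Fails twice, then succeeds -- simulating a transient outage.
    calls.append(1)
    if len(calls) < 3:
        raise TransientJiraError("temporary outage")
    return "data"

result = with_retry(flaky_fetch)
```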
Performance Optimization:
- LRU caching for velocity calculations
- Database query optimization and indexing
- Batch operations for large datasets
- Built-in performance monitoring and tracking
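The LRU caching of velocity calculations can be illustrated with the standard library's `functools.lru_cache`; the helper below is a hypothetical mirror of the cached calculation, not the project's actual function:

```python
from functools import lru_cache

call_count = 0

@lru_cache(maxsize=32)
def velocity_for(history):
    """6-sprint rolling average. `history` must be hashable (a tuple)
    for lru_cache to key on it; repeated calls hit the cache."""
    global call_count
    call_count += 1
    recent = history[-6:]
    return sum(recent) / len(recent)

history = (20, 24, 18, 22, 26, 21)
first = velocity_for(history)
second = velocity_for(history)  # served from the cache, no recomputation
```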
- ✅ Core MVP Complete - Data collection, database, scheduling
- ✅ Sprint Metrics Engine - Velocity, predictability, churn, spillover calculation
- ✅ Containerization - Docker and Docker Compose deployment
- ✅ Enhanced Database - Optimized schema with performance improvements
- ✅ Error Handling - Comprehensive retry logic and custom exceptions
- ✅ Performance Monitoring - Built-in calculation tracking
- ✅ REST API - FastAPI endpoints for data access
- ✅ Testing Suite - 80%+ coverage with extensive test cases
This project follows established engineering principles:
- KISS (Keep It Simple) - Uses proven libraries rather than custom implementations
- YAGNI (You Ain't Gonna Need It) - Implements only required MVP features
- SOLID Principles - Modular design with single responsibility classes
- Comprehensive Testing - Extensive test coverage with edge case scenarios
- Performance First - Built-in monitoring and optimization from the start
- Error Resilience - Production-ready error handling and recovery
- Ensure the virtual environment is activated: `source venv/bin/activate`
- Run tests before making changes: `python run_tests.py`
- Follow existing code patterns and conventions
- Maintain test coverage above 80%
- Update documentation as needed