
Burndown - Jira Sprint Monitoring & Metrics Analysis

A Python-based, containerized application that automatically collects Jira sprint data every 15 minutes and provides comprehensive sprint metrics analysis, including velocity, predictability, churn, and spillover tracking.

Features

  • Automated Data Collection: Collects burndown data every 15 minutes from Jira
  • Sprint Metrics Analysis: Calculates velocity, predictability, churn, and spillover metrics
  • Database Storage: SQLite database with optimized schema and query performance
  • Containerized Deployment: Docker and Docker Compose support for easy deployment
  • REST API: FastAPI-based endpoints for data access and visualization
  • Error Resilience: Comprehensive error handling with automatic retry logic
  • Performance Monitoring: Built-in calculation performance tracking
  • Comprehensive Testing: 80%+ test coverage with extensive edge case testing

Architecture

Core Components

  • Entry Point: src/main.py - Main application with graceful shutdown
  • Configuration: src/config/settings.py - Environment-based configuration
  • Database Layer: src/database/models.py - SQLite operations with enhanced schema
  • Query Optimization: src/database/query_optimizer.py - Performance optimization and caching
  • Jira Integration: src/jira/client_with_db.py - API client with database integration
  • Sprint Metrics Engine: src/metrics/sprint_calculator.py - Comprehensive metrics calculation
  • Error Management: src/metrics/error_handling.py - Advanced error handling and retry logic
  • Performance Monitoring: src/metrics/monitoring.py - Real-time calculation performance tracking
  • Task Scheduling: src/scheduler/task_scheduler.py - 15-minute periodic data collection
  • REST API: src/api/ - FastAPI endpoints for data access
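The 15-minute collection cycle driven by src/scheduler/task_scheduler.py can be sketched with a plain standard-library loop (the project itself lists the schedule library as a dependency, so the real module likely differs; the `collect_burndown_snapshot` job name below is hypothetical):

```python
import time

def run_periodically(job, interval_seconds=15 * 60, max_runs=None):
    """Call `job` every `interval_seconds`; log and continue on failure."""
    runs = 0
    while max_runs is None or runs < max_runs:
        try:
            job()
        except Exception as exc:  # a failed collection must not kill the loop
            print(f"collection failed: {exc}")
        runs += 1
        if max_runs is None or runs < max_runs:
            time.sleep(interval_seconds)

# Example (hypothetical job): run_periodically(collect_burndown_snapshot)
```

The `max_runs` parameter exists only so the loop is testable; in production the loop runs until the process receives a shutdown signal.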

Database Schema

Four interconnected tables with foreign key relationships:

  • sprints - Sprint metadata and configuration
  • sprint_snapshots - Point-in-time burndown data with cumulative tracking
  • work_items - Individual story/task details
  • sprint_metrics - Calculated velocity, predictability, churn, and spillover metrics
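A minimal sketch of how these four tables might relate (the table names and relationships come from this README; the column names are illustrative assumptions, since the real schema lives in src/database/models.py):

```python
import sqlite3

SCHEMA = """
CREATE TABLE sprints (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL,
    start_date TEXT,
    end_date TEXT
);
CREATE TABLE sprint_snapshots (
    id INTEGER PRIMARY KEY,
    sprint_id INTEGER NOT NULL REFERENCES sprints(id),
    captured_at TEXT NOT NULL,     -- point-in-time burndown sample
    remaining_points REAL,
    completed_points REAL          -- cumulative tracking
);
CREATE TABLE work_items (
    id INTEGER PRIMARY KEY,
    sprint_id INTEGER NOT NULL REFERENCES sprints(id),
    issue_key TEXT NOT NULL,       -- individual story/task
    status TEXT,
    story_points REAL
);
CREATE TABLE sprint_metrics (
    sprint_id INTEGER PRIMARY KEY REFERENCES sprints(id),
    velocity REAL,
    predictability REAL,
    churn REAL,
    spillover REAL
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(SCHEMA)
```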

Sprint Metrics Capabilities

  • Velocity: 6-sprint rolling average with LRU caching
  • Predictability: Delivered vs planned work percentage
  • Churn: Scope change tracking and measurement
  • Spillover: Incomplete work percentage calculation
  • Performance Monitoring: Built-in calculation duration and success rate tracking
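The metric definitions above can be sketched as simple ratios (the authoritative formulas live in src/metrics/sprint_calculator.py; these are plausible readings of the descriptions, not the project's exact implementation):

```python
from functools import lru_cache

def predictability(delivered_points: float, planned_points: float) -> float:
    """Delivered vs planned work, as a percentage."""
    return 100.0 * delivered_points / planned_points if planned_points else 0.0

def churn(scope_changed_points: float, planned_points: float) -> float:
    """Points added or removed mid-sprint, relative to the plan."""
    return 100.0 * scope_changed_points / planned_points if planned_points else 0.0

def spillover(incomplete_points: float, committed_points: float) -> float:
    """Share of committed work left incomplete at sprint end."""
    return 100.0 * incomplete_points / committed_points if committed_points else 0.0

@lru_cache(maxsize=32)
def velocity(completed_by_sprint: tuple) -> float:
    """Rolling average over the last six sprints' completed points."""
    window = completed_by_sprint[-6:]
    return sum(window) / len(window) if window else 0.0
```

Note the velocity history is passed as a tuple rather than a list so that `lru_cache` (which requires hashable arguments) can memoize it.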

Quick Start

Prerequisites

  • Python 3.8+
  • Docker and Docker Compose (for containerized deployment)
  • Jira access token with appropriate permissions

Installation

  1. Clone the repository

    git clone <repository-url>
    cd burndown
  2. Set up virtual environment

    python -m venv venv
    source venv/bin/activate  # On Windows: venv\Scripts\activate
  3. Install dependencies

    pip install -r requirements.txt
  4. Configure Jira credentials: see the instructions in ./secrets_example

  5. Set environment variables

    export JIRA_BOARD_ID=your_board_id
    export JIRA_USERNAME=your_username
    export JIRA_INSTANCE_URL=https://your-instance.atlassian.net

Running the Application

Local Development

# Activate virtual environment
source venv/bin/activate

# Run the main application
python src/main.py

Docker Deployment

# Build and run with Docker Compose
docker-compose up --build

# Run in background
docker-compose up -d

# View logs
docker-compose logs -f

Running Tests

# Run all tests with coverage
python -m pytest tests/ --cov=src --cov-report=term-missing

# Run specific test file
python run_tests.py test_jira_client.py

# Run all tests using custom test runner
python run_tests.py

# Run sprint metrics tests specifically
python -m pytest tests/test_sprint_metrics.py -v

Usage Examples

Sprint Metrics Calculation

# Calculate current sprint metrics
python -c "from src.metrics.sprint_calculator import SprintMetricsCalculator; calc = SprintMetricsCalculator(); print(calc.calculate_current_sprint_metrics())"

# Calculate velocity for last 6 sprints
python -c "from src.metrics.sprint_calculator import SprintMetricsCalculator; calc = SprintMetricsCalculator(); print(f'Velocity: {calc.calculate_velocity()}')"

# Generate comprehensive metrics report
python -c "from src.metrics.sprint_calculator import SprintMetricsCalculator; calc = SprintMetricsCalculator(); metrics = calc.calculate_all_metrics(); print(f'Velocity: {metrics[\"velocity\"]}, Predictability: {metrics[\"predictability\"]}%, Churn: {metrics[\"churn\"]}%, Spillover: {metrics[\"spillover\"]}%')"

API Endpoints

The application includes a FastAPI-based REST API for accessing data:

  • GET /burndown/{sprint_id} - Get burndown data for a specific sprint
  • GET /metrics/{sprint_id} - Get calculated metrics for a specific sprint
  • GET /metrics/velocity - Get velocity trends
  • POST /sprints - Create a new sprint tracking entry

Start the API server:

uvicorn src.api.main:app --reload
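With the server running on the default uvicorn address, the endpoints above can be called from Python with nothing but the standard library (the sprint id 42 is a placeholder, and the response shape depends on the API's actual models):

```python
import json
import urllib.request

BASE_URL = "http://127.0.0.1:8000"  # uvicorn's default bind address

def get_sprint_metrics(sprint_id: int) -> dict:
    """Fetch calculated metrics for one sprint from the running API."""
    with urllib.request.urlopen(f"{BASE_URL}/metrics/{sprint_id}") as resp:
        return json.load(resp)

# With the server running: get_sprint_metrics(42)
```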

Configuration

Environment Variables

  • JIRA_BOARD_ID - Your Jira board ID
  • JIRA_USERNAME - Your Jira username
  • JIRA_INSTANCE_URL - Your Jira instance URL
  • DATABASE_PATH - Path to SQLite database file (default: data/burndown.db)
  • LOG_LEVEL - Logging level (default: INFO)
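Reading these variables might look like the following sketch (the real logic is in src/config/settings.py and may use a different structure; the names and defaults here are taken from the list above):

```python
import os

def load_settings() -> dict:
    """Read configuration from the environment, with the documented defaults."""
    return {
        "board_id": os.environ["JIRA_BOARD_ID"],          # required
        "username": os.environ["JIRA_USERNAME"],          # required
        "instance_url": os.environ["JIRA_INSTANCE_URL"],  # required
        "database_path": os.environ.get("DATABASE_PATH", "data/burndown.db"),
        "log_level": os.environ.get("LOG_LEVEL", "INFO"),
    }
```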

Configuration Files

  • secrets/jira_access_token.txt - Jira API access token
  • docker-compose.yml - Docker deployment configuration
  • pytest.ini - Test configuration with coverage settings

Development

Project Structure

burndown/
├── src/
│   ├── api/                    # FastAPI REST endpoints
│   ├── config/                 # Configuration management
│   ├── database/               # Database models and optimization
│   ├── jira/                   # Jira API integration
│   ├── metrics/                # Sprint metrics calculation
│   └── scheduler/              # Task scheduling
├── tests/                      # Test suite
├── secrets/                    # Configuration files
├── data/                       # Database storage
├── docker-compose.yml          # Container orchestration
├── requirements.txt            # Python dependencies
└── README.md                   # This file

Key Dependencies

  • atlassian-python-api (3.41.14) - Jira API integration
  • loguru (0.7.2) - Advanced logging
  • tenacity (8.2.3) - Retry logic for error handling
  • schedule (1.2.0) - Task scheduling
  • fastapi (0.104.1) - REST API framework
  • pytest (7.4.3) - Testing framework
  • pytest-cov (4.1.0) - Test coverage reporting

Testing

The project maintains 80%+ test coverage with comprehensive testing:

  • Unit Tests: Individual function and method testing
  • Integration Tests: Database operations and cross-module functionality
  • Error Handling Tests: Exception scenarios and retry logic
  • Performance Tests: Calculation speed and caching effectiveness

Test files:

  • tests/test_jira_client.py - Jira integration tests
  • tests/test_database.py - Database operation tests
  • tests/test_scheduler.py - Scheduling functionality tests
  • tests/test_sprint_metrics.py - Comprehensive metrics calculation tests (20+ test cases)

Error Handling & Performance

Error Handling System:

  • Custom exception hierarchy for different error types
  • Automatic retry logic for transient failures
  • Data validation and integrity checks
  • Graceful degradation for partial data scenarios

Performance Optimization:

  • LRU caching for velocity calculations
  • Database query optimization and indexing
  • Batch operations for large datasets
  • Built-in performance monitoring and tracking
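The project's retry logic is built on the tenacity library (see Key Dependencies); the underlying pattern, retrying transient failures with exponential backoff, can be sketched with the standard library alone:

```python
import time
from functools import wraps

def retry(attempts: int = 3, base_delay: float = 0.5):
    """Retry a transient-failure-prone call with exponential backoff."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            delay = base_delay
            for attempt in range(1, attempts + 1):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if attempt == attempts:
                        raise  # retries exhausted: surface the last error
                    time.sleep(delay)
                    delay *= 2  # back off: 0.5s, 1s, 2s, ...
        return wrapper
    return decorator
```

With tenacity itself, the equivalent would be a `@retry(...)` decorator configured with its stop and wait strategies rather than this hand-rolled version.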

Implementation Status

  • Core MVP Complete - Data collection, database, scheduling
  • Sprint Metrics Engine - Velocity, predictability, churn, spillover calculation
  • Containerization - Docker and Docker Compose deployment
  • Enhanced Database - Optimized schema with performance improvements
  • Error Handling - Comprehensive retry logic and custom exceptions
  • Performance Monitoring - Built-in calculation tracking
  • REST API - FastAPI endpoints for data access
  • Testing Suite - 80%+ coverage with extensive test cases

Engineering Principles

This project follows established engineering principles:

  • KISS (Keep It Simple) - Uses proven libraries rather than custom implementations
  • YAGNI (You Ain't Gonna Need It) - Implements only required MVP features
  • SOLID Principles - Modular design with single responsibility classes
  • Comprehensive Testing - Extensive test coverage with edge case scenarios
  • Performance First - Built-in monitoring and optimization from the start
  • Error Resilience - Production-ready error handling and recovery

Contributing

  1. Ensure virtual environment is activated: source venv/bin/activate
  2. Run tests before making changes: python run_tests.py
  3. Follow existing code patterns and conventions
  4. Maintain test coverage above 80%
  5. Update documentation as needed
