# OpenBench
A modular benchmarking framework for scientific Python projects with performance profiling, memory tracking, structured reporting, and visualization support.
## Overview

OpenBench is a lightweight, extensible benchmarking framework designed for scientific and performance-critical Python ecosystems.

It enables developers to:

- Measure execution time accurately
- Track memory usage
- Run multiple benchmark iterations
- Generate structured JSON reports
- Integrate benchmarking into development workflows
- Visualize performance trends
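The time and memory measurement at the heart of this workflow can be sketched with the standard library alone. The `measure` helper below is an illustrative assumption for this README, not OpenBench's actual API:

```python
import time
import tracemalloc


def measure(func, *args, **kwargs):
    """Run func once; return (result, elapsed_seconds, peak_memory_mb).

    Hypothetical helper -- not OpenBench's real interface.
    """
    tracemalloc.start()
    start = time.perf_counter()                # high-resolution wall clock
    result = func(*args, **kwargs)
    elapsed = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()  # peak bytes since start()
    tracemalloc.stop()
    return result, elapsed, peak / (1024 * 1024)


result, seconds, peak_mb = measure(sum, range(1_000_000))
```

`time.perf_counter()` is preferred over `time.time()` for benchmarking because it is monotonic and has the highest available resolution.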
OpenBench aims to standardize performance evaluation across scientific Python libraries such as NumPy, SciPy, pandas, and other NumFOCUS-affiliated projects.
## Motivation

Performance benchmarking in scientific Python projects is often:

- Inconsistent across repositories
- Difficult to reproduce
- Lacking structured reporting
- Missing regression tracking
Each project typically implements custom benchmarking logic, leading to fragmentation.
OpenBench provides a reusable benchmarking core that promotes:

- Reproducibility
- Modularity
- Clear performance metrics
- CI integration potential
- Extensibility for scientific workloads
## Architecture

```text
OpenBench/
├── openbench/
│   ├── core.py            # Time & memory profiling logic
│   ├── runner.py          # Multi-run orchestration
│   ├── reporter.py        # JSON report generation
│   └── visualization.py   # Performance plotting
├── examples/              # Sample benchmark workloads
├── tests/                 # Pytest test suite
├── run.py                 # CLI entry point
├── pyproject.toml
└── README.md
```
## Installation

Clone the repository:

```bash
git clone https://github.com/shivamrajsr07/OpenBench.git
cd OpenBench
```

Create and activate a virtual environment:

```bash
python -m venv venv
venv\Scripts\activate        # Windows
# source venv/bin/activate   # macOS / Linux
```

Install dependencies:

```bash
pip install -r requirements.txt
```

Install the package in editable mode:

```bash
pip install -e .
```
## Usage

Run the benchmark CLI:

```bash
python run.py --runs 5
```

This will:

- Execute the benchmark workload multiple times
- Measure execution time
- Track memory consumption
- Compute average metrics
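Conceptually, a `--runs 5` invocation repeats a workload and averages the timings. A minimal sketch of that loop follows; the function and workload names are illustrative, not OpenBench's internals:

```python
import statistics
import time


def run_benchmark(workload, runs=5):
    """Execute workload `runs` times; return per-run and average timings."""
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        workload()
        times.append(time.perf_counter() - start)
    return {"runs": runs, "times": times, "average": statistics.mean(times)}


report = run_benchmark(lambda: sum(i * i for i in range(100_000)), runs=5)
print(f"Average Time: {report['average']:.4f} seconds")
```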
## Example Output

```text
Benchmark Completed
Average Time: 0.0432 seconds
Average Memory: 1.87 MB
```

JSON report generation is supported via the reporting module.
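A structured JSON report can be as simple as serializing the collected metrics. This sketch uses field names of my own choosing rather than OpenBench's actual schema:

```python
import json
import platform
from datetime import datetime, timezone

# Hypothetical metrics gathered by a benchmark run (values are placeholders)
metrics = {"average_time_s": 0.0432, "average_memory_mb": 1.87, "runs": 5}

report = {
    "benchmark": "example_workload",
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "python": platform.python_version(),
    "metrics": metrics,
}

report_json = json.dumps(report, indent=2)
print(report_json)
```

Recording the timestamp and Python version alongside the metrics is what makes later regression comparisons meaningful.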
## Running Tests

OpenBench includes test coverage using pytest.

Run the tests:

```bash
python -m pytest
```
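A pytest check for timing logic might look like the sketch below. The `elapsed_seconds` helper and the assertion are generic examples of testing measurement behavior, not taken from OpenBench's actual test suite:

```python
import time


def elapsed_seconds(func):
    """Time a single call of func with the monotonic perf counter."""
    start = time.perf_counter()
    func()
    return time.perf_counter() - start


def test_elapsed_is_at_least_sleep_duration():
    # Sleeping ~20 ms must register as at least roughly that long.
    assert elapsed_seconds(lambda: time.sleep(0.02)) >= 0.015
```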
## Visualization

Performance results can be visualized using the visualization module:

- Execution time per run
- Trend plotting
- Performance comparison (future roadmap)
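A per-run execution-time plot needs only a few lines of matplotlib. The timing data here is a hypothetical stand-in for real benchmark results:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so no display is required
import matplotlib.pyplot as plt

# Hypothetical per-run timings (seconds) from a 5-run benchmark
times = [0.0451, 0.0428, 0.0433, 0.0419, 0.0429]

plt.plot(range(1, len(times) + 1), times, marker="o")
plt.xlabel("Run")
plt.ylabel("Execution time (s)")
plt.title("Execution time per run")
plt.savefig("benchmark_times.png")
```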
## Roadmap

Planned enhancements:

- Benchmark decorators for simplified usage
- Historical regression tracking
- HTML dashboard reports
- GitHub Actions integration
- CI-based performance alerts
- Plugin architecture for scientific ecosystems
- PyPI packaging
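The planned benchmark decorator could follow a familiar pattern. This is a speculative sketch of one possible design, not a committed API:

```python
import functools
import time


def benchmark(runs=3):
    """Speculative decorator: time each call `runs` times, keep the timings."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            times = []
            for _ in range(runs):
                start = time.perf_counter()
                result = func(*args, **kwargs)
                times.append(time.perf_counter() - start)
            wrapper.last_times = times  # per-run timings from the last call
            return result
        wrapper.last_times = []
        return wrapper
    return decorator


@benchmark(runs=3)
def workload():
    return sum(range(10_000))


value = workload()
```

Attaching results to the wrapper keeps the decorated function's return value unchanged while still exposing the timings for reporting.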
## Target Ecosystem

OpenBench is designed to integrate with:

- NumPy
- SciPy
- pandas
- NetworkX
- scikit-learn
- Other NumFOCUS scientific Python projects
## Why OpenBench Matters

Performance tooling is critical for:

- Research reproducibility
- Scientific computing reliability
- Detecting regressions in numerical libraries
- Maintaining performance guarantees
OpenBench provides a foundation for systematic benchmarking in scientific Python.
## License

MIT License

## Author

Shivam
Computer Science Student | Scientific Python & Performance Engineering Enthusiast