
# 🚀 OpenBench

A modular benchmarking framework for scientific Python projects with performance profiling, memory tracking, structured reporting, and visualization support.

## 📌 Overview

OpenBench is a lightweight, extensible benchmarking framework designed for scientific and performance-critical Python ecosystems.

It enables developers to:

- Measure execution time accurately
- Track memory usage
- Run multiple benchmark iterations
- Generate structured JSON reports
- Integrate benchmarking into development workflows
- Visualize performance trends
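A single time-and-memory measurement of this kind can be sketched with the standard library's `time.perf_counter` and `tracemalloc`. The `profile` helper below is a hypothetical illustration of the idea, not OpenBench's actual API:

```python
# Hypothetical sketch of a combined time/memory measurement; the
# function name and return shape are assumptions, not OpenBench's API.
import time
import tracemalloc

def profile(func, *args, **kwargs):
    """Run func once, returning (result, elapsed_seconds, peak_mb)."""
    tracemalloc.start()
    start = time.perf_counter()
    result = func(*args, **kwargs)
    elapsed = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()  # (current, peak) in bytes
    tracemalloc.stop()
    return result, elapsed, peak / (1024 * 1024)

result, elapsed, peak_mb = profile(sum, range(100_000))
```

`tracemalloc` only tracks Python-level allocations, so native-extension memory (e.g. inside NumPy) may need a different tracker such as the process RSS.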

OpenBench aims to standardize performance evaluation across scientific Python libraries such as NumPy, SciPy, pandas, and other NumFOCUS-affiliated projects.

## 🎯 Motivation

Performance benchmarking in scientific Python projects is often:

- Inconsistent across repositories
- Difficult to reproduce
- Lacking structured reporting
- Missing regression tracking

Each project typically implements custom benchmarking logic, leading to fragmentation.

OpenBench provides a reusable benchmarking core that promotes:

- Reproducibility
- Modularity
- Clear performance metrics
- CI integration potential
- Extensibility for scientific workloads

๐Ÿ— Architecture OpenBench/ โ”‚ โ”œโ”€โ”€ openbench/ โ”‚ โ”œโ”€โ”€ core.py # Time & memory profiling logic โ”‚ โ”œโ”€โ”€ runner.py # Multi-run orchestration โ”‚ โ”œโ”€โ”€ reporter.py # JSON report generation โ”‚ โ””โ”€โ”€ visualization.py # Performance plotting โ”‚ โ”œโ”€โ”€ examples/ # Sample benchmark workloads โ”œโ”€โ”€ tests/ # Pytest test suite โ”œโ”€โ”€ run.py # CLI entry point โ”œโ”€โ”€ pyproject.toml โ””โ”€โ”€ README.md

โš™๏ธ Installation

Clone the repository:

```bash
git clone https://github.com/shivamrajsr07/OpenBench.git
cd OpenBench
```

Create and activate a virtual environment:

```bash
python -m venv venv
venv\Scripts\activate        # Windows
# source venv/bin/activate   # macOS/Linux
```

Install dependencies:

```bash
pip install -r requirements.txt
```

Install the package in editable mode:

```bash
pip install -e .
```

## 🚀 Usage

Run the benchmark CLI:

```bash
python run.py --runs 5
```

This will:

- Execute the benchmark workload multiple times
- Measure execution time
- Track memory consumption
- Compute average metrics
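The steps above can be sketched as a minimal multi-run loop. The `run_benchmark` name and the report keys here are invented for illustration and may not match the runner module's real interface:

```python
# Hypothetical multi-run orchestration loop; names are assumptions.
import statistics
import time

def run_benchmark(func, runs=5):
    """Call func `runs` times and return summary timing statistics."""
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        func()
        times.append(time.perf_counter() - start)
    return {"runs": runs, "avg_time": statistics.mean(times)}

report = run_benchmark(lambda: sum(range(10_000)), runs=5)
```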

## 📊 Example Output

```text
Benchmark Completed
Average Time: 0.0432 seconds
Average Memory: 1.87 MB
```

JSON report generation is supported via the reporting module.
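A structured report of this kind reduces to a plain `json.dump` call. The field names in this sketch are assumptions about the schema, not the reporting module's actual output format:

```python
# Hedged sketch of a JSON benchmark report; the schema shown here is
# an assumption, not OpenBench's actual report format.
import json

report = {
    "benchmark": "example_workload",   # hypothetical benchmark name
    "runs": 5,
    "avg_time_s": 0.0432,
    "avg_memory_mb": 1.87,
}

with open("benchmark_report.json", "w") as fh:
    json.dump(report, fh, indent=2)
```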

## 🧪 Running Tests

OpenBench includes test coverage using pytest.

Run tests:

```bash
python -m pytest
```

## 📈 Visualization

Performance results can be visualized using the visualization module:

- Execution time per run
- Trend plotting
- Performance comparison (future roadmap)
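An execution-time-per-run plot could be produced with matplotlib along these lines. The `plot_runs` helper and output filename are assumptions for illustration, not the visualization module's real API:

```python
# Hypothetical plotting sketch, assuming matplotlib is available;
# the helper name and file name are illustrative only.
import matplotlib
matplotlib.use("Agg")  # headless backend, no display needed
import matplotlib.pyplot as plt

def plot_runs(times, outfile="benchmark_times.png"):
    """Plot per-run execution times and save the figure to outfile."""
    fig, ax = plt.subplots()
    ax.plot(range(1, len(times) + 1), times, marker="o")
    ax.set_xlabel("Run")
    ax.set_ylabel("Execution time (s)")
    ax.set_title("Execution time per run")
    fig.savefig(outfile)
    return fig

fig = plot_runs([0.045, 0.043, 0.041, 0.044, 0.042])
```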

## 🔮 Roadmap

Planned enhancements:

- Benchmark decorators for simplified usage
- Historical regression tracking
- HTML dashboard reports
- GitHub Actions integration
- CI-based performance alerts
- Plugin architecture for scientific ecosystems
- PyPI packaging
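A benchmark decorator along the lines of the first roadmap item could look like the following sketch. This is speculative (the feature is planned, not implemented), and every name here is invented:

```python
# Speculative sketch of a planned @benchmark decorator; not existing
# OpenBench code -- all names are hypothetical.
import functools
import time

def benchmark(runs=3):
    """Decorator that times its target over `runs` calls."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            times = []
            result = None
            for _ in range(runs):
                start = time.perf_counter()
                result = func(*args, **kwargs)
                times.append(time.perf_counter() - start)
            wrapper.avg_time = sum(times) / runs  # expose the average
            return result
        return wrapper
    return decorator

@benchmark(runs=3)
def workload():
    return sum(range(10_000))

value = workload()
```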

## 🎯 Target Ecosystem

OpenBench is designed to integrate with:

- NumPy
- SciPy
- pandas
- NetworkX
- scikit-learn
- Other NumFOCUS scientific Python projects

## 🧠 Why OpenBench Matters

Performance tooling is critical for:

- Research reproducibility
- Scientific computing reliability
- Detecting regressions in numerical libraries
- Maintaining performance guarantees

OpenBench provides a foundation for systematic benchmarking in scientific Python.

## 📄 License

MIT License

## 👤 Author

**Shivam**
Computer Science Student | Scientific Python & Performance Engineering Enthusiast
