Production-ready multicalibration for machine learning
MCGrad is a scalable and easy-to-use tool for multicalibration. It ensures your ML model predictions are well-calibrated not just globally (across all data), but also across virtually any segment defined by your features (e.g., by country, content type, or any combination).
Traditional calibration methods such as Isotonic Regression or Platt Scaling only ensure global calibration: predicted probabilities match observed outcomes on average across all of the data. Your model can still be systematically overconfident or underconfident for specific groups. MCGrad automatically identifies and corrects these hidden calibration gaps without requiring you to manually specify protected groups.
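To make the gap concrete, here is a small self-contained sketch (plain NumPy on synthetic data, not using MCGrad itself) of a model that looks perfectly calibrated globally while being off by roughly 20 points in each segment:

```python
import numpy as np

# Synthetic example: two segments whose calibration errors cancel globally.
# Segment A has a true positive rate of 0.7, segment B of 0.3, but the
# model predicts 0.5 everywhere.
rng = np.random.default_rng(0)
n = 10_000
segment = np.repeat(["A", "B"], n)            # e.g. two countries
predictions = np.full(2 * n, 0.5)
labels = rng.binomial(1, np.where(segment == "A", 0.7, 0.3))

# Global gap: mean prediction vs. mean outcome over all data (~0.0).
global_gap = abs(predictions.mean() - labels.mean())

# Per-segment gaps reveal the hidden miscalibration (~0.2 each).
segment_gaps = {
    s: abs(predictions[segment == s].mean() - labels[segment == s].mean())
    for s in ("A", "B")
}
print(f"global gap: {global_gap:.3f}")
print(f"per-segment gaps: {segment_gaps}")
```

The model passes any global calibration check yet is badly miscalibrated for every individual segment.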
A globally well-calibrated model: predictions match observed outcomes on average.
The same model showing hidden miscalibration when broken down by segment. MCGrad fixes this.
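The fix can be sketched in toy form. The following is not MCGrad's actual algorithm (which fits gradient-boosted corrections; see the paper), just an illustration of the multicalibration idea: repeatedly find a segment where predictions disagree with outcomes, and shift that segment's predictions in logit space toward its observed positive rate.

```python
import numpy as np

def logit(p):
    return np.log(p / (1 - p))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
n = 5_000
segment = np.repeat(["US", "UK"], n)
labels = rng.binomial(1, np.where(segment == "US", 0.7, 0.3))
preds = np.full(2 * n, 0.5)  # globally calibrated, wrong in every segment

# Boosting-style passes: nudge each segment's predictions (in logit space)
# toward that segment's observed positive rate.
for _ in range(10):
    for s in ("US", "UK"):
        mask = segment == s
        shift = logit(labels[mask].mean()) - logit(preds[mask].mean())
        preds[mask] = sigmoid(logit(preds[mask]) + shift)

# After correction, every segment's mean prediction matches its outcome rate.
segment_gaps = {
    s: abs(preds[segment == s].mean() - labels[segment == s].mean())
    for s in ("US", "UK")
}
```

MCGrad generalizes this idea to arbitrary feature combinations without enumerating the segments by hand.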
- Powerful Multicalibration — Calibrates across unlimited segments without pre-specification
- Data Efficient — Like modern ML methods
- Lightweight & Fast — Adds limited latency at training and inference time
- Improved Performance — Likelihood-improving with significant PRAUC gains
- Safe by Design — Cannot harm base model performance on training data
MCGrad has been deployed at Meta on hundreds of production models. See the research paper for detailed experimental results.
Requirements: Python 3.10+
Stable release:
```shell
pip install mcgrad
```

Latest development version:

```shell
pip install git+https://github.com/facebookincubator/MCGrad.git
```

Quick start:

```python
from mcgrad import methods
import numpy as np
import pandas as pd

# Prepare your data in a DataFrame
df = pd.DataFrame({
    'prediction': np.array([0.1, 0.3, 0.7, 0.9, 0.5, 0.2]),  # Your model's predictions
    'label': np.array([0, 0, 1, 1, 1, 0]),  # Ground-truth labels
    'country': ['US', 'UK', 'US', 'UK', 'US', 'UK'],  # Categorical feature
    'content_type': ['photo', 'video', 'photo', 'video', 'photo', 'video'],  # Categorical feature
})

# Apply MCGrad
mcgrad = methods.MCGrad()
mcgrad.fit(
    df_train=df,
    prediction_column_name='prediction',
    label_column_name='label',
    categorical_feature_column_names=['country', 'content_type'],
)

# Get calibrated predictions
calibrated_predictions = mcgrad.predict(
    df=df,
    prediction_column_name='prediction',
    categorical_feature_column_names=['country', 'content_type'],
)
# Returns a numpy array of calibrated probabilities, e.g. [0.12, 0.28, 0.72, ...]
```

- Website & Guides: mcgrad.dev
- Why MCGrad? — Learn about the challenges MCGrad solves
- Quick Start — Get started quickly
- Methodology — Deep dive into how MCGrad works
- API Reference — Full API documentation
- Questions & Bugs: Open an issue on GitHub Issues
- Contributing: See CONTRIBUTING.md for guidelines on how to contribute to MCGrad
If you use MCGrad in your research, please cite our paper.
```bibtex
@inproceedings{tax2026mcgrad,
  title={{MCGrad: Multicalibration at Web Scale}},
  author={Tax, Niek and Perini, Lorenzo and Linder, Fridolin and Haimovich, Daniel and Karamshuk, Dima and Okati, Nastaran and Vojnovic, Milan and Apostolopoulos, Pavlos Athanasios},
  booktitle={Proceedings of the 32nd ACM SIGKDD Conference on Knowledge Discovery and Data Mining V.1 (KDD 2026)},
  year={2026},
  doi={10.1145/3770854.3783954}
}
```

Some of our team's other work on multicalibration:
- A New Metric to Measure Multicalibration: Guy, I., Haimovich, D., Linder, F., Okati, N., Perini, L., Tax, N., & Tygert, M. (2025). Measuring multi-calibration. arXiv:2506.11251.
- Theoretical Results on the Value of Multicalibration: Baldeschi, R. C., Di Gregorio, S., Fioravanti, S., Fusco, F., Guy, I., Haimovich, D., Leonardi, S., et al. (2025). Multicalibration yields better matchings. arXiv:2511.11413.


