
MCGrad: Production-ready multicalibration for machine learning

CI · License: MIT · Python 3.10+ · Documentation


What is MCGrad?

MCGrad is a scalable and easy-to-use tool for multicalibration. It ensures your ML model predictions are well-calibrated not just globally (across all data), but also across virtually any segment defined by your features (e.g., by country, content type, or any combination).

Traditional calibration methods, like Isotonic Regression or Platt Scaling, only ensure global calibration—meaning predicted probabilities match observed outcomes on average across all data—but your model can still be systematically overconfident or underconfident for specific groups. MCGrad automatically identifies and corrects these hidden calibration gaps without requiring you to manually specify protected groups.

[Figure: Global calibration curve] A globally well-calibrated model: predictions match observed outcomes on average.

[Figure: Segment-level calibration curves] The same model shows hidden miscalibration when broken down by segment. MCGrad fixes this.
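This failure mode is easy to reproduce. The sketch below is a synthetic illustration (the segment names and rates are made up, and it does not use MCGrad itself): a model that predicts 0.5 for everyone is perfectly calibrated on average, yet overconfident on one segment and underconfident on the other.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
segment = rng.integers(0, 2, n)  # two roughly equal-sized segments

# The model predicts 0.5 for everyone, but the true positive rate
# is 0.3 in segment 0 and 0.7 in segment 1.
prediction = np.full(n, 0.5)
label = (rng.random(n) < np.where(segment == 0, 0.3, 0.7)).astype(int)

# Globally the model looks calibrated: mean prediction ~= base rate (~0.5).
print(f"global: pred={prediction.mean():.2f} obs={label.mean():.2f}")

# Per segment, the same model is badly miscalibrated.
for s in (0, 1):
    mask = segment == s
    print(f"segment {s}: pred={prediction[mask].mean():.2f} "
          f"obs={label[mask].mean():.2f}")
```

A global calibration curve averages the two errors away, which is exactly the gap between the two figures above.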

🌟 Key Features

  • Powerful Multicalibration — Calibrates across unlimited segments without pre-specification
  • Data Efficient — Matches the data efficiency of modern ML methods
  • Lightweight & Fast — Adds limited latency at training and inference time
  • Improved Performance — Likelihood-improving with significant PRAUC gains
  • Safe by Design — Cannot harm base model performance on training data

🏭 Production Proven

MCGrad has been deployed at Meta on hundreds of production models. See the research paper for detailed experimental results.

📦 Installation

Requirements: Python 3.10+

Stable release:

pip install mcgrad

Latest development version:

pip install git+https://github.com/facebookincubator/MCGrad.git

🚀 Quick Start

from mcgrad import methods
import numpy as np
import pandas as pd

# Prepare your data in a DataFrame
df = pd.DataFrame({
    'prediction': np.array([0.1, 0.3, 0.7, 0.9, 0.5, 0.2]),  # Your model's predictions
    'label': np.array([0, 0, 1, 1, 1, 0]),  # Ground truth labels
    'country': ['US', 'UK', 'US', 'UK', 'US', 'UK'],  # Categorical feature
    'content_type': ['photo', 'video', 'photo', 'video', 'photo', 'video'],  # Categorical feature
})

# Apply MCGrad
mcgrad = methods.MCGrad()
mcgrad.fit(
    df_train=df,
    prediction_column_name='prediction',
    label_column_name='label',
    categorical_feature_column_names=['country', 'content_type']
)

# Get calibrated predictions
calibrated_predictions = mcgrad.predict(
    df=df,
    prediction_column_name='prediction',
    categorical_feature_column_names=['country', 'content_type']
)
# Returns: numpy array of calibrated probabilities, e.g., [0.12, 0.28, 0.72, ...]
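To see what calibration buys you, it helps to audit per-segment gaps before and after. The helper below is not part of the mcgrad API; it is a plain pandas sketch that compares each segment's mean predicted probability with its observed positive rate, using the same toy data as above.

```python
import pandas as pd

def segment_calibration_gaps(df, prediction_col, label_col, segment_col):
    """Mean predicted probability vs. observed positive rate per segment.

    A large absolute 'gap' signals segment-level miscalibration, even
    when the global averages agree.
    """
    summary = df.groupby(segment_col).agg(
        mean_prediction=(prediction_col, "mean"),
        positive_rate=(label_col, "mean"),
        count=(label_col, "size"),
    )
    summary["gap"] = summary["mean_prediction"] - summary["positive_rate"]
    return summary

df = pd.DataFrame({
    "prediction": [0.1, 0.3, 0.7, 0.9, 0.5, 0.2],
    "label": [0, 0, 1, 1, 1, 0],
    "country": ["US", "UK", "US", "UK", "US", "UK"],
})
print(segment_calibration_gaps(df, "prediction", "label", "country"))
```

Running the same audit on the output of `mcgrad.predict` (e.g. by adding `calibrated_predictions` as a column) should show the per-segment gaps shrinking.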

📚 Documentation

💬 Community & Support

📖 Citation

If you use MCGrad in your research, please cite our paper:

@inproceedings{tax2026mcgrad,
  title={{MCGrad: Multicalibration at Web Scale}},
  author={Tax, Niek and Perini, Lorenzo and Linder, Fridolin and Haimovich, Daniel and Karamshuk, Dima and Okati, Nastaran and Vojnovic, Milan and Apostolopoulos, Pavlos Athanasios},
  booktitle={Proceedings of the 32nd ACM SIGKDD Conference on Knowledge Discovery and Data Mining V.1 (KDD 2026)},
  year={2026},
  doi={10.1145/3770854.3783954}
}

Related Publications

Some of our team's other work on multicalibration:

  • A New Metric to Measure Multicalibration: Guy, I., Haimovich, D., Linder, F., Okati, N., Perini, L., Tax, N., & Tygert, M. (2025). Measuring multi-calibration. arXiv:2506.11251.

  • Theoretical Results on the Value of Multicalibration: Baldeschi, R. C., Di Gregorio, S., Fioravanti, S., Fusco, F., Guy, I., Haimovich, D., Leonardi, S., et al. (2025). Multicalibration yields better matchings. arXiv:2511.11413.
