This repository contains the implementation of experiments from the paper titled "Simplifying Random Forests' Probabilistic Forecasts" by Nils Koster (KIT, Broad Institute of MIT & Harvard) and Fabian Krüger (KIT).
You can find the paper here (alternatively, on arXiv here).
Since their introduction by Breiman, Random Forests (RFs) have proven to be useful for both classification and regression tasks. The RF prediction of a previously unseen observation can be represented as a weighted sum of all training sample observations. This nearest-neighbor-type representation is useful, among other things, for constructing forecast distributions (Meinshausen, 2006). In this paper, we consider simplifying RF-based forecast distributions by sparsifying them. That is, we focus on a small subset of nearest neighbors while setting the remaining weights to zero. This sparsification step greatly improves the interpretability of RF predictions. It can be applied to any forecasting task without re-training existing RF models. In empirical experiments, we document that the simplified predictions can be similar to or exceed the original ones in terms of forecasting performance. We explore the statistical sources of this finding via a stylized analytical model of RFs. The model suggests that simplification is particularly promising if the unknown true forecast distribution contains many small weights that are estimated imprecisely.
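The core idea above (keeping only the largest weights and setting the rest to zero) can be sketched in a few lines. Note that `sparsify_top_k` is a hypothetical helper written for illustration, not the package's actual implementation:

```python
import numpy as np

def sparsify_top_k(weights, k):
    """Keep only the k largest weights and renormalize to sum to one.

    Illustrative sketch of the Top-k sparsification idea; the repository's
    own implementation (in TopkRF/RF.py) may differ in details.
    """
    w = np.asarray(weights, dtype=float)
    if k >= w.size:
        return w / w.sum()
    # Indices of the k largest weights
    top = np.argpartition(w, -k)[-k:]
    sparse = np.zeros_like(w)
    sparse[top] = w[top]
    # Renormalize the surviving weights
    return sparse / sparse.sum()

# Example: only the two largest weights (0.4 and 0.3) survive
w = np.array([0.4, 0.05, 0.3, 0.05, 0.2])
w_sparse = sparsify_top_k(w, 2)
```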
To use Topk, you can either clone the repository:

```shell
git clone https://github.com/kosnil/simplify_rf_dist.git
cd simplify_rf_dist
```

or install it via pip directly from the repository. We recommend using a virtual environment to avoid conflicts with other packages. Here is an example using conda:
```shell
conda create -n simplify_rf python=3.10
conda activate simplify_rf
pip install git+https://github.com/kosnil/simplify_rf_dist.git
```

This repository contains the code used to generate the results in the paper. The code is written in Python and uses the following libraries (as specified in requirements.txt):

```
joblib==1.4.0
matplotlib==3.8.4
numba==0.59.1
numpy==1.26.4
pandas==2.2.3
scikit_learn==1.4.2
scipy==1.13.0
seaborn==0.13.2
tqdm==4.66.2
```
This repository contains the code to replicate the results in the paper (including training, tuning and evaluation) and to apply Topk to your own data.
The main code implementing the Topk method can be found in the RF.py file. The code is organized in a modular way, so you can easily adapt it to your own needs. The main class is RandomForestWeight, a wrapper around the RandomForestRegressor class from sklearn. The class contains methods to train the model, predict on the test set, and calculate the weights (independent of the choice of k).
A first starting point is the tutorial.ipynb notebook, which contains a minimal working example of how to use the code.
The basic workflow (assuming you installed the package via pip) is as follows:
```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_regression

# Set seed
SEED = 7531
np.random.seed(SEED)

# Load dataset (in this case, we create a synthetic dataset)
X, y = make_regression(n_samples=5000, n_features=20, noise=1, random_state=SEED)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=SEED)
```
Next, create a Topk RF and train it. The RandomForestWeight class is based on the RandomForestRegressor class from sklearn:
```python
from TopkRF.RF import RandomForestWeight

# Define hyperparameters for the Topk RF
hyperparams = dict(
    n_estimators=1000,
    random_state=SEED,
    n_jobs=-1,
    max_features="sqrt",
    min_samples_split=5,
)

rf = RandomForestWeight(hyperparams=hyperparams)
rf.fit(X_train, y_train)

# Predict the test set
k = 5
y_hat_k, w_k = rf.weight_predict(X_test, top_k=k, return_weights=True)
```

For efficiency, the weights are calculated in parallel using numba.
As these calculations take place in-memory, this can lead to memory issues for larger datasets. To avoid this, we recommend using the sparse versions of the functions. We refer to tutorial.ipynb for details.
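To illustrate why a sparse representation helps: after Top-k sparsification, each test observation carries at most k nonzero weights, so the (n_test, n_train) weight matrix can be stored in a compressed sparse format instead of a dense array. The sketch below uses scipy.sparse for illustration only; see tutorial.ipynb for the package's own sparse API, and note that all variable names here are made up for the example:

```python
import numpy as np
from scipy import sparse

# Toy dimensions: 3 test points, 10 training points, k = 2 nonzeros per row
n_test, n_train, k = 3, 10, 2
rng = np.random.default_rng(0)

rows, cols, vals = [], [], []
for i in range(n_test):
    # Pick k distinct training indices and give them normalized weights
    idx = rng.choice(n_train, size=k, replace=False)
    w = rng.random(k)
    w /= w.sum()
    rows.extend([i] * k)
    cols.extend(idx)
    vals.extend(w)

# CSR format stores only the n_test * k nonzero entries
W = sparse.csr_matrix((vals, (rows, cols)), shape=(n_test, n_train))

# Point forecasts as weighted means of the training responses
y_train = rng.normal(size=n_train)
y_hat = W @ y_train
```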
Various tools for evaluating the forecasts are located in utils/score_utils.py, including the scoring rule implementations and the evaluation loopers.
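As a point of reference for what such scoring rules compute: the continuous ranked probability score (CRPS) of a weighted empirical forecast distribution has a closed kernel form, CRPS(F, y) = E|X − y| − 0.5 E|X − X'|. The sketch below is a generic illustration of that formula, not the implementation in utils/score_utils.py:

```python
import numpy as np

def crps_weighted(x, w, y):
    """CRPS of a weighted empirical forecast distribution at observation y.

    Uses the kernel form CRPS = E|X - y| - 0.5 * E|X - X'|, where X, X' are
    independent draws from the forecast distribution with atoms x and
    weights w. Illustrative only; see utils/score_utils.py for the
    repository's implementations.
    """
    x = np.asarray(x, dtype=float)
    w = np.asarray(w, dtype=float)
    w = w / w.sum()  # ensure the weights form a probability distribution
    term1 = np.sum(w * np.abs(x - y))
    term2 = 0.5 * np.sum(w[:, None] * w[None, :] * np.abs(x[:, None] - x[None, :]))
    return term1 - term2
```

A point mass exactly at the observation scores zero, and lower values indicate better forecasts.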
utils/sparse_utils.py contains a few helper functions needed to process sparse weight matrices.
Similarly, utils/plotting_helpers.py contains functions that are helpful for plotting the results.
To replicate the results, figures, and tables shown in the paper, check out the scripts rf_restrict_k_openml.py, rf_hp_tuning.py, tuned_score_comparison.py, and rf_soep.py. These scripts contain the code to train, tune, and evaluate models on the OpenML datasets as well as the SOEP dataset. When run, results are stored in the results/ directory; due to their size, we do not include them here.
The notebook Theoretical Example/Toyexample.ipynb contains the simulations and code snippets that generate the plots for Section 4 of the paper, which are stored in Theoretical Example/plots/.
```
simplify_rf_dist/
├── README.md
├── requirements.txt
├── TopkRF/
│   ├── RF.py
│   ├── utils/
│   │   ├── plotting_helpers.py
│   │   ├── score_utils.py
│   │   ├── sparse_utils.py
├── data/
│   ├── soep_prep/
│   │   ├── prepare_soep_data.R
│   └── ...
├── results/
├── Plots/
├── weight_storage/
├── tutorial.ipynb
├── rf_hp_tuning.py
├── rf_restrict_k_openml.py
├── rf_soep.py
├── tuned_score_comparison.py
├── Theoretical Example/
│   ├── Toyexample.ipynb
│   └── ...
└── LICENSE
```