
LumiNet: Perception-Driven Knowledge Distillation via Statistical Logit Calibration

An embarrassingly simple knowledge distillation method. The code is built on mdistiller.


Framework & Performance

(Framework overview figure)
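For intuition, here is a minimal sketch of the core idea: teacher and student logits are recalibrated with batch statistics before the usual KD divergence is applied. We assume a per-class standardization over the batch as the calibration step; the function names (`perception`, `luminet_kd_loss`) and the temperature value are illustrative, not this repository's API — see the paper for the exact formulation.

```python
import torch
import torch.nn.functional as F

def perception(logits, eps=1e-6):
    """Standardize each class logit across the batch (zero mean, unit std).
    A sketch of the statistical calibration idea, not the exact paper formula."""
    mu = logits.mean(dim=0, keepdim=True)
    sigma = logits.std(dim=0, keepdim=True)
    return (logits - mu) / (sigma + eps)

def luminet_kd_loss(student_logits, teacher_logits, T=4.0):
    """KL divergence between softened, calibrated teacher and student logits."""
    s = F.log_softmax(perception(student_logits) / T, dim=1)
    t = F.softmax(perception(teacher_logits) / T, dim=1)
    return F.kl_div(s, t, reduction="batchmean") * (T * T)

# Toy usage: a batch of 8 samples with 100 classes (CIFAR-100).
torch.manual_seed(0)
stu = torch.randn(8, 100)
tea = torch.randn(8, 100)
loss = luminet_kd_loss(stu, tea)
```

Standardizing each class logit across the batch removes teacher-specific scale and shift, so the student matches the relative structure of the teacher's predictions rather than their raw magnitudes.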

CIFAR-100 Benchmark Results (same architecture), top-1 accuracy (%):

| Teacher | ResNet56 | ResNet110 | ResNet32x4 | WRN-40-2 | WRN-40-2 | VGG13 |
|---------|----------|-----------|------------|----------|----------|-------|
| Student | ResNet20 | ResNet32  | ResNet8x4  | WRN-16-2 | WRN-40-1 | VGG8  |
| KD      | 70.66    | 73.08     | 73.33      | 74.92    | 73.54    | 72.98 |
| LumiNet | 72.29    | 74.20     | 77.50      | 76.38    | 75.12    | 74.94 |
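As a quick sanity check of the same-architecture results, the per-pair gains of LumiNet over vanilla KD can be recomputed from the reported numbers (pure arithmetic on the table above, no repository code):

```python
# Gains of LumiNet over vanilla KD on the same-architecture CIFAR-100 pairs,
# numbers taken from the table above.
kd      = [70.66, 73.08, 73.33, 74.92, 73.54, 72.98]
luminet = [72.29, 74.20, 77.50, 76.38, 75.12, 74.94]

gains = [round(l - k, 2) for l, k in zip(luminet, kd)]
avg_gain = round(sum(gains) / len(gains), 2)
print(gains)     # [1.63, 1.12, 4.17, 1.46, 1.58, 1.96]
print(avg_gain)  # 1.99
```

The largest gain (ResNet32x4 to ResNet8x4) is over 4 points, with an average improvement of about 2 points across the six pairs.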

CIFAR-100 Benchmark Results (heterogeneous architectures), top-1 accuracy (%):

| Teacher | ResNet32x4    | WRN-40-2      | VGG13        | ResNet50     | ResNet32x4    |
|---------|---------------|---------------|--------------|--------------|---------------|
| Student | ShuffleNet-V1 | ShuffleNet-V1 | MobileNet-V2 | MobileNet-V2 | ShuffleNet-V2 |
| KD      | 74.07         | 74.83         | 67.37        | 67.35        | 74.45         |
| LumiNet | 76.66         | 76.95         | 70.50        | 70.97        | 77.55         |

ImageNet Benchmark Results, top-1 accuracy (%):

| Teacher | ResNet34 | ResNet50     |
|---------|----------|--------------|
| Student | ResNet18 | MobileNet-V1 |
| KD      | 71.03    | 70.50        |
| LumiNet | 72.16    | 72.55        |

Installation

Supported Environments:

  • Python version 3.6
  • PyTorch version 1.9.0
  • torchvision version 0.10.0

Install the package:

```shell
sudo pip3 install -r requirements.txt
sudo python3 setup.py develop
```

Getting Started

1. Wandb Integration for Logging

  • Register on Wandb at https://wandb.ai.
  • To opt-out of Wandb logging, set CFG.LOG.WANDB to False in mdistiller/engine/cfg.py.

2. Evaluation

  • Evaluate the performance of provided or custom-trained models.

  • Download our models from this link and save the checkpoints in ./download_ckpts.

  • For ImageNet testing, download the dataset from https://image-net.org/ and place it in ./data/imagenet.

```shell
# Evaluate teachers
python3 tools/eval.py -m resnet32x4             # resnet32x4 on CIFAR-100
python3 tools/eval.py -m ResNet34 -d imagenet   # ResNet34 on ImageNet

# Evaluate students
python3 tools/eval.py -m resnet8x4 -c download_ckpts/luminet_resnet8x4                 # LumiNet resnet8x4 on CIFAR-100
python3 tools/eval.py -m MobileNetV1 -c download_ckpts/imgnet_luminet_mv1 -d imagenet  # LumiNet MobileNetV1 on ImageNet
python3 tools/eval.py -m model_name -c output/your_exp/student_best                    # your own checkpoints
```

3. Training on CIFAR-100

  • Download cifar_teachers.tar from https://github.com/megvii-research/mdistiller/releases/tag/checkpoints and untar it to ./download_ckpts via `tar xvf cifar_teachers.tar`.

```shell
# for instance, our LumiNet method
python3 tools/train.py --cfg configs/cifar100/luminet/res32x4_res8x4.yaml

# you can also change settings at the command line
python3 tools/train.py --cfg configs/cifar100/luminet/res32x4_res8x4.yaml SOLVER.BATCH_SIZE 128 SOLVER.LR 0.1
```

4. Training on ImageNet

  • Download the dataset from https://image-net.org/ and place it in ./data/imagenet.

```shell
# for instance, our LumiNet method
python3 tools/train.py --cfg configs/imagenet/r34_r18/luminet.yaml
```

5. Training on MS-COCO (to be released soon)

6. Extension: Visualizations

Citation

If this repo is helpful for your research, please consider citing the paper:

```
@article{hossain2025luminet,
  title={LumiNet: Perception-Driven Knowledge Distillation via Statistical Logit Calibration},
  author={Md. Ismail Hossain and M M Lutfe Elahi and Sameera Ramasinghe and Ali Cheraghian and Fuad Rahman and Nabeel Mohammed and Shafin Rahman},
  journal={Transactions on Machine Learning Research},
  issn={2835-8856},
  year={2025},
  url={https://openreview.net/forum?id=3rU1lp9w2l}
}
```

Acknowledgement

  • We sincerely thank the contributors of mdistiller for their dedicated efforts and significant contributions.

About

The official (TMLR) implementation of LumiNet: Perception-Driven Knowledge Distillation via Statistical Logit Calibration
