
Solving ImageNet: a Unified Scheme for Training any Backbone to Top Results

Official PyTorch Implementation
Paper | Model Zoo | Speed-Accuracy Comparisons

Tal Ridnik, Hussam Lawen, Emanuel Ben-Baruch, Asaf Noy
DAMO Academy, Alibaba Group

Abstract

ImageNet serves as the primary dataset for evaluating the quality of computer-vision models. The common practice today is to train each architecture with a tailor-made scheme, designed and tuned by an expert. In this paper, we present a unified scheme for training any backbone on ImageNet. The scheme, named USI (Unified Scheme for ImageNet), is based on knowledge distillation and modern tricks. It requires no adjustments or hyper-parameter tuning between different models, and is efficient in terms of training time. We test USI on a wide variety of architectures, including CNNs, Transformers, mobile-oriented and MLP-only models. On all models tested, USI outperforms previous state-of-the-art results. Hence, we are able to transform training on ImageNet from an expert-oriented task into an automatic, seamless routine. Since USI accepts any backbone and trains it to top results, it also enables methodical comparisons, identifying the most efficient backbones along the speed-accuracy Pareto curve.
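
USI's core ingredient is knowledge distillation from a strong teacher. As a rough illustration only (not the exact loss, temperature, or weighting used by USI, which are defined in the paper and in train.py), a standard soft-label distillation loss looks like this:

import torch.nn.functional as F

def soft_distillation_loss(student_logits, teacher_logits, temperature=1.0):
    """Generic KL-divergence distillation loss (illustration only).

    The actual KD loss used by USI is implemented in train.py; this is
    just the standard soft-label formulation for reference.
    """
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    # Scale by T^2 so gradient magnitudes stay comparable across temperatures
    # (the usual convention for soft-label distillation).
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature ** 2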

11/1/2023 Update

Added tests auto-generated by the CodiumAI tool

How to Train on ImageNet with the USI Scheme

The proposed USI scheme does not require hyper-parameter tuning. The base training configuration works well for any backbone. All the results presented in the paper are fully reproducible.

First, download the teacher model weights from here.
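
If you want to sanity-check the download before launching training, one option (assuming the file is a standard PyTorch checkpoint saved with torch.save) is:

import torch

# Sanity check: confirm the downloaded teacher checkpoint loads.
# The file name below matches the --kd_model_path used in the example command.
ckpt = torch.load('./tresnet_l_v2_83_9.pth', map_location='cpu')

# The checkpoint may be a raw state_dict or wrapped in a dict
# (e.g. under a 'state_dict' or 'model' key), depending on how it was saved.
state_dict = ckpt.get('state_dict', ckpt) if isinstance(ckpt, dict) else ckpt
print(f'loaded checkpoint with {len(state_dict)} top-level entries')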

Example code for training a ResNet50 model with USI:

python3 -u -m torch.distributed.launch --nproc_per_node=8 --nnodes=1 --node_rank=0 train.py \
/mnt/Imagenet/ \
--model=resnet50 \
--kd_model_name=tresnet_l_v2 \
--kd_model_path=./tresnet_l_v2_83_9.pth

Some additional degrees of freedom that might be useful:

  • Adjusting the batch size (default: 128): --batch-size=...
  • Training for more epochs (default: 300): --epochs=...

Acknowledgements

The training code is based on the excellent timm repository. Also, thanks to the EdgeNeXt authors for sharing their model.

Citation

@misc{https://doi.org/10.48550/arxiv.2204.03475,
  doi = {10.48550/ARXIV.2204.03475},  
  url = {https://arxiv.org/abs/2204.03475},  
  author = {Ridnik, Tal and Lawen, Hussam and Ben-Baruch, Emanuel and Noy, Asaf},  
  keywords = {Computer Vision and Pattern Recognition (cs.CV), Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences},  
  title = {Solving ImageNet: a Unified Scheme for Training any Backbone to Top Results},  
  publisher = {arXiv},  
  year = {2022},  
}  
