Libra R-CNN

Libra R-CNN: Towards Balanced Learning for Object Detection

Abstract

Compared with model architectures, the training process, which is also crucial to the success of detectors, has received relatively less attention in object detection. In this work, we carefully revisit the standard training practice of detectors and find that detection performance is often limited by imbalance during training, which generally occurs at three levels: sample level, feature level, and objective level. To mitigate the adverse effects caused thereby, we propose Libra R-CNN, a simple but effective framework towards balanced learning for object detection. It integrates three novel components: IoU-balanced sampling, balanced feature pyramid, and balanced L1 loss, for reducing the imbalance at the sample, feature, and objective levels, respectively. Benefiting from the overall balanced design, Libra R-CNN significantly improves detection performance. Without bells and whistles, it achieves 2.5 points and 2.0 points higher Average Precision (AP) than FPN Faster R-CNN and RetinaNet, respectively, on MS COCO.
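
As a concrete illustration of the objective-level rebalancing, below is a minimal NumPy sketch of the balanced L1 loss with the paper's default hyper-parameters (alpha=0.5, gamma=1.5, beta=1.0). It only sketches the formula; the implementation shipped with this repository may differ in details such as reduction and weighting.

```python
import numpy as np

def balanced_l1_loss(pred, target, alpha=0.5, gamma=1.5, beta=1.0):
    """Element-wise balanced L1 loss (sketch of the paper's formula)."""
    diff = np.abs(pred - target)
    # b is chosen so that the two branches meet smoothly at diff == beta:
    # alpha * log(b + 1) = gamma
    b = np.exp(gamma / alpha) - 1
    return np.where(
        diff < beta,
        alpha / b * (b * diff + 1) * np.log(b * diff / beta + 1) - alpha * diff,
        gamma * diff + gamma / b - alpha * beta,
    )
```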

Instance recognition has advanced rapidly along with the development of various deep convolutional neural networks. Compared to network architectures, the training process, which is also crucial to the success of detectors, has received relatively less attention. In this work, we carefully revisit the standard training practice of detectors and find that detection performance is often limited by imbalance during training, which generally occurs at three levels: sample level, feature level, and objective level. To mitigate the adverse effects caused thereby, we propose Libra R-CNN, a simple yet effective framework towards balanced learning for instance recognition. It integrates IoU-balanced sampling, balanced feature pyramid, and objective re-weighting, for reducing the imbalance at the sample, feature, and objective levels, respectively. Extensive experiments conducted on the MS COCO, LVIS and Pascal VOC datasets prove the effectiveness of the overall balanced design.
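
At the sample level, IoU-balanced sampling bins the negative proposals by their IoU with the ground truth and draws them evenly from each bin, so hard negatives are not drowned out by the far more numerous easy ones. The following is a rough NumPy sketch under assumed defaults (3 equal-width bins over the IoU range [0, 0.5), `neg_inds` and `neg_ious` as NumPy arrays); the sampler in this repository handles additional corner cases.

```python
import numpy as np

def iou_balanced_sample(neg_inds, neg_ious, num_expected, num_bins=3, max_iou=0.5):
    """Draw negatives evenly from equal-width IoU bins in [0, max_iou)."""
    rng = np.random.default_rng()
    edges = np.linspace(0.0, max_iou, num_bins + 1)
    per_bin = num_expected // num_bins
    picked = np.array([], dtype=neg_inds.dtype)
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = neg_inds[(neg_ious >= lo) & (neg_ious < hi)]
        if len(in_bin) > 0:
            take = rng.choice(in_bin, size=min(per_bin, len(in_bin)), replace=False)
            picked = np.concatenate([picked, take])
    # top up with random leftovers if some bins had too few candidates
    if len(picked) < num_expected:
        leftovers = np.setdiff1d(neg_inds, picked)
        extra = rng.choice(leftovers,
                           size=min(num_expected - len(picked), len(leftovers)),
                           replace=False)
        picked = np.concatenate([picked, extra])
    return picked
```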

Results and Models

The results on COCO 2017 val are shown in the table below. (Results on test-dev are usually slightly higher than those on val.)

| Architecture | Backbone        | Style   | Lr schd | Mem (GB) | Inf time (fps) | box AP | Config | Download     |
| :----------: | :-------------: | :-----: | :-----: | :------: | :------------: | :----: | :----: | :----------: |
| Faster R-CNN | R-50-FPN        | pytorch | 1x      | 4.6      | 19.0           | 38.3   | config | model \| log |
| Fast R-CNN   | R-50-FPN        | pytorch | 1x      |          |                |        |        |              |
| Faster R-CNN | R-101-FPN       | pytorch | 1x      | 6.5      | 14.4           | 40.1   | config | model \| log |
| Faster R-CNN | X-101-64x4d-FPN | pytorch | 1x      | 10.8     | 8.5            | 42.7   | config | model \| log |
| RetinaNet    | R-50-FPN        | pytorch | 1x      | 4.2      | 17.7           | 37.6   | config | model \| log |
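
For orientation, the pieces that distinguish a Libra R-CNN config from a plain FPN Faster R-CNN config are sketched below, using the component names found in this codebase (BFP, IoUBalancedNegSampler, BalancedL1Loss). The field values and nesting are illustrative and may differ across mmdetection versions; refer to the config files linked above for the exact settings.

```python
# Sketch of the Libra-specific pieces of an mmdetection config
# (illustrative values; see the linked config files for the exact settings).
model = dict(
    neck=[
        dict(type='FPN', in_channels=[256, 512, 1024, 2048],
             out_channels=256, num_outs=5),
        # balanced feature pyramid: refine the integrated FPN features
        dict(type='BFP', in_channels=256, num_levels=5,
             refine_level=2, refine_type='non_local'),
    ],
    roi_head=dict(
        bbox_head=dict(
            # balanced L1 loss: objective-level rebalancing
            loss_bbox=dict(type='BalancedL1Loss', alpha=0.5, gamma=1.5,
                           beta=1.0, loss_weight=1.0))),
    train_cfg=dict(
        rcnn=dict(
            sampler=dict(
                type='CombinedSampler', num=512, pos_fraction=0.25,
                add_gt_as_proposals=True,
                pos_sampler=dict(type='InstanceBalancedPosSampler'),
                # IoU-balanced sampling: sample-level rebalancing
                neg_sampler=dict(type='IoUBalancedNegSampler',
                                 floor_thr=-1, floor_fraction=0,
                                 num_bins=3)))))
```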

Citation

We provide config files to reproduce the results in the CVPR 2019 paper Libra R-CNN.

The extended version of Libra R-CNN has been accepted by IJCV.

@inproceedings{pang2019libra,
  title={Libra R-CNN: Towards Balanced Learning for Object Detection},
  author={Pang, Jiangmiao and Chen, Kai and Shi, Jianping and Feng, Huajun and Ouyang, Wanli and Lin, Dahua},
  booktitle={IEEE Conference on Computer Vision and Pattern Recognition},
  year={2019}
}

@article{pang2021towards,
  title={Towards Balanced Learning for Instance Recognition},
  author={Pang, Jiangmiao and Chen, Kai and Li, Qi and Xu, Zhihai and Feng, Huajun and Shi, Jianping and Ouyang, Wanli and Lin, Dahua},
  journal={International Journal of Computer Vision},
  volume={129},
  number={5},
  pages={1376--1393},
  year={2021},
  publisher={Springer}
}