[ECCV2022] FCAF3D: Fully Convolutional Anchor-Free 3D Object Detection

FCAF3D: Fully Convolutional Anchor-Free 3D Object Detection

News:

  • 🚀 June, 2023. We have added a ScanNet-pretrained S3DIS model and log, significantly pushing forward the state-of-the-art.
  • 🔥 February, 2023. Feel free to visit our new FCAF3D-based projects: TD3D for 3D instance segmentation and TR3D for real-time 3D object detection.
  • 🔥 August, 2022. FCAF3D is now fully supported in mmdetection3d.
  • 🔥 July, 2022. Our paper is accepted at ECCV 2022.
  • 🔥 March, 2022. We have updated the preprint, adding more comparisons with the fully convolutional GSDN baseline.
  • 🔥 December, 2021. FCAF3D is now state-of-the-art on paperswithcode on ScanNet, SUN RGB-D, and S3DIS.

This repository contains an implementation of FCAF3D, a 3D object detection method introduced in our paper:

FCAF3D: Fully Convolutional Anchor-Free 3D Object Detection
Danila Rukhovich, Anna Vorontsova, Anton Konushin
Samsung Research
https://arxiv.org/abs/2112.00322

Installation

For convenience, we provide a Dockerfile.

Alternatively, you can install all required packages manually. This implementation is based on the mmdetection3d framework. Please refer to the original installation guide getting_started.md, replacing open-mmlab/mmdetection3d with samsunglabs/fcaf3d. Also, MinkowskiEngine and rotated_iou should be installed with these commands.

Most of the FCAF3D-related code is located in the following files: detectors/single_stage_sparse.py, dense_heads/fcaf3d_neck_with_head.py, backbones/me_resnet.py.

Getting Started

Please see getting_started.md for basic usage examples. We follow the mmdetection3d data preparation protocol described in scannet, sunrgbd, and s3dis. The only difference is that we do not sample 50,000 points from each point cloud in SUN RGB-D, but instead use all points.
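In mmdetection3d terms, keeping all SUN RGB-D points corresponds to omitting the point-sampling transform from the data pipeline. A hypothetical sketch is shown below; the transform names (`LoadPointsFromFile`, `PointSample`, etc.) follow mmdetection3d conventions, but the exact pipeline and class list here are illustrative, so check the actual FCAF3D configs in this repo:

```python
# Hypothetical SUN RGB-D pipeline fragment (mmdetection3d-style config).
# A stock config would include dict(type='PointSample', num_points=50000)
# after loading; FCAF3D omits that step and feeds all points to the
# sparse-convolutional backbone.
train_pipeline = [
    dict(type='LoadPointsFromFile', coord_type='DEPTH',
         load_dim=6, use_dim=[0, 1, 2, 3, 4, 5]),
    dict(type='LoadAnnotations3D'),
    # no PointSample / IndoorPointSample step: all points are kept
    dict(type='DefaultFormatBundle3D', class_names=['bed', 'table', 'sofa']),
    dict(type='Collect3D', keys=['points', 'gt_bboxes_3d', 'gt_labels_3d']),
]
```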

Training

To start training, run dist_train with an FCAF3D config:

bash tools/dist_train.sh configs/fcaf3d/fcaf3d_scannet-3d-18class.py 2

Testing

To test a pre-trained model, run dist_test with an FCAF3D config:

bash tools/dist_test.sh configs/fcaf3d/fcaf3d_scannet-3d-18class.py \
    work_dirs/fcaf3d_scannet-3d-18class/latest.pth 2 --eval mAP

Visualization

Visualizations can be created with the test script. For better visualizations, you may set score_thr in the configs to 0.20:

python tools/test.py configs/fcaf3d/fcaf3d_scannet-3d-18class.py \
    work_dirs/fcaf3d_scannet-3d-18class/latest.pth --eval mAP --show \
    --show-dir work_dirs/fcaf3d_scannet-3d-18class
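The score threshold can also be raised in the config itself. A hypothetical fragment is given below; the `test_cfg`/`score_thr` key names follow mmdetection3d conventions, but the surrounding keys and their defaults vary per config, so verify against the actual config file:

```python
# Hypothetical config override: keep only detections scoring above 0.20
# when visualizing. Default configs typically use a much lower threshold
# so that mAP evaluation sees low-confidence boxes as well.
model = dict(
    test_cfg=dict(
        score_thr=0.20,
    )
)
```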

Models

The metrics are obtained in 5 training runs followed by 5 test runs. We report both the best and the average values (the latter are given in round brackets).
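This reporting convention can be illustrated with a short snippet; the per-run values below are hypothetical, chosen only to reproduce the format of the tables:

```python
# Hypothetical mAP@0.25 values from 5 ScanNet runs (illustrative numbers only).
runs = [71.5, 70.9, 70.3, 70.6, 70.2]

best = max(runs)                 # reported outside the brackets
average = sum(runs) / len(runs)  # reported in round brackets

print(f"{best:.1f} ({average:.1f})")  # -> 71.5 (70.7)
```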

For VoteNet and ImVoteNet, we provide the configs and checkpoints with our Mobius angle parametrization. For ImVoxelNet, please refer to the imvoxelnet repository, as it is not currently supported in mmdetection3d for indoor datasets. Inference speed (scenes per second) is measured on a single NVIDIA GTX 1080 Ti. Please note that the ScanNet-pretrained S3DIS model was actually trained in the original open-mmlab/mmdetection3d codebase, so it can only be run for inference in that repo.

FCAF3D

| Dataset | mAP@0.25 | mAP@0.5 | Download |
|---|---|---|---|
| ScanNet | 71.5 (70.7) | 57.3 (56.0) | model \| log \| config |
| SUN RGB-D | 64.2 (63.8) | 48.9 (48.2) | model \| log \| config |
| S3DIS | 66.7 (64.9) | 45.9 (43.8) | model \| log \| config |
| S3DIS (ScanNet-pretrained) | 75.7 (74.1) | 58.2 (56.1) | model \| log \| config |

Faster FCAF3D on ScanNet

| Backbone | Voxel size | mAP@0.25 | mAP@0.5 | Scenes per sec. | Download |
|---|---|---|---|---|---|
| HDResNet34 | 0.01 | 70.7 | 56.0 | 8.0 | see table above |
| HDResNet34:3 | 0.01 | 69.8 | 53.6 | 12.2 | model \| log \| config |
| HDResNet34:2 | 0.02 | 63.1 | 46.8 | 31.5 | model \| log \| config |

VoteNet on SUN RGB-D

| Source | mAP@0.25 | mAP@0.5 | Download |
|---|---|---|---|
| mmdetection3d | 59.1 | 35.8 | instruction |
| ours | 61.1 (60.5) | 40.4 (39.5) | model \| log \| config |

ImVoteNet on SUN RGB-D

| Source | mAP@0.25 | mAP@0.5 | Download |
|---|---|---|---|
| mmdetection3d | 64.0 | 37.8 | instruction |
| ours | 64.6 (64.1) | 40.8 (39.8) | model \| log \| config |

Comparison with state-of-the-art on ScanNet


Example Detections


Citation

If you find this work useful for your research, please cite our paper:

@inproceedings{rukhovich2022fcaf3d,
  title={FCAF3D: fully convolutional anchor-free 3D object detection},
  author={Rukhovich, Danila and Vorontsova, Anna and Konushin, Anton},
  booktitle={European Conference on Computer Vision},
  pages={477--493},
  year={2022},
  organization={Springer}
}