2DPASS


This repository is for 2DPASS, introduced in the following paper:

Xu Yan*, Jiantao Gao*, Chaoda Zheng*, Chao Zheng, Ruimao Zhang, Shuguang Cui, Zhen Li*, "2DPASS: 2D Priors Assisted Semantic Segmentation on LiDAR Point Clouds", ECCV 2022 [arxiv].

If you find our work useful in your research, please consider citing:

@inproceedings{yan20222dpass,
  title={2dpass: 2d priors assisted semantic segmentation on lidar point clouds},
  author={Yan, Xu and Gao, Jiantao and Zheng, Chaoda and Zheng, Chao and Zhang, Ruimao and Cui, Shuguang and Li, Zhen},
  booktitle={European Conference on Computer Vision},
  pages={677--695},
  year={2022},
  organization={Springer}
}

@inproceedings{yan2022let,
  title={Let Images Give You More: Point Cloud Cross-Modal Training for Shape Analysis},
  author={Xu Yan and Heshen Zhan and Chaoda Zheng and Jiantao Gao and Ruimao Zhang and Shuguang Cui and Zhen Li},
  booktitle={NeurIPS},
  year={2022}
}

@article{yan2023benchmarking,
  title={Benchmarking the Robustness of LiDAR Semantic Segmentation Models},
  author={Yan, Xu and Zheng, Chaoda and Li, Zhen and Cui, Shuguang and Dai, Dengxin},
  journal={arXiv preprint arXiv:2301.00970},
  year={2023}
}

News

  • 2023-04-01 We merge MinkowskiNet and the official SPVCNN models from SPVNAS into our codebase. You can find these models in config/. We rename our baseline model from spvcnn.py to baseline.py.
  • 2023-03-31 We provide code for the robustness evaluation on SemanticKITTI-C.
  • 2023-03-27 We release a model with higher performance on SemanticKITTI and code for naive instance augmentation.
  • 2023-02-25 We release a new robustness benchmark for LiDAR semantic segmentation at SemanticKITTI-C. You are welcome to test your models!

  • 2022-10-11 Our new work on cross-modal knowledge distillation is accepted at NeurIPS 2022 πŸ˜„ paper / code.
  • 2022-09-20 We release code for SemanticKITTI single-scan and NuScenes πŸš€!
  • 2022-07-03 2DPASS is accepted at ECCV 2022 πŸ”₯!
  • 2022-03-08 We achieve 1st place on both the single-scan and multi-scan tracks of SemanticKITTI, and 3rd place on NuScenes-lidarseg πŸ”₯!

Installation

Requirements

Data Preparation

SemanticKITTI

Please download the files from the SemanticKITTI website and, additionally, the color data from the KITTI Odometry website. Extract everything into the same folder.

./dataset/
β”œβ”€β”€ ...
└── SemanticKitti/
    └── sequences/
        β”œβ”€β”€ 00/
        β”‚   β”œβ”€β”€ velodyne/
        β”‚   β”‚   β”œβ”€β”€ 000000.bin
        β”‚   β”‚   β”œβ”€β”€ 000001.bin
        β”‚   β”‚   └── ...
        β”‚   β”œβ”€β”€ labels/
        β”‚   β”‚   β”œβ”€β”€ 000000.label
        β”‚   β”‚   β”œβ”€β”€ 000001.label
        β”‚   β”‚   └── ...
        β”‚   β”œβ”€β”€ image_2/
        β”‚   β”‚   β”œβ”€β”€ 000000.png
        β”‚   β”‚   β”œβ”€β”€ 000001.png
        β”‚   β”‚   └── ...
        β”‚   └── calib.txt
        β”œβ”€β”€ 08/ # for validation
        β”œβ”€β”€ 11/ # 11-21 for testing
        └── 21/
            └── ...
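
The image_2 folders and calib.txt files are needed because 2DPASS uses point-to-pixel correspondences between the LiDAR scans and the camera images during training. The snippet below is a minimal, illustrative sketch (not the repository's dataloader) of how a scan, its labels, and camera 2 relate; the sequence folder and frame index are only placeholders.

import numpy as np

seq_dir = "./dataset/SemanticKitti/sequences/00"  # placeholder sequence folder

# LiDAR scan: N x 4 float32 values (x, y, z, reflectance)
points = np.fromfile(f"{seq_dir}/velodyne/000000.bin", dtype=np.float32).reshape(-1, 4)

# Labels: one uint32 per point; lower 16 bits = semantic class, upper 16 bits = instance id
raw = np.fromfile(f"{seq_dir}/labels/000000.label", dtype=np.uint32)
semantic = raw & 0xFFFF
instance = raw >> 16

# calib.txt provides P2 (camera-2 projection) and Tr (velodyne -> camera frame);
# projecting points into image_2 yields the point-to-pixel pairs behind the 2D priors.
calib = {}
with open(f"{seq_dir}/calib.txt") as f:
    for line in f:
        if ":" not in line:
            continue
        key, vals = line.split(":", 1)
        calib[key.strip()] = np.array([float(v) for v in vals.split()])

P2 = calib["P2"].reshape(3, 4)
Tr = np.vstack([calib["Tr"].reshape(3, 4), [0.0, 0.0, 0.0, 1.0]])

xyz1 = np.hstack([points[:, :3], np.ones((len(points), 1))])  # homogeneous coordinates
cam = Tr @ xyz1.T                                             # velodyne -> camera frame
uvw = P2 @ cam                                                # project into image_2
front = uvw[2] > 0                                            # keep points in front of the camera
uv = (uvw[:2, front] / uvw[2, front]).T                       # (u, v) pixel coordinates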

NuScenes

Please download the Full dataset (v1.0) together with the nuScenes-lidarseg extension from the NuScenes website and extract it.

./dataset/
β”œβ”€β”€ ...
└── nuscenes/
    β”œβ”€β”€ v1.0-trainval/
    β”œβ”€β”€ v1.0-test/
    β”œβ”€β”€ samples/
    β”œβ”€β”€ sweeps/
    β”œβ”€β”€ maps/
    └── lidarseg/
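
If you want to sanity-check the extracted data, the snippet below is a small sketch assuming the standard nuscenes-devkit (pip install nuscenes-devkit) is installed; it loads one LIDAR_TOP keyframe and its lidarseg labels and verifies that the per-point counts match. It is not part of this repository's code.

import os
import numpy as np
from nuscenes import NuScenes

nusc = NuScenes(version="v1.0-trainval", dataroot="./dataset/nuscenes", verbose=True)

sample = nusc.sample[0]
lidar_token = sample["data"]["LIDAR_TOP"]

# LIDAR_TOP scans are stored as float32 with 5 values per point (x, y, z, intensity, ring)
lidar_path = os.path.join(nusc.dataroot, nusc.get("sample_data", lidar_token)["filename"])
points = np.fromfile(lidar_path, dtype=np.float32).reshape(-1, 5)

# lidarseg labels are one uint8 class index per point
label_path = os.path.join(nusc.dataroot, nusc.get("lidarseg", lidar_token)["filename"])
labels = np.fromfile(label_path, dtype=np.uint8)

assert len(points) == len(labels), "point and label counts should match"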

Training

SemanticKITTI

You can run the training with

cd <root dir of this repo>
python main.py --log_dir 2DPASS_semkitti --config config/2DPASS-semantickitti.yaml --gpu 0

The output will be written to logs/SemanticKITTI/2DPASS_semkitti by default.

NuScenes

cd <root dir of this repo>
python main.py --log_dir 2DPASS_nusc --config config/2DPASS-nuscenese.yaml --gpu 0 1 2 3

Vanilla Training without 2DPASS

We take SemanticKITTI as an example.

cd <root dir of this repo>
python main.py --log_dir baseline_semkitti --config config/2DPASS-semantickitti.yaml --gpu 0 --baseline_only

Testing

You can run the testing with

cd <root dir of this repo>
python main.py --config config/2DPASS-semantickitti.yaml --gpu 0 --test --num_vote 12 --checkpoint <dir for the pytorch checkpoint>

Here, num_vote is the number of views for test-time augmentation (TTA). We set this value to 12 by default (on a Tesla V100 GPU); if you use a GPU with less memory, you can choose a smaller value. num_vote=1 means no TTA is used and causes a performance drop of roughly 2%.
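
For intuition, the sketch below shows what voting-style TTA does in general: run the network on several augmented copies of a scan (e.g. rotations and flips) and average the per-point scores. It is only an illustration; the augmentation set and averaging used by num_vote in this repository may differ, and model_fn is a hypothetical stand-in for the network's forward pass.

import numpy as np

def rotate_z(points, angle):
    # Rotate the x/y coordinates of an (N, 3) point array around the z axis.
    c, s = np.cos(angle), np.sin(angle)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return points @ rot.T

def predict_with_tta(model_fn, points, num_vote=12):
    # Average per-point class scores over num_vote augmented copies of the scan.
    # model_fn: hypothetical callable mapping (N, 3) points to (N, num_classes) scores.
    # With num_vote=1 this reduces to a single, non-augmented forward pass.
    votes = []
    for i in range(num_vote):
        aug = rotate_z(points, 2.0 * np.pi * i / num_vote)
        if i % 2 == 1:          # also mirror every other copy along the y axis
            aug = aug * np.array([1.0, -1.0, 1.0])
        votes.append(model_fn(aug))
    return np.mean(votes, axis=0).argmax(axis=1)  # final per-point class labels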

Robustness Evaluation

Please download all subsets of SemanticKITTI-C from this link and extract them.

./dataset/
β”œβ”€β”€ ...
└── SemanticKitti/
    β”œβ”€β”€ sequences/
    └── SemanticKITTI-C/
        β”œβ”€β”€ clean_data/
        β”œβ”€β”€ dense_16beam/
        β”‚   β”œβ”€β”€ velodyne/
        β”‚   β”‚   β”œβ”€β”€ 000000.bin
        β”‚   β”‚   β”œβ”€β”€ 000001.bin
        β”‚   β”‚   └── ...
        β”‚   └── labels/
        β”‚       β”œβ”€β”€ 000000.label
        β”‚       β”œβ”€β”€ 000001.label
        β”‚       └── ...
        └── ...

You can run the robustness evaluation with

cd <root dir of this repo>
python robust_test.py --config config/2DPASS-semantickitti.yaml --gpu 0  --num_vote 12 --checkpoint <dir for the pytorch checkpoint>

Model Zoo

You can download the models with the scores reported below from this Google Drive folder.

SemanticKITTI

Model (validation)            mIoU (vanilla)   mIoU (TTA)   Parameters
MinkowskiNet                  65.1%            67.1%        21.7M
SPVCNN                        65.9%            67.8%        21.8M
2DPASS (4scale-64dimension)   68.7%            70.0%        1.9M
2DPASS (6scale-256dimension)  70.7%            72.0%        45.6M

Here, we fine-tune the 2DPASS models on SemanticKITTI for more epochs and thus obtain the higher mIoU. If you train for 64 epochs, you should obtain about 66%/69% mIoU for vanilla inference and 69%/71% after TTA.

NuScenes

Model (validation)            mIoU (vanilla)   mIoU (TTA)   Parameters
MinkowskiNet                  74.3%            76.0%        21.7M
SPVCNN                        74.9%            76.9%        21.8M
2DPASS (6scale-128dimension)  76.7%            79.6%        11.5M
2DPASS (6scale-256dimension)  78.0%            80.5%        45.6M

Note that the results on the benchmarks (test sets) are obtained by additionally training on the validation set and by using instance-level augmentation.

Acknowledgements

Our code is built on SPVNAS, Cylinder3D, xMUDA, and SPCONV.

License

This repository is released under the MIT License (see the LICENSE file for details).