Self-supervised Image-specific Prototype Exploration for Weakly Supervised Semantic Segmentation (SIPE)

(Figure: SIPE framework overview)

The implementation of "Self-supervised Image-specific Prototype Exploration for Weakly Supervised Semantic Segmentation" by Qi Chen, Lingxiao Yang, Jianhuang Lai, and Xiaohua Xie, CVPR 2022.

Abstract

Weakly Supervised Semantic Segmentation (WSSS) based on image-level labels has attracted much attention due to its low annotation cost. Existing methods often rely on Class Activation Mapping (CAM), which measures the correlation between image pixels and the classifier weights. However, the classifier focuses only on discriminative regions and ignores other useful information in each image, resulting in incomplete localization maps. To address this issue, we propose Self-supervised Image-specific Prototype Exploration (SIPE), which consists of an Image-specific Prototype Exploration (IPE) module and a General-Specific Consistency (GSC) loss. Specifically, IPE tailors prototypes for every image to capture complete regions, forming our Image-Specific CAM (IS-CAM), which is realized by two sequential steps. In addition, GSC is proposed to enforce consistency between the general CAM and our specific IS-CAM, which further optimizes the feature representation and empowers the prototype exploration with a self-correction ability. Extensive experiments on the PASCAL VOC 2012 and MS COCO 2014 segmentation benchmarks show that SIPE achieves new state-of-the-art performance using only image-level labels.
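
The core computation can be sketched in a few lines of PyTorch: seed regions taken from the ordinary CAM select the features whose average becomes an image-specific prototype for each class, and the IS-CAM is the cosine similarity between every pixel feature and that prototype; the GSC term then penalizes disagreement between the general CAM and the IS-CAM. The snippet below is a minimal illustration of this idea, not the repository's implementation: the function names, the fixed seeding threshold, and the L1 form of the consistency loss are assumptions, and the paper's two sequential steps (seed locating and prototype modeling) are collapsed into a simple threshold followed by masked average pooling.

import torch
import torch.nn.functional as F

def image_specific_cam(features, cam, seed_threshold=0.5):
    # features: (B, D, H, W) backbone features
    # cam:      (B, C, H, W) class activation maps scaled to [0, 1]
    # returns:  (B, C, H, W) IS-CAM (similarity of each pixel to its class prototype)
    B, D, H, W = features.shape
    C = cam.shape[1]
    # Step 1: image-specific prototypes via masked average pooling over CAM seeds.
    seeds = (cam > seed_threshold).float()                  # (B, C, H, W)
    masked = features.unsqueeze(1) * seeds.unsqueeze(2)     # (B, C, D, H, W)
    area = seeds.sum(dim=(2, 3)).clamp(min=1).unsqueeze(2)  # (B, C, 1)
    prototypes = masked.sum(dim=(3, 4)) / area              # (B, C, D)
    # Step 2: IS-CAM as cosine similarity between pixel features and prototypes.
    f = F.normalize(features, dim=1).flatten(2)             # (B, D, H*W)
    p = F.normalize(prototypes, dim=2)                      # (B, C, D)
    return torch.bmm(p, f).view(B, C, H, W).clamp(min=0)

def gsc_loss(cam, is_cam):
    # General-Specific Consistency, written here as an L1 gap (assumed form).
    return (cam - is_cam).abs().mean()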

Environment

  • Python >= 3.6.6
  • PyTorch >= 1.6.0
  • Torchvision
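
A quick sanity check for the versions listed above (not part of the repository, just a convenience):

import sys
import torch
import torchvision

print("Python     :", sys.version.split()[0])     # expected >= 3.6.6
print("PyTorch    :", torch.__version__)          # expected >= 1.6.0
print("Torchvision:", torchvision.__version__)
print("CUDA available:", torch.cuda.is_available())  # a GPU is strongly recommended for training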

Usage

Step 1. Prepare Dataset
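
SIPE is trained on PASCAL VOC 2012 and MS COCO 2014. If you only need to fetch the raw PASCAL VOC 2012 data, torchvision can download and extract it as in the sketch below; the ./data target directory is an example and may differ from the layout run_voc.sh expects.

from torchvision.datasets import VOCSegmentation

# Downloads VOCtrainval_11-May-2012.tar and extracts it to ./data/VOCdevkit/VOC2012
VOCSegmentation(root="./data", year="2012", image_set="train", download=True)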

Step 2. Train SIPE

# PASCAL VOC 2012
bash run_voc.sh

# MS COCO 2014
bash run_coco.sh

Step 3. Train Fully Supervised Segmentation Models

To train fully supervised segmentation models on the generated pseudo labels, we refer to deeplab-pytorch and seamv1.

Results

Localization maps

Dataset          Model        mIoU (Train)  Weight    Training log
PASCAL VOC 2012  CVPR submit  58.65         Download  Logfile
PASCAL VOC 2012  This repo    58.88         Download  Logfile
MS COCO 2014     CVPR submit  34.41         Download  Logfile
MS COCO 2014     This repo    35.05         Download  Logfile

Segmentation maps

Dataset          Backbone      mIoU (Val)  mIoU (Test)  Weight
PASCAL VOC 2012  WideResNet38  68.2        69.5         Download
PASCAL VOC 2012  ResNet101     68.8        69.7         Download
MS COCO 2014     WideResNet38  43.6        -            Download
MS COCO 2014     ResNet101     40.6        -            Download

Citation

@InProceedings{Chen_2022_CVPR_SIPE,
    author    = {Chen, Qi and Yang, Lingxiao and Lai, Jian-Huang and Xie, Xiaohua},
    title     = {Self-Supervised Image-Specific Prototype Exploration for Weakly Supervised Semantic Segmentation},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2022},
    pages     = {4288-4298}
}
