
SCA-CNN

Source code for the paper: SCA-CNN: Spatial and Channel-wise Attention in Convolutional Networks for Image Captioning

This code is based on arctic-captions and arctic-capgen-vid.

This code covers only the two-layered attention model on the ResNet-152 network for the MS COCO dataset. Other networks (e.g. VGG-19) or datasets (Flickr30k/Flickr8k) can be used with minor modifications.

Dependencies

  • The Python library Theano.

  • Other Python package dependencies such as numpy/scipy, skimage, opencv, sklearn, and hdf5, which can be installed individually with pip, or all at once by running

    $ pip install -r requirements.txt
    
  • Caffe for image CNN feature extraction. Install Caffe and build the pycaffe interface in order to extract image CNN features (a feature-extraction sketch follows this list).

  • The official COCO evaluation scripts, coco-caption, for evaluating results. Install them by simply adding the repository to your $PYTHONPATH (a usage sketch follows the training step below).
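As a rough illustration of how pycaffe is typically used to extract the convolutional feature maps that SCA-CNN attends over, here is a minimal sketch; the file names, the blob name res5c_branch2b, and the mean values are assumptions based on the publicly released ResNet-152 deploy prototxt, not code from this repository:

    import numpy as np
    import caffe

    # Placeholder paths -- use your own ResNet-152 prototxt and caffemodel.
    deploy = 'ResNet-152-deploy.prototxt'
    model = 'ResNet-152-model.caffemodel'

    caffe.set_mode_gpu()            # or caffe.set_mode_cpu()
    net = caffe.Net(deploy, model, caffe.TEST)

    # Standard pycaffe preprocessing: HWC -> CHW, RGB -> BGR, [0,1] -> [0,255], mean subtraction.
    transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
    transformer.set_transpose('data', (2, 0, 1))
    transformer.set_channel_swap('data', (2, 1, 0))
    transformer.set_raw_scale('data', 255)
    transformer.set_mean('data', np.array([104.0, 117.0, 123.0]))  # assumed BGR mean pixel

    img = caffe.io.load_image('example.jpg')   # placeholder image path
    net.blobs['data'].data[...] = transformer.preprocess('data', img)
    net.forward()

    # Channels x height x width feature map of an intermediate residual block;
    # the blob name is assumed from the public ResNet-152 prototxt.
    feat = net.blobs['res5c_branch2b'].data[0].copy()
    print(feat.shape)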

Getting Started

  1. Get the code. Clone the repository ($ git clone https://github.com/zjuchenlong/sca-cnn.cvpr17) and install the dependencies listed above.

  2. Save the pretrained CNN weights. Save the ResNet-152 weights pretrained on ImageNet. Before running the code, set the variables deploy and model in save_resnet_weight.py to your own paths (a placeholder sketch follows the commands below). Then run:

$ cd cnn
$ python save_resnet_weight.py
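For orientation, this step can be pictured as the following minimal sketch (placeholder paths and output file name; not the repository's actual save_resnet_weight.py): pycaffe loads the Caffe model and its parameters are dumped as NumPy arrays.

    import numpy as np
    import caffe

    # Placeholder paths -- these correspond to the `deploy` and `model`
    # variables that save_resnet_weight.py asks you to set.
    deploy = '/path/to/ResNet-152-deploy.prototxt'
    model = '/path/to/ResNet-152-model.caffemodel'

    net = caffe.Net(deploy, model, caffe.TEST)

    # Dump every layer's parameter blobs (index 0 = weights, 1 = bias if present).
    weights = {}
    for layer_name, params in net.params.items():
        for i, blob in enumerate(params):
            weights['%s_%d' % (layer_name, i)] = blob.data.copy()

    np.savez('resnet152_weights.npz', **weights)  # hypothetical output file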
  3. Preprocess the dataset. For caption preprocessing, we directly use the processed JSON blob from neuraltalk. Similar to step 2, set the PATH in cnn_until.py and make_coco.py to your own install path (a peek at the JSON structure follows the commands below). Then run:
$ cd data
$ python make_coco.py
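For reference, the neuraltalk-style JSON blob groups each image with its reference captions; the file name and field names below follow the public neuraltalk release and are assumptions here, shown only to make the preprocessing step concrete:

    import json

    # Placeholder file name for the neuraltalk-style MS COCO blob.
    with open('dataset_coco.json') as f:
        dataset = json.load(f)

    img = dataset['images'][0]
    print(img['filename'], img['split'])             # image file and train/val/test split
    print([s['raw'] for s in img['sentences'][:2]])  # first two reference captions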
  4. Train the model. The results are saved in the directory exp.
$ THEANO_FLAGS=mode=FAST_RUN,device=gpu,floatX=float32 python sca_resnet_branch2b.py
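Generated captions can then be scored with the coco-caption scripts listed under Dependencies. A minimal sketch of the standard evaluation API follows; the annotation and result file names are placeholders, and the result file is assumed to be a JSON list of {"image_id": ..., "caption": ...} entries:

    from pycocotools.coco import COCO
    from pycocoevalcap.eval import COCOEvalCap

    # Placeholder paths -- MS COCO ground-truth captions and your generated captions.
    annotation_file = 'annotations/captions_val2014.json'
    results_file = 'exp/generated_captions.json'

    coco = COCO(annotation_file)
    coco_res = coco.loadRes(results_file)

    coco_eval = COCOEvalCap(coco, coco_res)
    coco_eval.params['image_id'] = coco_res.getImgIds()  # score only the captioned images
    coco_eval.evaluate()

    # BLEU-1..4, METEOR, ROUGE_L and CIDEr.
    for metric, score in coco_eval.eval.items():
        print('%s: %.3f' % (metric, score))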

Citation

If you find this code useful, please cite the following paper:

@inproceedings{chen2016sca,
  title={SCA-CNN: Spatial and Channel-wise Attention in Convolutional Networks for Image Captioning},
  author={Chen, Long and Zhang, Hanwang and Xiao, Jun and Nie, Liqiang and Shao, Jian and Liu, Wei and Chua, Tat-Seng},
  booktitle={CVPR},
  year={2017}
}
