HardNet model implementation

HardNet model implementation in PyTorch for the NIPS 2017 paper "Working hard to know your neighbor's margins: Local descriptor learning loss" (poster, slides).

An example of how to compile HardNet to TorchScript for use in C++ code:

Notebook
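As a rough sketch of what such a conversion involves (the import path and file names below are assumptions, not taken from the notebook), tracing the model with torch.jit and saving the result is enough to load it from C++ via libtorch:

import torch
from HardNet import HardNet  # network definition from this repo (assumed import path)

model = HardNet().eval()  # load pre-trained weights here first (see "Pre-trained models" below)
# HardNet consumes batches of 32x32 grayscale patches, so trace with a matching dummy input.
example = torch.rand(1, 1, 32, 32)
traced = torch.jit.trace(model, example)
traced.save('hardnet_traced.pt')  # loadable from C++ via torch::jit::load("hardnet_traced.pt")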

Update April 06 2018

We have added small shift and rotation augmentation, which improves results by up to 1 mAP point on HPatches. It is implemented in HardNet.py; turn it on with --augmentation=True. All the weights will be updated soon. A version trained on the Brown + HPatches + PS datasets is in progress, stay tuned :)
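For example, assuming the training entry point is code/HardNet.py as suggested by run_me.sh (other flags keep their defaults):

python ./code/HardNet.py --augmentation=True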

Re: popular question about BoW retrieval engine

Unfortunately, it is proprietary and we cannot release it. But you can try the following open source repos, both Matlab-based:

Benchmark on HPatches, mAP

(figure: HardNet results on the HPatches benchmark, mAP)

Retrieval on Oxford5k, mAP, Hessian-Affine detector

(BoW: bag-of-words, SV: spatial verification, QE: query expansion, HQE: Hamming query expansion, MA: multiple assignment)

Descriptor              BoW    BoW + SV    BoW + SV + QE    HQE + MA
TFeatLib                46.7   55.6        72.2             n/a
RootSIFT                55.1   63.0        78.4             88.0
L2NetLib+               59.8   67.7        80.4             n/a
HardNetLibNIPS+         59.8   68.6        83.0             88.2
HardNet++               60.8   69.6        84.5             88.3
HesAffNet + HardNet++   68.3   77.8        89.0             89.5

Requirements

Please use Python 2.7. Install OpenCV and the additional libraries listed in requirements.txt.
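For example, from the repository root (assuming OpenCV is already installed for your Python 2.7 environment):

pip install -r requirements.txt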

Datasets and Training

To download the datasets and start training the descriptor:

git clone https://github.com/DagnyT/hardnet
cd hardnet
./code/run_me.sh

Logs are stored in TensorBoard format in the logs/ directory.
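To inspect them, point TensorBoard at that directory:

tensorboard --logdir logs/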

Pre-trained models

Pre-trained models can be found in the pretrained folder.
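A minimal sketch of loading one of them in PyTorch (the checkpoint path and the key layout inside the file are assumptions; adjust them to whichever file you pick from pretrained/):

import torch
from HardNet import HardNet  # network definition from this repo (assumed import path)

model = HardNet()
# Hypothetical checkpoint path -- substitute any .pth file from the pretrained/ folder.
checkpoint = torch.load('pretrained/train_liberty_with_aug/checkpoint_liberty_with_aug.pth',
                        map_location='cpu')
# The weights are assumed to sit under a 'state_dict' key; fall back to the raw dict otherwise.
state_dict = checkpoint.get('state_dict', checkpoint)
model.load_state_dict(state_dict)
model.eval()

# Describe a batch of 32x32 grayscale patches -> 128-D L2-normalized descriptors.
patches = torch.rand(16, 1, 32, 32)
with torch.no_grad():
    descriptors = model(patches)  # shape: (16, 128)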

3rd party pre-trained models

Rahul Mitra presented a new large-scale patch PS dataset and trained an even better HardNet on it. The original weights in Torch format are here.

The converted PyTorch version is here.

(figure: HardNet results across training datasets)

Which weights should I use?

For practical applications, we recommend HardNet++.

For comparison with other descriptors trained on the Liberty (Brown) dataset, we recommend HardNetLib+.

For the best descriptor that is NOT trained on the HPatches dataset, we recommend the model by Mitra et al., linked in the section above.

Usage example

We provide an example of how to describe patches with HardNet. The script expects patches in HPatches format, i.e. a grayscale image with w = patch_size and h = n_patches * patch_size (see the sketch after the commands below).

cd examples
python extract_hardnet_desc_from_hpatches_file.py imgs/ref.png out.txt

or with Caffe:

cd examples/caffe
python extract_hardnetCaffe_desc_from_hpatches_file.py ../imgs/ref.png hardnet_caffe.txt
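For reference, a minimal sketch of what the HPatches format means in code (function and variable names are illustrative, not the actual script's, and the simple /255 scaling stands in for the script's own normalization):

import cv2
import numpy as np
import torch
from HardNet import HardNet  # network definition from this repo (assumed import path)

def load_hpatches_patches(path, patch_size=65):
    # An HPatches file is a single grayscale column image: width = patch_size,
    # height = n_patches * patch_size. Split it into individual patches.
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    n_patches = img.shape[0] // patch_size
    patches = img.reshape(n_patches, patch_size, patch_size)
    # HardNet takes 32x32 inputs, so resize each patch before describing it.
    patches = np.stack([cv2.resize(p, (32, 32)) for p in patches])
    # Simple /255 scaling here; the actual script applies its own per-patch normalization.
    return torch.from_numpy(patches).float().unsqueeze(1) / 255.0  # (N, 1, 32, 32)

model = HardNet().eval()  # load pre-trained weights as in the "Pre-trained models" section
patches = load_hpatches_patches('imgs/ref.png')
with torch.no_grad():
    descriptors = model(patches)  # (N, 128) descriptors
np.savetxt('out.txt', descriptors.cpu().numpy())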

Projects which use HardNet

AffNet -- a learned local affine shape estimator.

Citation

Please cite us if you use this code:

@inproceedings{HardNet,
 author = {Anastasiya Mishchuk and Dmytro Mishkin and Filip Radenovic and Jiri Matas},
 title = "{Working hard to know your neighbor's margins: Local descriptor learning loss}",
 booktitle = {Proceedings of NeurIPS},
 year = 2017,
 month = dec
}