
MIRNet-TFLite

This repository shows the TensorFlow Lite and TensorRT model conversion and inference processes for the MIRNet model, as proposed in Learning Enriched Features for Real Image Restoration and Enhancement. This model is capable of enhancing low-light images to a great extent.


Model training code and pre-trained weights are provided by Soumik in this repository.

Comparison between the TensorFlow Lite and original models

(Side-by-side result images: dynamic-range quantized TensorFlow Lite model vs. the original model.)

About the notebooks

  • MIRNet_TFLite.ipynb: Shows the model conversion and inference processes. Models converted in this notebook support dynamic-shaped inputs (see the conversion sketch after this list).
  • MIRNet_TFLite_Fixed_Shape.ipynb: Shows the model conversion and inference processes. Models converted in this notebook only support fixed-shape inputs.
  • MIRNet_TRT.ipynb: Shows the model conversion process with TensorRT as well as the inference. Recommended if you plan to run inference in an NVIDIA GPU-enabled environment.
  • Add_Metadata.ipynb: Adds metadata to TensorFlow Lite models. Metadata makes it easier for mobile developers to integrate the models into their applications.
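
For reference, here is a minimal sketch of the kind of dynamic-range quantized conversion these notebooks perform, assuming MIRNet has been exported as a SavedModel; the paths, filenames, and the fixed 400x400 shape below are placeholders:

```python
import tensorflow as tf

# Load the exported MIRNet SavedModel (path is a placeholder).
converter = tf.lite.TFLiteConverter.from_saved_model("mirnet_saved_model")

# Dynamic-range quantization: weights are stored as 8-bit integers,
# while activations remain in float at runtime.
converter.optimizations = [tf.lite.Optimize.DEFAULT]

tflite_model = converter.convert()
with open("mirnet_dr.tflite", "wb") as f:
    f.write(tflite_model)

# For a fixed-shape variant (e.g. 400x400x3), pin the input shape
# down via a concrete function before converting:
model = tf.keras.models.load_model("mirnet_saved_model")
fn = tf.function(model).get_concrete_function(
    tf.TensorSpec([1, 400, 400, 3], tf.float32))
fixed_converter = tf.lite.TFLiteConverter.from_concrete_functions([fn])
```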

TensorFlow Lite models

Benchmarking

A Pixel 4 was used to run the benchmarking tests. Only the fixed-shape TensorFlow Lite models (accepting 400x400x3 images) were benchmarked.
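
As a rough sanity check outside the on-device benchmarks, the fixed-shape model can also be timed from Python with tf.lite.Interpreter; the model filename below is a placeholder:

```python
import time

import numpy as np
import tensorflow as tf

# Load the fixed-shape model (filename is a placeholder).
interpreter = tf.lite.Interpreter(model_path="mirnet_fixed_400.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Dummy 400x400 RGB input, normalized to [0, 1].
image = np.random.rand(1, 400, 400, 3).astype(np.float32)
interpreter.set_tensor(input_details[0]["index"], image)

start = time.time()
interpreter.invoke()
print(f"Inference took {time.time() - start:.3f} s")

enhanced = interpreter.get_tensor(output_details[0]["index"])
print(enhanced.shape)  # e.g. (1, 400, 400, 3)
```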

Notes

If you plan to run inference in an NVIDIA GPU-enabled environment, please follow this notebook: MIRNet_TRT.ipynb. Using the TensorRT-optimized model (as shown in that notebook) in such an environment greatly improves inference latency (~0.6 seconds on a Tesla T4). Here's a demo of running the TensorRT-optimized model on a low-light video.
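
For reference, TF-TRT conversion in TensorFlow 2.x generally follows the pattern below; the paths are placeholders, FP16 is an assumed precision mode, and a TensorRT-enabled TensorFlow build on an NVIDIA GPU is required:

```python
import tensorflow as tf
from tensorflow.python.compiler.tensorrt import trt_convert as trt

# Paths are placeholders; FP16 is an assumed precision mode.
params = trt.TrtConversionParams(precision_mode=trt.TrtPrecisionMode.FP16)
converter = trt.TrtGraphConverterV2(
    input_saved_model_dir="mirnet_saved_model",
    conversion_params=params,
)
converter.convert()
converter.save("mirnet_trt_saved_model")

# At inference time, load the optimized SavedModel and call its
# serving signature:
optimized = tf.saved_model.load("mirnet_trt_saved_model")
infer = optimized.signatures["serving_default"]
```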