Video Lightening with Dedicated CNN Architecture

Li-Wen Wang, Wan-Chi Siu, Life Fellow, IEEE, Zhi-Song Liu, Chu-Tak Li and Daniel Pak-Kong Lun, Senior Member, IEEE
The Hong Kong Polytechnic University, Hong Kong SAR, China

Supplementary Results

Video Demonstration:

The test videos are from the Berkeley DeepDrive dataset [21]. In the videos below, LIME [27] and EnlightenGAN [11] produce visible noise that limits the visual quality, while Retinex-Net [9] distorts the color information, making its results look like paintings. Our proposed method (Ours) gives clear results with good visual quality. By exploiting temporal information, our method also achieves better temporal consistency, producing more stable videos than the other approaches.
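Temporal consistency can be quantified in several ways; the paper's own evaluation protocol is not reproduced here, but a minimal proxy is the mean absolute difference between consecutive enhanced frames: a flickery enhancement yields large frame-to-frame changes, a stable one does not. The sketch below (a hypothetical helper, `temporal_flicker`, not from the paper) illustrates the idea with NumPy:

```python
import numpy as np

def temporal_flicker(frames):
    """Mean absolute difference between consecutive frames.

    `frames` is an array of shape (T, H, W, C) with values in [0, 1].
    Lower values suggest a more temporally stable (less flickery) video.
    This is only a rough proxy; it ignores scene motion, which a proper
    metric would compensate for (e.g. via optical-flow warping).
    """
    frames = np.asarray(frames, dtype=np.float64)
    diffs = np.abs(frames[1:] - frames[:-1])  # differences between frames t and t+1
    return diffs.mean()

# Toy example: a static clip has zero flicker, random noise does not.
static = np.zeros((5, 4, 4, 3))
noisy = np.random.default_rng(0).random((5, 4, 4, 3))
print(temporal_flicker(static))      # 0.0
print(temporal_flicker(noisy) > 0)   # True
```

In practice such a score would be computed on the enhanced outputs of each method over the same input clip, so the numbers are directly comparable.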

Screen 1: Road

Low-light video


LIME


Retinex-Net


EnlightenGAN


Ours


Screen 2: Alley

Low-light video


LIME


Retinex-Net


EnlightenGAN


Ours


References

[9] Chen Wei, Wenjing Wang, Wenhan Yang and Jiaying Liu, “Deep retinex decomposition for low-light enhancement,” Proceedings, British Machine Vision Conference (BMVC), 2018, Newcastle, UK.

[11] Yifan Jiang, Xinyu Gong, Ding Liu, Yu Cheng, Chen Fang, Xiaohui Shen, Jianchao Yang, Pan Zhou and Zhangyang Wang, “EnlightenGAN: Deep Light Enhancement without Paired Supervision,” arXiv preprint arXiv:1906.06972, 2019.

[21] Fisher Yu, Wenqi Xian, Yingying Chen, Fangchen Liu, Mike Liao, Vashisht Madhavan and Trevor Darrell, “BDD100K: A diverse driving video database with scalable annotation tooling,” arXiv preprint arXiv:1805.04687, 2018.

[27] Xiaojie Guo, Yu Li and Haibin Ling, “LIME: Low-light image enhancement via illumination map estimation,” IEEE Transactions on Image Processing (TIP), vol. 26, no. 2, pp. 982-993, 2017.

Citation:

@ARTICLE{DLN2020,
author={Li-Wen Wang and Zhi-Song Liu and Wan-Chi Siu and Daniel P.K. Lun},
journal={IEEE Transactions on Image Processing},
title={Lightening Network for Low-light Image Enhancement},
year={2020},
volume={29},
pages={7984-7996},
doi={10.1109/TIP.2020.3008396},
}