The testing videos are from the Berkeley Deep Drive dataset [3]. From the following videos we can see that LIME [4] and EnlightenGAN [2] produce visible noise that limits visual quality, while Retinex-Net [1] distorts the color information, making its results look like artwork. Our proposed method (Ours) gives clear results with good visual quality. By exploiting temporal information, our method also achieves better temporal consistency and is more stable than the other methods.
[1] Chen Wei, Wenjing Wang, Wenhan Yang, and Jiaying Liu, "Deep Retinex decomposition for low-light enhancement," in Proceedings of the British Machine Vision Conference (BMVC), Newcastle, UK, 2018.
[2] Yifan Jiang, Xinyu Gong, Ding Liu, Yu Cheng, Chen Fang, Xiaohui Shen, Jianchao Yang, Pan Zhou, and Zhangyang Wang, "EnlightenGAN: Deep light enhancement without paired supervision," arXiv preprint arXiv:1906.06972, 2019.
[3] Fisher Yu, Wenqi Xian, Yingying Chen, Fangchen Liu, Mike Liao, Vashisht Madhavan, and Trevor Darrell, "BDD100K: A diverse driving video database with scalable annotation tooling," arXiv preprint arXiv:1805.04687, 2018.
[4] Xiaojie Guo, Yu Li, and Haibin Ling, "LIME: Low-light image enhancement via illumination map estimation," IEEE Transactions on Image Processing (TIP), vol. 26, no. 2, pp. 982-993, 2016.