Light-Field Imaging Reconstruction Using Deep Learning Enabling Intelligent Autonomous Transportation System

Juan Casavilca Silva, Muhammad Saadi, Lunchakorn Wuttisittikulkij, Davi Ribeiro Militani, Renata L. Rosa, Demóstenes Zegarra Rodríguez, Sattam Al Otaibi

Research output: Contribution to journal › Article › peer-review

5 Citations (Scopus)

Abstract

Light-field (LF) cameras, also known as plenoptic cameras, permit the recording of the 4D LF distribution of target scenes. However, surface errors of the microlens array (MLA) often degrade the images captured by a plenoptic camera. Additionally, the limited pixel count of the sensor can cause missing parallax information. These issues are critical for creating accurate maps in an Intelligent Autonomous Transportation System (IATS), because they cause a loss of LF information, and they must be addressed. To tackle this problem, a learning-based framework that directly simulates the LF distribution is proposed. A high-dimensional convolution layer operating on densely sampled LFs in 4D space, combined with a soft activation function based on ReLU segmentation correction, is used to generate a super-resolution (SR) LF image, improving the convergence rate of the deep learning network. Experimental results show that our proposed LF image reconstruction framework outperforms existing state-of-the-art approaches; specifically, it is effective at learning the LF distribution and generating high-quality LF images. Several image quality assessment methods, including PSNR, SSIM, IWSSIM, FSIM, GFM, MDFM, and HDR-VDP, are used to evaluate the performance of the proposed framework. Additionally, computational efficiency was evaluated in terms of the number of parameters and FLOPs, and the experimental results demonstrated that the proposed framework achieved the highest performance on most of the datasets used.
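
The 4D LF convolution and soft activation described above can be sketched as follows. This is a minimal illustration, assuming PyTorch: the high-dimensional convolution is approximated by a factorized spatial-angular pair of 2D convolutions, Softplus stands in for the paper's ReLU-based soft activation, and sub-pixel shuffling produces the super-resolved views. The module names, layer sizes, and factorization are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal sketch (assumption: PyTorch; sizes and structure are illustrative).
import torch
import torch.nn as nn


class LF4DSRBlock(nn.Module):
    """Convolve a densely sampled 4D light field (u, v, x, y) and upsample it.

    The LF tensor is shaped (batch, channels, u, v, x, y). A true 4D
    convolution is emulated by folding the angular dimensions (u, v) into the
    batch axis for a spatial 3x3 conv, then folding the spatial dimensions
    for an angular 3x3 conv (a common spatial-angular factorization).
    """

    def __init__(self, channels: int = 1, features: int = 32, scale: int = 2):
        super().__init__()
        self.scale = scale
        self.spatial_conv = nn.Conv2d(channels, features, kernel_size=3, padding=1)
        self.angular_conv = nn.Conv2d(features, features, kernel_size=3, padding=1)
        # Softplus is a smooth ("soft") relaxation of ReLU; the paper's
        # segmentation-corrected activation is not reproduced here.
        self.act = nn.Softplus()
        # Sub-pixel shuffle produces the super-resolved spatial views.
        self.upsample = nn.Sequential(
            nn.Conv2d(features, channels * scale * scale, kernel_size=3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, lf: torch.Tensor) -> torch.Tensor:
        b, c, u, v, x, y = lf.shape
        # Spatial pass: treat every angular view as an independent image.
        t = lf.permute(0, 2, 3, 1, 4, 5).reshape(b * u * v, c, x, y)
        t = self.act(self.spatial_conv(t))
        f = t.shape[1]
        # Angular pass: treat every pixel's (u, v) slice as an image.
        t = t.reshape(b, u, v, f, x, y).permute(0, 4, 5, 3, 1, 2)
        t = t.reshape(b * x * y, f, u, v)
        t = self.act(self.angular_conv(t))
        # Back to per-view layout, then super-resolve each view.
        t = t.reshape(b, x, y, f, u, v).permute(0, 4, 5, 3, 1, 2)
        t = t.reshape(b * u * v, f, x, y)
        sr = self.upsample(t)
        return sr.reshape(b, u, v, c, x * self.scale, y * self.scale).permute(
            0, 3, 1, 2, 4, 5
        )


if __name__ == "__main__":
    lf = torch.rand(1, 1, 5, 5, 32, 32)   # 5x5 angular views of 32x32 pixels
    print(LF4DSRBlock()(lf).shape)        # -> torch.Size([1, 1, 5, 5, 64, 64])
```

The example only shows the tensor bookkeeping for one block; a full reconstruction network would stack several such blocks and be trained against densely sampled ground-truth LFs with the quality metrics listed above (e.g., PSNR/SSIM) used for evaluation.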
Original language: Spanish
Journal: IEEE Transactions on Intelligent Transportation Systems
Status: Published - 1 Jan. 2021
