Abstract
Typical procedures for estimating the soot volume fraction distribution in laboratory flames require solving ill-posed inverse problems to recover the fields from convolved signals that integrate light extinction from soot particles along the line of sight of a photodetector. Classical deconvolution methods are highly sensitive to noise and to the choice of tunable regularization parameters, which prevents consistent estimates even for the same reference flame settings. This paper presents a novel approach based on Convolutional Neural Networks (CNNs) for estimating soot volume fraction fields from 2D images of line-of-sight attenuation (LOSA) measurements in coflow laminar axisymmetric diffusion flames. Using a set of reference synthetic soot volume fraction fields of canonical flames and their corresponding projected LOSA images, we trained a CNN to reconstruct soot fields from images representing the data captured by a camera. Experimental results show that the proposed CNN approach outperforms classical deconvolution methods when reconstructing the flame's spatial soot distribution from noisy LOSA images.
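The forward model behind this inverse problem is a line-of-sight (Abel-type) integration of the extinction coefficient, which is proportional to the local soot volume fraction. Below is a minimal Python/NumPy sketch, not taken from the paper, of such a projection for an axisymmetric field on an assumed (r, z) grid; the wavelength, the soot absorption function E(m), the grid sizes, and the Gaussian test field are all illustrative assumptions, and the noisy projection stands in for the camera data that a deconvolution method or a CNN would invert.

```python
# Minimal sketch (not the authors' code) of the LOSA forward model that the
# paper's CNN learns to invert: an axisymmetric soot volume fraction field
# f_v(r, z) is integrated along parallel chords (a discrete, onion-peeling-style
# Abel projection) to yield the optical depth tau(y, z) = -ln(I/I0) recorded
# by a camera. Grid sizes, wavelength, E(m) and the test field are assumptions.
import numpy as np

n_r, n_z = 64, 128                      # radial / axial samples (illustrative)
r = np.linspace(0.0, 5e-3, n_r)         # radius [m]
z = np.linspace(0.0, 40e-3, n_z)        # height above burner [m]
R, Z = np.meshgrid(r, z, indexing="ij")

# Toy annular soot distribution with a peak f_v of ~5 ppm (illustrative only).
fv = 5e-6 * np.exp(-((R - 2e-3) / 1e-3) ** 2) * np.exp(-((Z - 20e-3) / 10e-3) ** 2)

lam = 660e-9                            # probe wavelength [m] (assumed)
ke = 6.0 * np.pi * 0.26 / lam           # extinction per unit f_v, E(m) = 0.26 assumed

def project(fv_rz: np.ndarray) -> np.ndarray:
    """Line-of-sight integration of kappa = ke * f_v at every lateral offset y and height z."""
    proj = np.zeros_like(fv_rz)
    for i, y in enumerate(r):
        # Half-chord lengths where the ray at offset y crosses each radial shell r_j >= y.
        x = np.sqrt(np.maximum(r[i:] ** 2 - y ** 2, 0.0))
        w = 2.0 * np.diff(x, append=x[-1])            # full chord segment inside each shell
        proj[i, :] = (ke * fv_rz[i:, :] * w[:, None]).sum(axis=0)
    return proj                                        # optical depth tau(y, z)

tau = project(fv)                                      # noiseless LOSA image
tau_noisy = tau + np.random.normal(0.0, 0.02 * tau.max(), tau.shape)  # camera-like noise
print(tau.shape, float(tau.max()))
```

Roughly speaking, classical deconvolution recovers f_v by inverting this projection row by row at each flame height, whereas the CNN described in the abstract maps the whole noisy tau image to the f_v field directly.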
| Original language | English |
|---|---|
| Article number | 119011 |
| Journal | Fuel |
| Volume | 285 |
| DOI | |
| State | Published - 1 Feb 2021 |
| Published externally | Yes |