Data Fusion for Sparse Semantic Localization Based on Object Detection

Irem Uygur, Renato Miyagusuku, Sarthak Pathak, Hajime Asama, Atsushi Yamashita

Research output: Contribution to journal › Article › peer-review

Abstract

Semantic information has started to be used in localization methods to introduce a non-geometric distinction in the environment. However, efficient ways to integrate this information remain an open question. We propose an approach for fusing data from different object classes by analyzing the posterior for each object class to improve the robustness and accuracy of self-localization. Our system uses the bearing angles to object centers and the objects' class names as sensor model inputs to localize the user on a 2D annotated map consisting of object class names and center coordinates. The sensor model inputs are obtained by running an object detector on equirectangular images from a camera with a 360° field of view. Because object detection performance varies with location and object class, different object classes generate different likelihoods. We account for this by using appropriate weights generated by a Gaussian process model trained using our posterior analysis. Our approach provides a systematic way to fuse data from different object classes and use them as the likelihood function of a Monte Carlo localization (MCL) algorithm.
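To illustrate the idea described in the abstract, the Python sketch below shows one plausible way to fuse per-class bearing likelihoods with learned class weights inside an MCL sensor model. It is a minimal sketch under assumptions, not the authors' implementation: the map contents, the weight values (which the paper predicts with a Gaussian process model), the Gaussian bearing-noise model, and the best-match data association are all illustrative choices.

import numpy as np

# Hypothetical annotated 2D map: class name -> array of object centers.
SEMANTIC_MAP = {
    "chair": np.array([[1.0, 2.0], [3.5, 0.5]]),
    "door":  np.array([[0.0, 4.0]]),
}

# Hypothetical per-class reliability weights; in the paper these are
# produced by a Gaussian process trained from a posterior analysis.
CLASS_WEIGHTS = {"chair": 0.6, "door": 1.0}

BEARING_SIGMA = 0.2  # rad, assumed bearing measurement noise


def fused_likelihood(particle, detections):
    """Likelihood of one particle given (class_name, bearing) detections.

    particle: (x, y, theta); each bearing is measured to an object's
    center in the robot frame, e.g., by a detector on 360-degree images.
    """
    x, y, theta = particle
    log_lik = 0.0
    for class_name, measured_bearing in detections:
        centers = SEMANTIC_MAP.get(class_name)
        if centers is None:
            continue
        # Expected bearings to every mapped instance of this class.
        expected = np.arctan2(centers[:, 1] - y, centers[:, 0] - x) - theta
        # Wrap angular errors to [-pi, pi].
        err = np.angle(np.exp(1j * (expected - measured_bearing)))
        # Data association: keep the best-matching instance per detection.
        per_class = np.exp(-0.5 * (err / BEARING_SIGMA) ** 2).max()
        # Fuse classes via a weighted sum of log-likelihoods.
        log_lik += CLASS_WEIGHTS[class_name] * np.log(per_class + 1e-12)
    return np.exp(log_lik)


# Usage: weight MCL particles by the fused semantic likelihood.
particles = np.array([[0.5, 1.0, 0.1], [2.0, 2.0, -0.3]])
detections = [("chair", 0.4), ("door", 1.2)]
weights = np.array([fused_likelihood(p, detections) for p in particles])
weights /= weights.sum()

Down-weighting a class (e.g., "chair" above) dampens its influence on the posterior, which is the intended effect when a detector is less reliable for that class at a given location.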

Original language: English
Pages (from-to): 375-387
Number of pages: 13
Journal: Journal of Robotics and Mechatronics
Volume: 36
Issue: 2
DOI
Status: Published - Apr 2024
Published externally: Yes
