Robust and efficient indoor localization using sparse semantic information from a spherical camera

Irem Uygur, Renato Miyagusuku, Sarthak Pathak, Alessandro Moro, Atsushi Yamashita, Hajime Asama

Research output: Contribution to journal › Article › peer-review

6 Citations (Scopus)

Abstract

Self-localization enables a system to navigate and interact with its environment. In this study, we propose a novel sparse semantic self-localization approach for robust and efficient indoor localization. "Sparse semantic" refers to the detection of sparsely distributed objects such as doors and windows. We use this sparse semantic information in the sensor model to self-localize on a human-readable, annotated 2D map. Thus, compared with previous works that rely on point clouds or other dense, large data structures, our method uses a small amount of sparse semantic information, which efficiently reduces uncertainty in real-time localization. Unlike complex 3D reconstructions, the annotated map required by our method can be prepared simply by marking the approximate centers of the annotated objects on a 2D map. Our approach is robust to partial obstruction of views and to geometric errors in the map. Localization is performed using low-cost, lightweight sensors: an inertial measurement unit and a spherical camera. We conducted experiments to demonstrate the feasibility and robustness of our approach.
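The abstract describes scoring poses against approximate 2D object centers marked on an annotated map. As one way such a sensor model could look, the sketch below weights a particle by comparing the bearings of detected semantic objects (a spherical camera sees all directions) against the bearings expected from the map. All names, the nearest-bearing data association, and the von Mises-style likelihood are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch of a bearing-only sensor model on a 2D annotated map.
# The map format, likelihood, and data association are assumptions for
# illustration; the paper's actual model may differ.
import math

# Annotated map: approximate 2D centers of semantic objects, keyed by class.
annotated_map = {
    "door":   [(2.0, 0.0), (5.0, 3.0)],
    "window": [(0.0, 4.0)],
}

def expected_bearing(particle, landmark):
    """Bearing (rad) from a particle pose (x, y, theta) to a landmark,
    expressed in the particle's frame. A spherical camera observes the
    full 360 degrees, so every landmark bearing is potentially visible."""
    x, y, theta = particle
    lx, ly = landmark
    return math.atan2(ly - y, lx - x) - theta

def angle_diff(a, b):
    """Smallest signed difference between two angles, in (-pi, pi]."""
    return (a - b + math.pi) % (2.0 * math.pi) - math.pi

def particle_weight(particle, detections, kappa=4.0):
    """Likelihood of (class, bearing) detections: each detection is matched
    to the best-fitting annotated object of its class and scored with an
    unnormalized von Mises-style term exp(kappa * cos(error))."""
    w = 1.0
    for cls, bearing in detections:
        candidates = annotated_map.get(cls, [])
        if not candidates:
            continue  # no such object on the map; skip this detection
        w *= max(
            math.exp(kappa * math.cos(
                angle_diff(bearing, expected_bearing(particle, lm))))
            for lm in candidates
        )
    return w

# A particle facing the door at (2, 0) should outweigh one facing away.
good = particle_weight((0.0, 0.0, 0.0), [("door", 0.0)])
bad = particle_weight((0.0, 0.0, math.pi), [("door", 0.0)])
print(good > bad)  # True
```

Because each detection contributes only a class label and a bearing, the update handles a handful of sparse observations per frame, consistent with the efficiency argument in the abstract.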

Original language: English
Article number: 4128
Pages (from-to): 1-21
Number of pages: 21
Journal: Sensors (Switzerland)
Volume: 20
Issue number: 15
DOI
Status: Published - Aug. 2020
Externally published: Yes

