Abstract
Self-localization enables a system to navigate and interact with its environment. In this study, we propose a novel sparse semantic self-localization approach for robust and efficient indoor localization. "Sparse semantic" refers to the detection of sparsely distributed objects such as doors and windows. We use this sparse semantic information in the sensor model to self-localize on a human-readable 2D annotated map. In contrast to previous works that rely on point clouds or other dense, large data structures, our approach uses a small amount of sparse semantic information, which efficiently reduces uncertainty during real-time localization. Unlike complex 3D reconstructions, the annotated map required by our method can be easily prepared by marking the approximate centers of the annotated objects on a 2D map. Our approach is robust to partial obstruction of views and to geometrical errors on the map. Localization is performed using low-cost, lightweight sensors: an inertial measurement unit and a spherical camera. We conducted experiments to demonstrate the feasibility and robustness of our approach.
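To make the idea of a sensor model over sparsely annotated objects concrete, the sketch below shows one plausible realization: a particle-filter-style weight update that scores hypothesized poses against semantic detections (object class plus bearing) matched to approximate object centers on a 2D annotated map. This is a minimal illustrative sketch, not the authors' implementation; the particle-filter formulation, object classes, coordinates, and noise parameter are assumptions.

```python
# Minimal sketch (assumed particle-filter-style sensor model, not the paper's code):
# weigh hypothesized 2D poses by how well sparse semantic detections
# (class + bearing from a spherical camera) match annotated object centers.
import math
import random

# Annotated 2D map: approximate centers of sparsely distributed objects
# (illustrative values, not from the paper).
ANNOTATED_MAP = {
    "door":   [(2.0, 0.0), (8.0, 5.0)],
    "window": [(0.0, 3.0), (10.0, 2.0)],
}

def expected_bearing(pose, landmark):
    """Bearing (rad) from a pose (x, y, theta) to a landmark (x, y)."""
    x, y, theta = pose
    lx, ly = landmark
    return math.atan2(ly - y, lx - x) - theta

def angle_diff(a, b):
    """Smallest signed difference between two angles, in [-pi, pi)."""
    return (a - b + math.pi) % (2.0 * math.pi) - math.pi

def detection_likelihood(pose, detections, sigma=0.2):
    """Likelihood of detections [(class, bearing), ...] given a pose.

    Each detection is matched to the best-fitting annotated object of the
    same class; a Gaussian on the bearing error scores the match.
    """
    likelihood = 1.0
    for cls, bearing in detections:
        candidates = ANNOTATED_MAP.get(cls, [])
        if not candidates:
            continue
        err = min(abs(angle_diff(bearing, expected_bearing(pose, c)))
                  for c in candidates)
        likelihood *= math.exp(-0.5 * (err / sigma) ** 2)
    return likelihood

def update_weights(particles, detections):
    """Re-weight particles by the sensor model and normalize."""
    weights = [detection_likelihood(p, detections) for p in particles]
    total = sum(weights) or 1.0
    return [w / total for w in weights]

if __name__ == "__main__":
    # Particles: hypothesized (x, y, theta) poses spread over the map.
    particles = [(random.uniform(0, 10), random.uniform(0, 6),
                  random.uniform(-math.pi, math.pi)) for _ in range(500)]
    # Example semantic detections: (class, bearing in the robot frame).
    detections = [("door", 0.3), ("window", -1.2)]
    weights = update_weights(particles, detections)
    best = particles[max(range(len(weights)), key=weights.__getitem__)]
    print("Most likely pose:", best)
```

Because only a handful of object classes and centers are involved, each update touches far less data than a point-cloud matching step, which is the efficiency argument made in the abstract.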
| Original language | English |
| --- | --- |
| Article number | 4128 |
| Pages (from-to) | 1-21 |
| Number of pages | 21 |
| Journal | Sensors (Switzerland) |
| Volume | 20 |
| Issue | 15 |
| DOI | |
| Status | Published - Aug. 2020 |
| Published externally | Yes |