Robust and efficient indoor localization using sparse semantic information from a spherical camera

Irem Uygur, Renato Miyagusuku, Sarthak Pathak, Alessandro Moro, Atsushi Yamashita, Hajime Asama

Research output: Contribution to journal › Article › peer-review


Abstract

Self-localization enables a system to navigate and interact with its environment. In this study, we propose a novel sparse semantic self-localization approach for robust and efficient indoor localization. "Sparse semantic" refers to the detection of sparsely distributed objects such as doors and windows. Our sensor model uses this sparse semantic information to self-localize on a human-readable 2D annotated map. Thus, compared to previous works that rely on point clouds or other dense, large data structures, our method uses a small amount of sparse semantic information, which efficiently reduces uncertainty in real-time localization. Unlike complex 3D reconstructions, the annotated map required by our method can be easily prepared by marking the approximate centers of the annotated objects on a 2D map. Our approach is robust to partial obstruction of views and to geometrical errors in the map. Localization is performed using low-cost, lightweight sensors: an inertial measurement unit and a spherical camera. We conducted experiments to demonstrate the feasibility and robustness of our approach.
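The abstract describes matching detected objects (doors, windows) against their approximate centers on a 2D annotated map. The paper's own formulation is not reproduced here; as an illustration only, such a sensor model could be sketched as a bearing-only likelihood, where each object detection from the spherical camera yields a bearing that is compared against the bearings predicted from the map. The function names, the nearest-bearing data association, and the Gaussian noise model below are assumptions for this sketch, not details taken from the paper:

```python
import math

def bearing(px, py, theta, ox, oy):
    """Bearing of map object (ox, oy) as seen from pose (px, py, theta),
    wrapped to [-pi, pi). A spherical camera observes bearings in all directions."""
    return (math.atan2(oy - py, ox - px) - theta + math.pi) % (2 * math.pi) - math.pi

def likelihood(pose, observations, object_map, sigma=0.2):
    """Score a pose hypothesis (e.g. one particle in Monte Carlo localization).

    observations: list of (object_class, observed_bearing) pairs.
    object_map:   dict mapping object_class -> list of (x, y) annotated centers.
    Each observation is associated with the nearest predicted bearing of the
    same class; residuals are scored with a Gaussian (an assumed noise model).
    """
    px, py, theta = pose
    w = 1.0
    for cls, obs_b in observations:
        preds = [bearing(px, py, theta, ox, oy) for (ox, oy) in object_map.get(cls, [])]
        if not preds:
            continue  # no annotated object of this class on the map
        err = min(abs((obs_b - p + math.pi) % (2 * math.pi) - math.pi) for p in preds)
        w *= math.exp(-err ** 2 / (2 * sigma ** 2))
    return w

# Hypothetical map: one door and one window, marked by approximate centers.
object_map = {"door": [(5.0, 0.0)], "window": [(0.0, 5.0)]}
obs = [("door", 0.0), ("window", math.pi / 2)]  # bearings seen from the origin

w_true = likelihood((0.0, 0.0, 0.0), obs, object_map)   # consistent pose
w_wrong = likelihood((2.0, -1.0, 0.8), obs, object_map)  # inconsistent pose
```

In a particle-filter setting, `likelihood` would weight each particle; because only a handful of sparse objects are scored per update, the cost per particle stays small, which matches the efficiency argument in the abstract.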

Original language: English
Article number: 4128
Pages (from-to): 1-21
Number of pages: 21
Journal: Sensors (Switzerland)
Volume: 20
Issue number: 15
DOIs
State: Published - Aug 2020
Externally published: Yes

Keywords

  • Crude maps
  • Indoor localization
  • Semantic localization
