Indoor topological localization using a visual landmark sequence

This paper presents a novel indoor topological localization method based on mobile phone videos. Conventional methods suffer from indoor dynamic environmental changes and scene ambiguity. The proposed Visual Landmark Sequence-based Indoor Localization (VLSIL) method is capable of addressing problems...

Full description

Bibliographic Details
Main Authors: Zhu, Jiasong, Li, Qing, Cao, Rui, Sun, Ke, Liu, Tao, Garibaldi, Jonathan, Li, Qingquan, Liu, Bozhi, Qiu, Guoping
Format: Article
Language: English
Published: MDPI 2019
Subjects: visual landmark sequence; indoor topological localization; convolutional neural network (CNN); second-order hidden Markov model
Online Access:https://eprints.nottingham.ac.uk/56196/
building Nottingham Research Data Repository
collection Online Access
description This paper presents a novel indoor topological localization method based on mobile phone videos. Conventional methods suffer from dynamic indoor environmental changes and scene ambiguity. The proposed Visual Landmark Sequence-based Indoor Localization (VLSIL) method addresses these problems by taking steady indoor objects as landmarks. Unlike many feature- or appearance-matching-based localization methods, our method utilizes highly abstracted landmark semantic information to represent locations and is thus invariant to illumination changes, temporal variations, and occlusions. We match consistently detected landmarks against the topological map based on their order of occurrence in the videos. The proposed approach contains two components: a convolutional neural network (CNN)-based landmark detector and a topological matching algorithm. The detector reliably and accurately detects landmarks. The matching algorithm is built on a second-order hidden Markov model and successfully handles environmental ambiguity by fusing semantic and connectivity information of the landmarks. To evaluate the method, we conduct extensive experiments on a real-world dataset collected in two indoor environments. The results show that our deep neural network-based indoor landmark detector accurately detects all landmarks and is expected to be usable in similar environments without retraining, and that VLSIL can effectively localize indoor landmarks.
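As a rough illustration of the matching idea summarized above (not the authors' released code), the sketch below decodes a sequence of CNN-detected landmark labels against a toy topological map using a second-order Viterbi search, with connectivity folded into the transition term. The function name, the toy map, and all probability choices are assumptions made for illustration only.

```python
import itertools
import math


def second_order_viterbi(observations, nodes, adjacency, emit_prob):
    """Most likely node sequence for a sequence of detected landmark labels.

    observations : list of landmark labels, e.g. output of a CNN landmark detector
    nodes        : list of topological map nodes (landmark locations)
    adjacency    : dict mapping each node to the set of nodes reachable from it
    emit_prob    : emit_prob(node, label) -> probability of observing `label` at `node`
    """
    assert len(observations) >= 3, "a second-order model needs at least three detections"

    def log_trans(prev2, prev1, cur):
        # Second-order transition fusing connectivity: the walker either stays at the
        # previous landmark or moves to one of its neighbours in the topological map.
        reachable = cur == prev1 or cur in adjacency.get(prev1, set())
        return math.log(1.0 if reachable else 1e-9)  # soft zero keeps decoding robust

    def log_emit(node, label):
        return math.log(emit_prob(node, label) + 1e-12)

    # delta[(a, b)]: best log-probability of any path ending in the state pair (a, b).
    delta = {(a, b): log_emit(a, observations[0]) + log_emit(b, observations[1])
             for a, b in itertools.product(nodes, nodes)}
    backpointers = []

    for label in observations[2:]:
        new_delta, back = {}, {}
        for b, c in itertools.product(nodes, nodes):
            best_a = max(nodes, key=lambda a: delta[(a, b)] + log_trans(a, b, c))
            back[(b, c)] = best_a
            new_delta[(b, c)] = (delta[(best_a, b)] + log_trans(best_a, b, c)
                                 + log_emit(c, label))
        delta = new_delta
        backpointers.append(back)

    # Backtrack from the best final state pair to recover the full node sequence.
    b, c = max(delta, key=delta.get)
    path = [b, c]
    for back in reversed(backpointers):
        path.insert(0, back[(path[0], path[1])])
    return path


# Hypothetical toy corridor with three landmark locations in a line.
nodes = ["door_A", "extinguisher", "door_B"]
adjacency = {"door_A": {"extinguisher"},
             "extinguisher": {"door_A", "door_B"},
             "door_B": {"extinguisher"}}
emit = lambda node, label: 0.8 if node == label else 0.1
print(second_order_viterbi(["door_A", "extinguisher", "extinguisher", "door_B"],
                           nodes, adjacency, emit))
```

In this sketch the connectivity of the topological map enters only through the transition term, while the detector's output enters through the emission term; the paper's actual model may weight and combine these differently.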
format Article
id nottingham-56196
institution University of Nottingham Malaysia Campus
institution_category Local University
language English
publishDate 2019
publisher MDPI
recordtype eprints
repository_type Digital Repository
citation Zhu, Jiasong, Li, Qing, Cao, Rui, Sun, Ke, Liu, Tao, Garibaldi, Jonathan, Li, Qingquan, Liu, Bozhi and Qiu, Guoping (2019) Indoor topological localization using a visual landmark sequence. Remote Sensing, 11 (1). p. 73. ISSN 2072-4292. doi:10.3390/rs11010073
published_online 2019-01-03, peer reviewed, open access (CC BY)
fulltext https://eprints.nottingham.ac.uk/56196/1/Indoor-topological-localization-using-a-visual-landmark-sequence2019Remote-SensingOpen-Access.pdf
publisher_url https://www.mdpi.com/2072-4292/11/1/73
topic visual landmark sequence; indoor topological localization; convolutional neural network (CNN); second-order hidden Markov model
url https://eprints.nottingham.ac.uk/56196/