Abstract
We present a novel image-based indoor localisation and navigation application framework for augmented reality devices, specifically Android-based wearable devices. Our framework provides robust localisation and navigation features in an unknown indoor environment, using the AR device as the visual sensor. We use Google Glass (GG) to conduct experiments and demonstrate a proof of concept. GG's limited computational power is overcome by performing all computations at the back-end; GG serves only as a sensing and display device that communicates with the back-end. We use visual odometry and inertial sensors for dynamic map creation, and SURF [1] with Bag-of-Visual-Words (BoVW) [14] aggregation for image matching to detect loop closures at revisited landmarks in the scene. The application framework generates a topological map at run-time and displays it on the GG screen. Potential use cases of our work include virtual tours in industrial settings, museums and art galleries, and research facilities.
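The paper does not publish code, but the SURF + BoVW loop-closure step the abstract describes follows a standard pattern. The sketch below illustrates that pattern using OpenCV's contrib module (SURF lives in `cv2.xfeatures2d` and requires an opencv-contrib build); function names such as `find_loop_closure`, the vocabulary size, and the similarity threshold are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of SURF + Bag-of-Visual-Words loop-closure detection.
# Assumes opencv-contrib-python (SURF is patented and not in core OpenCV).
import cv2
import numpy as np

surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)

def train_vocabulary(images, vocab_size=200):
    """Cluster SURF descriptors from training frames into a visual vocabulary."""
    trainer = cv2.BOWKMeansTrainer(vocab_size)
    for img in images:
        _, desc = surf.detectAndCompute(img, None)
        if desc is not None:
            trainer.add(desc)
    return trainer.cluster()  # vocab_size x 64 matrix of cluster centres

def make_bovw_extractor(vocabulary):
    """Build an extractor that maps a frame to a normalised BoVW histogram."""
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    extractor = cv2.BOWImgDescriptorExtractor(surf, matcher)
    extractor.setVocabulary(vocabulary)
    return extractor

def bovw_histogram(extractor, img):
    """Aggregate one frame's SURF descriptors into a single BoVW histogram."""
    keypoints = surf.detect(img, None)
    return extractor.compute(img, keypoints)  # shape (1, vocab_size) or None

def find_loop_closure(extractor, new_frame, landmark_hists, threshold=0.8):
    """Return the index of a previously visited landmark, or None.

    Compares the new frame's BoVW histogram against stored landmark
    histograms by cosine similarity; a high score suggests a revisit.
    """
    h = bovw_histogram(extractor, new_frame)
    if h is None:
        return None
    for i, old in enumerate(landmark_hists):
        sim = float(np.dot(h, old.T) /
                    (np.linalg.norm(h) * np.linalg.norm(old)))
        if sim > threshold:
            return i  # revisited landmark: close the loop in the topological map
    return None
```

Aggregating each frame's variable-size set of SURF descriptors into one fixed-length BoVW histogram is what makes run-time loop closure cheap: each comparison is a single vector similarity rather than pairwise descriptor matching, which suits a back-end serving a thin client such as GG.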
| Original language | English |
| --- | --- |
| Journal | IEEE ISMAR |
| Publication status | Published - 2016 |