Assistive Navigation Systems using Minimalistic Sensor Data
Approach and Motivation
Whereas sighted people primarily use vision to navigate, individuals with Visual Impairments (VI) rely on compensatory senses (e.g., touch), resulting in reduced mobility. A number of navigation systems have been developed to address this issue. Outdoor systems rely on GPS, which is unavailable indoors. Indoor solutions often require physical augmentation of the environment, such as RFID tags. Although individual tags are cheap, they are typically embedded in floors at a density of several tags per square foot; while embedding tags in carpets is feasible, hallways and large open spaces often have tile or concrete floors, which makes installation prohibitively expensive. Alternatives employ sophisticated sensors, such as cameras or laser rangefinders, which are costly and may impede mobility due to their weight.
This work proposes a low-cost solution that requires no physical augmentation of the environment and depends only on affordable, lightweight sensors, such as a pedometer and a compass, that are already available on popular devices like smartphones. The approach interacts with the user through an audio/speech interface, providing directions in terms of landmarks recognizable by individuals with VI, such as doors, hallway intersections and floor transitions. The user confirms the presence of these landmarks along the provided path through an Android smartphone. This allows the system to track the user's location by combining the sensor readings, knowledge of the indoor environment and the user's landmark confirmations.
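At the core of this approach is dead reckoning from the pedometer and compass. A minimal sketch, assuming one compass reading per detected step and a fixed average stride length (the real system would calibrate stride per user):

```python
import math

def dead_reckoning(headings, step_length=0.5):
    """Integrate pedometer/compass readings into an (x, y) estimate.

    headings: one compass heading (radians) per detected step.
    step_length: assumed average stride in meters (a rough guess,
    not the system's actual calibration).
    """
    x = y = 0.0
    for heading in headings:
        x += step_length * math.cos(heading)
        y += step_length * math.sin(heading)
    return x, y

# Four steps heading east (0 rad), then four heading north (pi/2):
print(dead_reckoning([0.0] * 4 + [math.pi / 2] * 4))  # → (2.0, 2.0)
```

On its own, such an estimate drifts: every noisy step and heading reading compounds, which is why the landmark confirmations described above are needed.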
The following figures show the error between the user's actual position and the estimated position. The left figure plots the distance from ground truth at each step, while the right figure shows the distance from the final location for different paths and direction-provision methods.
In the figure above, the error of pure dead reckoning grows without bound, while our method reduces the error after every successful landmark confirmation.
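The bounding effect of landmark confirmations can be illustrated with a toy 1-D model (a deliberate simplification: it assumes a constant per-step drift, whereas real pedometer and compass noise is stochastic):

```python
def worst_error(n_steps, drift_per_step=0.125, confirm_every=None):
    """Worst-case 1-D position error (meters) when every step adds a
    fixed amount of drift. A landmark confirmation every
    `confirm_every` steps re-anchors the estimate at a known map
    coordinate, discarding the drift accumulated so far."""
    err = worst = 0.0
    for step in range(1, n_steps + 1):
        err += drift_per_step          # dead-reckoning drift accumulates
        worst = max(worst, err)
        if confirm_every and step % confirm_every == 0:
            err = 0.0                  # estimate snapped back to the landmark
    return worst

print(worst_error(500))                    # → 62.5 (grows without bound)
print(worst_error(500, confirm_every=20))  # → 2.5 (bounded by landmark spacing)
```

In the toy model, the uncorrected error scales with path length, while the corrected error is bounded by the spacing between confirmed landmarks, mirroring the behavior in the figure.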
In this figure, we compare six different methods for direction provision over five different paths. The methods divide into landmark-based and metric-based approaches: landmark-based instructions ask the user to confirm a sequence of landmarks, while metric-based instructions ask the user to walk a given number of steps. Each approach is tested at three levels of granularity: a 9-meter threshold, a 15-meter threshold, and no maximum threshold, meaning the user receives a new instruction every 9 meters, every 15 meters, or only upon reaching a significant landmark (e.g., a hallway intersection or water cooler), respectively. Each method's effectiveness is measured by how close the user gets to the goal. The experiments showed that the "Landmark, 15 meters" method yields the best results.
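The two families of instructions can be sketched as follows (a minimal illustration; the landmark list format and the instruction phrasing are hypothetical, not the system's actual map representation or speech output):

```python
def metric_instructions(path_length_m, threshold_m):
    """Metric-based provision: a new 'walk N meters' instruction
    every threshold_m meters along the path."""
    instructions, covered = [], 0.0
    while covered < path_length_m:
        leg = min(threshold_m, path_length_m - covered)
        instructions.append(f"Walk {leg:g} meters")
        covered += leg
    return instructions

def landmark_instructions(landmarks, threshold_m=None):
    """Landmark-based provision: ask the user to confirm the farthest
    landmark within threshold_m of the previous instruction point
    (or every landmark in turn if no threshold is given).
    landmarks: (name, distance_from_start_m) pairs in path order."""
    instructions, last, i = [], 0.0, 0
    while i < len(landmarks):
        j = i
        while (threshold_m and j + 1 < len(landmarks)
               and landmarks[j + 1][1] - last <= threshold_m):
            j += 1
        name, dist = landmarks[j]
        instructions.append(f"Walk to the {name} and confirm it")
        last, i = dist, j + 1
    return instructions

route = [("first door", 4), ("water cooler", 9),
         ("hallway intersection", 16), ("second door", 22)]
print(metric_instructions(22, 9))
print(landmark_instructions(route, threshold_m=15))
```

A larger threshold means fewer, longer instructions; the trade-off the experiments probe is between the cognitive load of frequent instructions and the drift accumulated between confirmations.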
- Apostolopoulos I, Fallah N, Folmer E, Bekris KE. 2012. Integrated Online Localization and Navigation for People with Visual Impairments using Smart Phones. IEEE International Conference on Robotics and Automation (ICRA).
- Fallah N, Apostolopoulos I, Bekris KE, Folmer E. 2012. The User as a Sensor: Navigating Users with Visual Impairments in Indoor Spaces using Tactile Landmarks. ACM SIGCHI Conference on Human Factors in Computing Systems (CHI).
- Apostolopoulos I, Fallah N, Folmer E, Bekris KE. 2010. Feasibility of Interactive Localization and Navigation of People with Visual Impairments. 11th IEEE Intelligent Autonomous Systems (IAS-10) Conference.
- Apostolopoulos I. 2011. Integrating Minimalistic Localization and Navigation for People with Visual Impairments (MS Thesis).