A Context-Aware-Based Audio Guidance System for Blind People Using a Multimodal Profile Model
- Authors
- Lin, Qing; Han, Youngjoon
- Issue Date
- Oct-2014
- Publisher
- MDPI AG
- Keywords
- electronic mobility aids; sensor fusion; object detection; Bayesian network; context-aware guidance; multimodal information transformation
- Citation
- SENSORS, v.14, no.10, pp.18670 - 18700
- Journal Title
- SENSORS
- Volume
- 14
- Number
- 10
- Start Page
- 18670
- End Page
- 18700
- URI
- http://scholarworks.bwise.kr/ssu/handle/2018.sw.ssu/9931
- DOI
- 10.3390/s141018670
- ISSN
- 1424-8220
- Abstract
- A wearable guidance system is designed to provide context-dependent guidance messages to blind people while they traverse local pathways. The system is composed of three parts: moving scene analysis, walking context estimation, and audio message delivery. The combination of a downward-pointing laser scanner and a camera is used to solve the challenging problem of moving scene analysis. By integrating laser data profiles and image edge profiles, a multimodal profile model is constructed to jointly estimate the ground plane, object locations, and object types using a Bayesian network. The outputs of the moving scene analysis are further employed to estimate the walking context, which is defined as a fuzzy safety level inferred through a fuzzy logic model. Depending on the estimated walking context, the audio messages that best suit the current context are delivered to the user in a flexible manner. The proposed system is tested in various local pathway scenes, and the results confirm its effectiveness in assisting blind people to attain autonomous mobility.
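The fuzzy safety level described in the abstract can be illustrated with a minimal sketch: map the nearest-obstacle distance (as a laser profile would report it) onto triangular membership functions for "danger", "caution", and "safe", then defuzzify with a weighted average. The function names, membership breakpoints, and defuzzification rule below are illustrative assumptions, not the authors' actual model.

```python
# Hypothetical sketch of a fuzzy safety-level inference, loosely following
# the abstract's description. Breakpoints and level scores are assumptions.

def tri(x, a, b, c):
    """Triangular membership function rising on [a, b], falling on [b, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def safety_level(distance_m):
    """Fuzzy safety level in [0, 1] from the nearest-obstacle distance (m)."""
    danger = tri(distance_m, -1.0, 0.0, 1.5)   # obstacle very close
    caution = tri(distance_m, 0.5, 1.5, 3.0)   # obstacle at mid range
    safe = tri(distance_m, 2.0, 4.0, 6.0)      # path essentially clear
    total = danger + caution + safe
    if total == 0.0:
        return 1.0  # nothing detected within sensing range: treat as safe
    # Defuzzify: weighted average of level scores (0 = danger, 1 = safe).
    return (0.0 * danger + 0.5 * caution + 1.0 * safe) / total

levels = [safety_level(d) for d in (0.3, 1.5, 5.0)]  # [0.0, 0.5, 1.0]
```

The guidance layer could then pick an audio message per level band (e.g. an urgent warning below 0.3, a brief cue otherwise), which matches the abstract's idea of delivering messages that suit the current context.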
- Appears in Collections
- College of Information Technology > Department of Smart Systems Software > 1. Journal Articles