Investigating context-aware clues to assist navigation
for visually impaired people
Nicholas A. Bradley, Mark D. Dunlop
Department of Computer and Information Sciences
University of Strathclyde, Glasgow, Scotland, G1 1XH, UK
Tel: +44 (0) 141 552 4400, Fax: +44 (0) 141 552 5330
E-mail: {Nick.Bradley, Mark.Dunlop}@cis.strath.ac.uk
Abstract. It is estimated that 7.4 million people in Europe are visually
impaired [1]. Limitations of traditional mobility aids (i.e. white canes and
guide dogs) coupled with a proliferation of context-aware technologies (e.g.
Electronic Travel Aids, Global Positioning Systems and Geographical
Information Systems), have stimulated research and development into
navigational systems for the visually impaired. However, current research
appears very technology focused, which has led to an insufficient appreciation
of Human Computer Interaction, in particular task/requirements analysis and
notions of contextual interactions. The study reported here involved a small-
scale investigation into how visually impaired people interact with their
environmental context during micro-navigation (through the immediate
environment) and/or macro-navigation (through the distant environment) on foot.
The purpose was to demonstrate the heterogeneous nature of visually impaired
people in their interaction with their environmental context. Results from a previous
study involving sighted participants were used for comparison. Results
revealed that when describing a route, visually impaired people vary in their use
of different types of navigation clues – both as a group, when compared with
sighted participants, and as individuals. Usability implications and areas for
further work are identified and discussed.
1. Introduction
It is estimated that 7.4 million people in Europe are visually impaired [1]. For many, known destinations along familiar routes can be reached with the aid of white canes or guide dogs. By contrast, for new or unknown destinations along unfamiliar routes (which may change dynamically), the limitations of these aids become apparent [2, 3, 4] (e.g. white canes are ineffective for detecting obstacles beyond 3-6 feet). Further, Petrie [5] describes how these mobility aids are only useful for assisting visually impaired people through the immediate environment (termed micro-navigation), but do not assist the traveller in more distant environments (termed macro-navigation).
With the proliferation of context-aware research and development, Electronic Travel Aids (ETAs) such as obstacle avoidance systems (e.g. the Laser Cane and ultrasonic obstacle avoiders [6]) have been developed to assist visually impaired travellers with micro-navigation, whereas Global Positioning Systems (GPS) and Geographical Information Systems (GIS) have been, and are being, developed for macro-navigation (e.g. the MOBIC Travel Aid [4], the Arkenstone system [7] and the Personal Guidance System [8]). Golledge
et al. [8] describe how these technologies have the potential to ‘enrich the visually impaired traveller's knowledge of the environment or to give them the knowledge capability typically obtained by sighted travellers using a map or glancing around'.
However, despite recent technological advancements, there is still considerable
scope for Human Computer Interaction (HCI) research. Previous work has predominantly focused on developing technologies and testing their functionality [e.g. 3, 9, 10] as opposed to utilising HCI principles (e.g. Task Analysis) to actively assess the impact on the user. For instance, Dodson
et al. [2] make the assumption that ‘since a blind human is the intended navigator a
speech user-interface is used to implement this'. In contrast, Franklin [11] illustrates the difficulties of interpreting spatial relations from common speech (natural language), and Strothotte
et al. [4] stipulate that many visually impaired people express concerns about using headphones for speech output, as vital environmental sounds may be blocked out.
In order to capture human issues associated with usability, Zetie [12] illustrates the need to understand the notion of contextual interactions. Dey & Abowd [13] state that context can ‘increase the richness of communication in human-computer interaction making it possible to produce more useful computational services'. However, despite the contextual complexity of a visually impaired traveller interacting with various mobility aids (i.e. navigational system and guide dog/white cane), existing research has failed to fully address the
interaction of contextual components and how usability is influenced. Further, as more contextual sources are used to identify and discover a user's context, it is becoming increasingly important that information is managed appropriately and displayed in a way that is tailored to the visually impaired traveller's task, situation and environment. For instance, Sabelman
et al. [14] describe how using other senses, like the smell of a bookstore or restaurant, would be beneficial for orientating in a new place. Further, Golledge
et al. [8] discuss how existing travel databases do not provide information that would be of use to visually impaired people (such as road widths, differences in road texture, etc.). Several levels of detail should also be available in a realistic range of situations [4, 8].
This study involved a small-scale investigation into how visually impaired people
interact with their environmental context during micro- and/or macro-based navigation on foot. Our previous study primarily focussed on sighted participants' environmental interactions [15] and one of the intentions here was to undertake a comparison study. The main purpose of this study was to demonstrate through a series of interviews the heterogeneous nature of visually impaired people in interaction with their environmental context. The study hypothesis is that
visually impaired people will vary individually and collectively (in comparison to sighted participants) in their use of environmental context during micro- and/or macro-based navigation. It is anticipated that the results will (i) facilitate the research and development of context-aware navigational systems for the visually impaired, and (ii) promote the value of actively involving principles of HCI and context throughout all stages of development.
2. Methodology
In order to facilitate comparisons of data between our previous study [15] and this study, the structure of the interview remained the same. However, some questions were tailored to the requirements of visually impaired people. This study also involved a smaller sample of participants, as the intention was more speculative (i.e. raising issues for consideration) rather than formulating conclusive usability design recommendations. Six participants (3 males and 3 females) between the ages of 36 and 65 were located and interviewed via the Glasgow & West of Scotland Society for the Blind, 2 Queens Crescent, Glasgow, UK. All participants were resident in Greater Glasgow and their professions ranged from a BBC reporter to a retired minister of religion. Participants' vision ranged from light perception only to total blindness. Four have been visually impaired since birth and two have been blind for 16 and 32 years respectively.
The interview study was recorded in full and comprised three parts.
1. Pre-interview questionnaire: Information on participants' personal details,
familiarity with Glasgow centre and knowledge of context-aware computing.
2. Interview: The main interview required participants to describe verbally (as if speaking to another visually impaired person) how to reach two destinations on foot. Participants were asked to select well-known destinations (approx. 10 minutes' walk) and suitable starting points.
3. Post-interview questionnaire: Information on participants' opinions on the
importance of different types of contextual information for route navigation, design issues relating to usability and their mobile needs/requirements.
Similar to previous analysis techniques [15] (which were in line with methods used for verbal protocol analysis [16]), participants' descriptions from part 2 were subjectively categorised into nine types of contextual information: directional (e.g. left/right, north/south); structural (e.g. road, monument, church); textual-structural based (e.g. Border's bookshop, Greave Sports); textual-area/street based (e.g. Sauchiehall St., George Sq.); environmental (e.g. hill, river, tree); numerical (e.g. first, second, 100m); descriptive (e.g. steep, tall); temporal/distance based (e.g. walk until you reach… or just before you get to…); and sensory (olfaction/hearing/touch) (e.g. the sound of go-kart engines while passing the ScotKart Centre, or smelling hops near a brewery). However, for this study a further two categories were added: motion (e.g. cars passing, doors opening) and social contact (e.g. asking people or using a guide dog for help). Accumulated scores were calculated each time a participant mentioned a word/phrase relating to one of the listed contextual categories.
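A minimal sketch of this scoring scheme follows, assuming simple substring matching over transcripts. The keyword lists are illustrative stand-ins of our own (the study's actual coding was subjective and applied by hand); the category names follow the list above.

from collections import Counter

# Illustrative keyword/phrase lists for the eleven contextual
# categories; these are stand-ins, not the study's coding vocabulary.
CATEGORY_KEYWORDS = {
    "directional": ["left", "right", "north", "south"],
    "structural": ["road", "monument", "church"],
    "textual-structural": ["border's bookshop", "greave sports"],
    "textual-area/street": ["sauchiehall st", "george sq"],
    "environmental": ["hill", "river", "tree"],
    "numerical": ["first", "second", "100m"],
    "descriptive": ["steep", "tall"],
    "temporal/distance": ["walk until", "just before"],
    "sensory": ["sound of", "smell of"],
    "motion": ["cars passing", "doors opening"],
    "social contact": ["ask someone", "guide dog"],
}

def score_transcript(transcript: str) -> Counter:
    """Accumulate one point per occurrence of a category word/phrase."""
    text = transcript.lower()
    scores = Counter()
    for category, phrases in CATEGORY_KEYWORDS.items():
        for phrase in phrases:
            scores[category] += text.count(phrase)
    return scores

# Invented fragment of a route description:
print(score_transcript(
    "Turn left at the church, walk until you reach the steep hill."))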
3. Results
The results of part 2 are illustrated in Figures 1 and 2. Six of the eight participants aged between 36 and 65 were randomly selected from the previous study [15] to form the sighted participants' results.
Fig. 1. The average number of utterances used within each contextual category by sighted and visually impaired participants.
1 Sighted participants' recordings were re-assessed to identify whether any words/phrases relating to the two additional categories were mentioned.
Fig. 2. The average number of contextual categories used per participant within sighted and
visually impaired groups.
The key findings from Figures 1 & 2 are as follows:
• Visually impaired participants on average used over 3 times more directional
information, over 7 times more structural & environmental information, 6 times more numerical information (with additional types, such as using degrees for
heading direction), almost 9 times more descriptive information and over 2 times more temporal/distance based information than sighted participants.
• No words/phrases relating to the sensory, motion or social contact contextual
categories were used by sighted participants.
• Sighted participants on average used over twice as much textual-structural information and almost 50% more textual-area/street based information than visually impaired participants.
• Visually impaired participants mentioned words/phrases within a greater number of contextual categories on average (9.75) than sighted participants (6.33); the arithmetic behind these group comparisons is sketched below.
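The sketch below uses invented per-participant counts (the paper reports only the derived averages and ratios) to illustrate how the figures above would be computed.

from statistics import mean

# Hypothetical per-participant utterance counts for one category
# (directional); six participants per group. Invented numbers.
visually_impaired = [14, 11, 16, 9, 13, 12]
sighted = [4, 3, 5, 4, 4, 4]

ratio = mean(visually_impaired) / mean(sighted)
print(f"directional information: {ratio:.1f}x more")  # ~3.1x here

# Figure 2's measure: the number of distinct contextual categories
# each participant used, averaged within a group (also invented).
categories_used = [10, 9, 10, 11, 9, 10]
print(f"average categories used: {mean(categories_used):.2f}")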
Part 3 of the interview revealed the following issues:
• Many expressed limitations of guide dogs and white canes. One participant found using a guide dog difficult within busy environments. Further, guide dogs become tired and are also less effective when navigating to unfamiliar destinations. White canes can become tiring (due to the repetitive sweeping motion involved) and also require specialist training from mobility or rehabilitation officers.
• All participants regarded sensory information as paramount for navigation, though many stated that each type (i.e. hearing, smell, touch) provided additional confirmation for orientation/navigation, so relying solely on one type would be impossible. Audio clues included (i) the sound of hospital machinery, (ii) the squeaking of doors opening, (iii) the sound of escalators and ATMs, and (iv) the sound of wind exiting a tunnel. Olfaction clues in the environment include the smell of bakeries, pet shops, chemists, newsagents, chip shops, etc. Lastly, the sense of touch is used to sense the sun's location for orientation, differences in ground texture (e.g. concrete paving and metal drainage grilles), the edges of buildings, etc.
• Other types of information desired included (i) the width of roads, (ii) whether the
edge of the pavement was a down or up curb, and (iii) the number of crossings before a left/right turn.
Table 1 shows participants' opinions on the most appropriate methods of presenting contextual information for their needs.
Table 1: Participants' opinions on how contextual information should be presented.

Methods for presenting information                        % Participants
Non-speech output, speech output and vibration alerts
Non-speech and speech output
As shown in Table 1, the most popular method of presenting contextual information is a combination of non-speech and speech output with vibration alerts. However, 50% (3) of the participants thought using earphones might mask or distort important environmental clues used for navigation and avoiding hazards. Additional comments provided are as follows:
• The most prevalent problem experienced is unexpected, non-fixed or temporary features in the environment (e.g. temporary road signs, road sweepers, people, lampposts, overhanging branches/baskets, excavation work, etc.). These are more difficult to detect and present the greatest hazards for journeys on foot.
• Navigational context-aware applications should have level meters (e.g. beginner/intermediate/advanced) for controlling the detail of navigational instructions. Participants described how visually impaired people may use routes frequently, thereby requiring less information for future trips; this would minimise feelings of obtrusiveness (see the sketch below).
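A speculative sketch of this suggestion, combined with the presentation preferences from Table 1: a per-user profile selects output modalities, and a level meter trims contextual detail as routes become familiar. All names, levels and route data are our own illustration; the paper does not specify an implementation.

from dataclasses import dataclass, field

@dataclass
class UserProfile:
    # Output modalities the user has opted into (cf. Table 1).
    modalities: set = field(
        default_factory=lambda: {"speech", "non-speech", "vibration"})
    # Level meter: beginners receive the most contextual detail.
    level: str = "beginner"

# Rank levels by how much detail they receive.
DETAIL = {"beginner": 3, "intermediate": 2, "advanced": 1}

# Each instruction is tagged with the least-detailed level that
# should still receive it ("advanced" = essential for everyone).
ROUTE = [
    ("Turn left onto Sauchiehall St.", "advanced"),
    ("The pavement has a down curb after the crossing.", "intermediate"),
    ("You will smell a bakery on your right.", "beginner"),
]

def instructions_for(profile):
    """Yield only the instructions detailed enough for the user's level."""
    for text, min_level in ROUTE:
        if DETAIL[profile.level] >= DETAIL[min_level]:
            yield text

user = UserProfile(level="intermediate")
for line in instructions_for(user):
    print(line)  # to be rendered via the user's chosen modalities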
4. Discussion & conclusions
The small sample of participants precludes drawing firm conclusions or generalisations from the study. However, the results do reveal many usability issues that need resolving before navigational systems can seamlessly enrich the visually impaired traveller's knowledge of the environment to a level similar to that of sighted people.
The results do support the original hypothesis that visually impaired participants
will vary individually and collectively in their use of environmental context during micro- and/or macro-based navigation. Each participant's contextual descriptions were unique, which indicates support for allowing users to customise information for their own needs. Although explanations for this trend were outwith the scope of this study, differing types of visual impairment (resulting in different contextual interactions) and length of time impaired/blind may be causal factors. For instance, someone blind since birth may rely more on olfactory and auditory environmental information than someone who has restricted peripheral vision as a result of glaucoma. Further investigation of these HCI/usability issues is required.
There were also major differences between visually impaired and sighted
participants. The greater use of information relating to the directional, structural, environmental, numerical and descriptive contextual categories suggests that visually impaired people require more detailed information for micro-navigation (as a result of no/limited visual information). This also explains why visually impaired participants used information within the additional categories of sensory, motion and social contact (making them more contextually dependent). It is worth observing that most current navigation systems, which are designed for sighted users, are based heavily around giving directional, numerical and textual information and give very little (if any) structural or descriptive information. Furthermore, based upon the results, it appears that sighted participants rely more on macro-navigational cues, such as more distant landmarks like the names of buildings and streets (textual-structural and textual-area/street based information).
The individual and collective differences reported here strongly support more research into understanding contextual interactions. HCI methods/models/frameworks need to be utilised to identify which contextual interactions are relevant and how temporal changes can influence usability. For instance, in line with Strothotte et al. [4], one participant described how the user should be able to control the level of contextual detail (user customisation) in order to account for users memorising frequently visited routes (temporal changes). Further, there were preference differences in output presentation styles (i.e. non-speech output, vibration alerts, etc.). Lastly, although the developers of many navigational systems explain that traditional mobility aids are still required for micro-navigation [9], there is little or no consideration of how these mobility aids are to be integrated into a compatible unit.
In order to realize the potential of context-aware navigational systems for the
visually impaired, more HCI work is required to understand the unique requirements
and contextual interactions associated with different types of visual impairment. The pace at which technology progresses needs to be matched by a suitable analysis of human factors. The next stage of our work involves designing a multi-category mobile navigation tool for controlled experiments involving visually impaired people, while developing a model of contextual interactions that encompasses a multidisciplinary perspective.
References
1. European Blind Union. (2002). Statistical Data on blind and partially sighted people in
European countries. http://www.euroblind.org/fichiersGB/STAT.htm
2. Dodson, A.H.; Moore, T. & Moon, G.V. (1999). A Navigation System for the Blind
Pedestrian, Proceedings of GNSS 99, 3rd European Symposium on Global Navigation Satellite Systems, p 513-518, Genoa, Italy, October 1999.
3. Shoval, S.; Ulrich, I. & Borenstein, J. (2000). Computerized Obstacle Avoidance Systems for the Blind and Visually Impaired. Invited chapter in "Intelligent Systems and Technologies in Rehabilitation Engineering". Editors: Teodorescu, H.N.L. & Jain, L.C., CRC Press, ISBN 0849301408, p. 414-448.
4. Strothotte, T.; Fritz, S.; Michel, R.; Raab, A.; Petrie, H.; Johnson, V.; Reichert, L. & Schalt,
A. (1996). Development of Dialogue Systems for the Mobility Aid for Blind People: Initial Design and Usability Testing. Proceedings of ASSETS '96, Vancouver, British Columbia, Canada, p. 139-144.
5. Petrie, H. (1995). User requirements for a GPS-based travel aid for blind people. In J.M.
Gill and H. Petrie (Eds.), Proceedings of the Conference on Orientation and Navigation Systems for Blind Persons, Hatfield, UK. 1-2 February. London: Royal National Institute for the Blind.
6. Brabyn, J.A. (1985). A review of mobility aids and means of assessment. In D.H. Warren
& E.R. Strelow (Eds.), Electronic spatial sensing for the blind. Boston: Martinus Nijhoff. p.13-27.
7. Fruchterman, J. (1995). Arkenstone's orientation tools: Atlas Speaks and Strider. In J.M.
Gill and H. Petrie (Eds.), Proceedings of the Conference on Orientation and Navigation Systems for Blind Persons, Hatfield, UK. 1-2 February. London: Royal National Institute for the Blind.
8. Golledge, R.G.; Klatzky, R.L.; Loomis, J.M.;Speigle, J. & Tietz, J. (1998). A geographical
information system for a GPS based personal guidance system. International Journal of Geographical Information Science, Vol. 12, No. 7, 727-749.
9. Loomis, J.M.; Golledge, R.G. & Klatzky, R.L. (1998). Navigation System for the Blind: Auditory Display Modes and Guidance. Presence: Teleoperators and Virtual Environments, Vol. 7, No. 2, April 1998.
10. Helal, A.S.; Moore, S.E. & Ramachandran, B. (2001). Drishti: An Integrated Navigation System for Visually Impaired and Disabled. Proceedings of the 5th International Symposium on Wearable Computers, October 2001, Zurich, Switzerland.
11. Franklin, N. (1995). Language as a means of constructing and conveying cognitive maps. In The construction of cognitive maps, edited by J. Portugali (Dordrecht: Kluwer Academic Publishers), p. 275-295.
12. Zetie, C. (2002). Unwired Express website: Market Overview - The Emerging Context-
Aware Software Market. http://www.unwiredexpress.com
13. Dey, A.K. & Abowd, G.D. (2000). Towards a Better Understanding of Context and Context-Awareness. Proc. CHI 2000 Workshop on The What, Who, Where, When, and How of Context-Awareness, The Hague, Netherlands, April 2000.
14. Sabelman, E.E.; Burgar, C.G.; Curtis, G.E.; Goodrich, G.; Jaffe, D.L.; Mckinley, J.L.; Van
Der Loos, M. & Apple, L.G. (1994). Personal navigation and wayfinding for individuals with a range of disabilities. Project report: Device development and evaluation. http://guide.stanford.edu/Publications/dev3.html
15. Bradley, N.A. & Dunlop, M.D. (2002). Understanding contextual interactions to design navigational context-aware applications. Proceedings of Mobile HCI 02, Pisa, Italy, September 2002, in press.
16. Bainbridge, L. (1991). Verbal Protocol Analysis. In Wilson, J.R. & Corlett, E.N. (Eds.),
Evaluation of Human Work: A practical Ergonomics Methodology. London: Taylor and Francis, 161-179.