

Blind people are generally more sensitive to echoes than sighted people and often have improved echolocation abilities. For those who have lost their sight, information about the position of silent obstacles can be obtained from echoes of self-generated sounds, or from electronic sensory substitution device (SSD) travel aids, which convey information about the surrounding space through an intact modality such as audition or touch. Performance on an obstacle circumvention task was assessed under conditions of visual, auditory-only (using echolocation), and tactile (using an SSD) guidance. Ten normally sighted participants, 8 blind non-echolocators, and 1 blind expert echolocator navigated around a 0.6 × 2 m obstacle whose position varied across trials: at the participant's midline or 25 cm to the right or left. A Vicon motion capture system was used to measure movement kinematics objectively. Although visual guidance was the most effective, participants successfully circumvented the obstacle in the majority of trials under auditory or SSD guidance. Using audition, blind non-echolocators navigated more effectively than blindfolded sighted individuals, with fewer collisions, shorter movement times, fewer velocity corrections, and greater obstacle detection ranges. The blind expert echolocator performed similarly to or better than the other groups using audition, but comparably to the other groups using the SSD. The generally better performance of blind than of sighted participants is consistent with the perceptual enhancement hypothesis that individuals with severe visual deficits develop improved auditory abilities to compensate for visual loss, shown here by faster, more fluid, and more accurate navigation around obstacles using sound.
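
The kinematic measures named above (movement times, velocity corrections) are reported but not defined in the abstract. Purely as an illustrative sketch, and not the authors' actual analysis pipeline, the following shows one plausible way such measures could be derived from sampled motion-capture positions; the function name, sampling interval, and thresholds are all assumptions.

```python
import numpy as np

def kinematic_measures(positions, dt=0.01, move_threshold=0.05):
    """Illustrative only: derive simple kinematic measures from a
    motion-capture trajectory sampled at 1/dt Hz.

    positions: array of shape (n_samples, 2) in metres (x, y).
    Returns (movement_time_s, n_velocity_corrections).
    """
    positions = np.asarray(positions, dtype=float)
    velocity = np.gradient(positions, dt, axis=0)   # per-axis velocity, m/s
    speed = np.linalg.norm(velocity, axis=1)        # scalar speed, m/s

    # Movement time: span between the first and last samples whose speed
    # exceeds an assumed start/stop threshold.
    moving = np.flatnonzero(speed > move_threshold)
    movement_time = (moving[-1] - moving[0]) * dt if moving.size else 0.0

    # Velocity corrections: local minima in the speed profile, i.e. points
    # where the walker slows down and then speeds up again.
    ds = np.diff(speed)
    corrections = int(np.sum((ds[:-1] < 0) & (ds[1:] > 0)))

    return movement_time, corrections
```

In practice the speed profile would typically be low-pass filtered before counting minima, since measurement noise would otherwise inflate the correction count.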

Vision and touch play critical roles in spatial development, facilitating the acquisition of allocentric and egocentric frames of reference, respectively. Previous work has shown that children's ability to adopt an allocentric frame of reference may be impaired by the absence of visual experience during development. In the current work, we investigated whether visual deprivation also impairs the ability to shift from an egocentric to an allocentric frame of reference in a switching-perspective task performed in the visual and haptic domains. Children with and without visual impairments, aged 6 to 13 years, were asked to explore and reproduce a spatial configuration of coins either visually (sighted children only) or haptically (blindfolded sighted children and blind children), adopting either an egocentric or an allocentric perspective. Results indicated that temporary visual deprivation impaired the ability of blindfolded sighted children to switch from an egocentric to an allocentric perspective more in the haptic domain than in the visual domain. Moreover, results for visually impaired children indicated that blindness did not impair allocentric spatial coding in the haptic domain but rather affected the ability to rely on haptic egocentric cues in the switching-perspective task. Finally, our findings suggest that the total absence of vision might impair the development of an egocentric perspective for targets that cross the body midline.
