Prof. Amir Amedi of the Hebrew University of Jerusalem introduces our readers to some recently developed devices that help blind and visually impaired people compensate for their loss of sight by exploiting the sensory modalities that remain intact. In particular, he describes systems based on the auditory pathway, which have developed significantly in recent years thanks to advances in technology and cognitive neuroscience.
Sensory Substitution Devices
It is commonly believed that the visual cerebral cortex, deprived of visual stimuli in early childhood, will never be able to adequately develop its functional specialisation, making the recovery of vision in later life almost impossible.
Scientists from the Hebrew University of Jerusalem and the Institute of Vision in Paris have, however, succeeded in demonstrating that blind individuals can actually 'see', describe objects and even identify letters and words using sensory substitution devices.
Among these devices, known as visual-to-auditory sensory substitution devices (SSDs), the best known is The vOICe, developed in 1992 by engineer Peter Meijer of the Philips research laboratory in Eindhoven, the Netherlands. It works by means of software that converts the video image captured by a camera into sound (4).
The sound representation of the image obeys precise rules: the scene is scanned from left to right; the pitch of the sound encodes the vertical position of a feature in the image; and its loudness reflects the brightness of the object. Other image attributes, such as colour and distance, are not yet handled by the system at its current stage of development.
Perception of the image represented in this way is not immediate, but with training this type of sound message makes it possible to build a mental representation of the shape, structure and location of nearby objects, and to interact with them.
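The mapping described above (left-to-right scan, pitch for vertical position, loudness for brightness) can be sketched in a few lines of code. This is only an illustrative toy, not The vOICe's actual algorithm: the function name, frequency range and scan duration are assumptions chosen for readability.

```python
import numpy as np

def sonify(image, duration=1.0, sample_rate=8000, f_lo=200.0, f_hi=2000.0):
    """Render a grayscale image (rows x cols, brightness 0..1) as a mono
    waveform, scanning its columns from left to right.

    Each column becomes a short chord: one sinusoid per pixel, with
    frequency set by the pixel's row (top of the image = highest pitch)
    and amplitude set by the pixel's brightness.
    """
    rows, cols = image.shape
    samples_per_col = int(duration * sample_rate / cols)
    t = np.arange(samples_per_col) / sample_rate
    # Row 0 (top of the image) maps to the highest frequency.
    freqs = np.linspace(f_hi, f_lo, rows)
    chunks = []
    for c in range(cols):
        col = image[:, c]
        # Sum one brightness-weighted sinusoid per pixel in this column.
        chord = (col[:, None] * np.sin(2 * np.pi * freqs[:, None] * t)).sum(axis=0)
        chunks.append(chord / rows)  # normalise so loudness stays bounded
    return np.concatenate(chunks)

# A 4x4 image with a single bright pixel in the top-left corner produces
# a high-pitched tone at the start of the scan, then silence.
img = np.zeros((4, 4))
img[0, 0] = 1.0
wave = sonify(img)
```

In a real system the resulting waveform would be played through stereo headphones once per camera frame; here it is simply returned as an array.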
SSDs do not, strictly speaking, restore the lost sensory modality, but they provide the means to develop and use a new integrative function.
The structures that ensure this new function of integrating sound into a mental image are, at least partially, located at the occipital level, i.e. in the tissue normally dedicated to visual perception.
Intermodal plasticity
In healthy subjects, the occipital cortex mainly receives visual afferents. However, through integrative processes, stimuli from other sensory modalities, such as hearing and touch, can also interact there.
In cases of visual impairment, the occipital cortex increases its involvement in processing the information provided by the other senses. There is reason to believe that this functional reorganisation occurs in two stages. The first consists of the activation or strengthening of pre-existing neural connections. If the condition persists, a second phase begins, with the development of new neural structures.
This process of intermodal plasticity explains how the mechanisms that integrate the sound stimuli produced by an SSD come to be implemented within the specialised structures of the visual cortex.
One example is the involvement of the fusiform gyrus which, together with the lateral occipital complex, is essentially dedicated to the visual recognition of faces. A functional MRI study revealed that this region is activated when a blind subject tries to recognise a face from the sound representation produced by the SSD. By contrast, the fusiform gyrus is not activated by a sound signal that identifies a person in a non-spatial way, such as their voice (1).
Reading with sounds: SSD and VWFA
The team of researchers, led by Prof. Amir Amedi and Ella Striem-Amit, showed in a study published in November 2012 that with the help of Sensory Substitution Devices (SSD) congenitally blind individuals can learn to read and recognise complex images.
To use a visual-to-auditory SSD, whether in a hospital or in everyday life, it is enough to wear a miniature video camera connected to a small computer (or smartphone) and stereo headphones.
The images are converted into 'soundscapes' (sounds that 'translate' the images topographically) through an algorithm that allows the user to hear and then interpret the visual information coming from the camera. Blind subjects using this device achieve a level of visual acuity that technically exceeds the WHO's (World Health Organisation) universal criterion for blindness, as published in a previous study by the same group.
The resulting vision, although unconventional in that it does not involve the activation of the body's ocular apparatus, is nevertheless 'visual' in the sense that it actually activates the brain's visual identification network.
The study shows that, by following a dedicated but relatively short (70-hour) training protocol developed by the Amedi lab, blind individuals could readily use SSDs to classify images into object categories, such as faces, houses, body shapes, everyday objects and backgrounds.
They can also identify more complex everyday objects, locate the position of people in space, identify facial expressions and even read letters and words (for demos, movies and more information see http://brain.huji.ac.il/).
These unprecedented results are reported in the article published in the November issue of the prestigious neuroscience journal Neuron.
The Hebrew University study went even further, testing what happens in the brain when blind individuals learn to see with sounds. Specifically, the research team tested the ability of this high acuity vision to activate the 'dormant' visual cortex of blind individuals, even if they were taught to process visual images through sounds only in adulthood.
Prof. Amedi and Ella Striem-Amit used functional magnetic resonance imaging (fMRI) to measure the neuronal activity of individuals blind from birth while they were 'seeing', via the SSD, high-resolution images of letters, faces, houses, everyday objects and body shapes. Surprisingly, not only was their visual cortex activated by sounds, but it displayed the category selectivity that characterises the normally developing brain of sighted individuals.
A specific part of the brain, known as the VWFA (Visual Word Form Area) and first identified in sighted individuals by Professors Laurent Cohen and Stanislas Dehaene of the Pitié-Salpêtrière Hospital (INSERM-CEA, France), is normally highly selective.
In sighted people, the VWFA plays a role in reading and is activated by seeing and reading letters more than by any other visual category of object. Surprisingly, the same selectivity was detected in this area in individuals without sight: after only 10 hours of training with the SSD, their VWFA showed greater activation for letters than for the other visual categories tested.
In fact, the VWFA proved so plastic that, in one of the study participants, increased activation for SSD-presented letters was demonstrated after less than two hours of training.
'The adult brain is more flexible than we thought,' says Prof. Amedi. In fact, this and other research carried out by various groups has shown that many brain areas are not specific to their sensory input (sight, hearing or touch), but rather to their function, that is, the type of processing they perform, which can be accomplished in various ways (this work is summarised in a recent review by Prof. Amedi's research group, published in the journal Current Opinion in Neurology (3)).
It seems, therefore, that in blind individuals brain areas can potentially be 'reawakened' to the properties and functions needed for visual perception, even after years of blindness, and perhaps even in those blind from birth, provided the appropriate technologies and training are used, as Prof. Amedi stated.
These findings provide hope that an input reintroduced into the visual centres of a blind person's brain can potentially restore vision, and that SSDs can be useful for visual rehabilitation.
As Prof. Amedi tells us: "SSDs could help congenitally blind individuals, or those with visual impairments, learn to process complex images, as was done in this study, or they could be used as sensory interpreters that provide high-resolution, synchronous and supportive input to a visual signal coming from an external device such as a bionic eye."
Amir Amedi
Institute for Medical Research Israel-
Canada (IMRIC) and The Edmond and
Lily Safra Center for Brain Sciences (ELSC)
Hebrew University of Jerusalem, Israel
Bibliography
1) Amedi A et al. (2007), Nat Neurosci 10, 687-9.
2) Auvray M et al. (2007), Perception 36, 416-30.
3) Reich L et al. (2012), Curr Opin Neurol 25, 86-95.
4) Meijer PBL (1992), IEEE Trans Biomed Eng 39, 112-21.
Dr. Carmelo Chines
Editor-in-chief