Final Report Summary - SEEING WITH SOUNDS (Neural and behavioural correlates of 'seeing' without visual input using auditory-to-visual sensory substitution in blind and sighted: a combined fMRI-TMS study)
In the visual system, two parallel processing streams exist: the 'dorsal stream' appears to be primarily concerned with analysing the spatial aspects of visual scenes and with visually guided hand movements, while the 'ventral stream' is focused on processing information related to the identification of objects and faces (Ungerleider and Mishkin, 1982). Using the vOICe visual-to-auditory sensory substitution device (SSD), which transforms visual images taken by a webcam into sounds that preserve this visual information, we aimed to: (1) understand more fully the neural basis of visual-to-auditory transformation in sighted and blind individuals for object recognition and localisation; (2) improve sight restoration efforts using SSDs, and the vOICe specifically, by: (a) making it more useful as a stand-alone SSD for blind individuals; and (b) optimising its utility for neuro-ophthalmology rehabilitation in sight restoration settings. This includes developing better training schemes for using the vOICe to achieve better performance on 'visual' functions, and improving the device based on input from our research. During the project, we devised a training programme that teaches congenitally blind individuals (and sighted controls) to view and interpret pictures and streaming visual information transformed into sounds via the vOICe SSD, which was developed by Peter Meijer. Participants are trained in a standardised manner to recognise the shapes of items of growing visual complexity, from simple lines to line drawings and on to real-life objects, people and environments and their locations in space. For this training programme, we also developed tactile feedback for static images and verbal explanations of online visual experiences for the blind participants, who are gradually taught the basic visual principles of two- and three-dimensional visual percepts.
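The general principle behind a vOICe-style visual-to-auditory transformation is well documented: the image is scanned column by column from left to right, each pixel row maps to a sine tone (higher rows sound higher in pitch), and pixel brightness controls loudness. The sketch below illustrates only that general principle; the actual vOICe implementation differs in detail, and all function names, default parameter values, and the per-column phase reset here are illustrative assumptions, not the device's specification.

```python
import numpy as np

def image_to_sound(image, duration=1.0, sr=22050, f_lo=500.0, f_hi=5000.0):
    """Minimal sketch of a vOICe-style image-to-sound mapping.

    The image is scanned column by column, left to right, over
    `duration` seconds. Each row maps to a sine tone (top rows ->
    higher frequencies), and pixel brightness sets tone amplitude.
    Unlike the real device, phase restarts at every column here.
    """
    img = np.asarray(image, dtype=float)
    n_rows, n_cols = img.shape
    # Top row -> highest frequency, bottom row -> lowest.
    freqs = np.linspace(f_hi, f_lo, n_rows)
    samples_per_col = int(duration * sr / n_cols)
    t = np.arange(samples_per_col) / sr
    segments = []
    for c in range(n_cols):
        col = img[:, c]  # brightness per row in this column
        # Superpose one sinusoid per row, weighted by brightness.
        tones = np.sin(2 * np.pi * freqs[:, None] * t) * col[:, None]
        segments.append(tones.sum(axis=0))
    wave = np.concatenate(segments)
    peak = np.abs(wave).max()
    return wave / peak if peak > 0 else wave
```

Under this mapping, a single bright pixel in the top-left corner of an otherwise dark image would produce a brief high-pitched tone at the start of the scan, followed by silence.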
Furthermore, special attention is given to orientation and navigation in complex environments and to improving 'eye'-hand coordination using a mobile version of the SSD. All of our blind participants achieved impressive results and were able to recognise people (for example, see a video of a blind person identifying facial expressions), objects and their locations, even within complex real-life scenarios. We measured visual acuity to verify the level of detail perceivable by the blind using the SSD, and found not only that the blind could perceive a higher acuity than is possible with the most advanced contemporary retinal prostheses, but that most blind participants could pass the World Health Organisation (WHO) blindness threshold (Striem-Amit et al., in press). In addition, we have recently started to develop new, efficient SSD-training methods which can be distributed online to the blind community. We therefore find that, following supervised training, the vOICe and other novel SSDs may be useful as stand-alone SSDs for blind individuals, providing high-detail functional vision at very low cost and in a non-invasive manner. This may benefit millions of blind individuals worldwide, the majority of whom live in developing countries (WHO, 2011), for whom cost is an especially crucial factor. The careful construction of this training paradigm also allowed us to study the neural networks involved in audio-visual transformations in the sighted and blind brains, and the plasticity generated by exercising such transformations, using neuroimaging before, during and at the end of training. We have shown that both sighted and blind individuals appear to utilise their highly specialised visual-system architecture (as well as other parts of the brain involved in the relevant neural networks; for shape processing, see Striem-Amit et al., 2011b) for analysing visual-to-auditory transformations.
We observe a clear differentiation between a ventral-stream preference for object recognition and a dorsal-stream preference for localisation in both groups (Striem-Amit et al., 2011c). The blind show additional recruitment of the visual cortex, and even activated early ventral visual cortex for the processing of shape using sounds, suggesting that cross-modal plasticity in the blind is organised according to visual functional task-selectivity. Furthermore, using another, older form of sensory substitution, Braille reading (which converts letter symbols into touch information rather than audition), we have shown not only that the large-scale functional division of labour between the streams is retained in the congenitally blind, but that a highly specific reading area in the ventral visual stream, the visual word-form area (VWFA), is activated in the blind when they read using touch (Reich et al., 2011), without any visual experience, mirroring the two-stream segregation we found. These findings have far-reaching implications for the view of the brain as organised according to input sensory modalities, suggesting instead that its organisation is based on sensory-modality-invariant task-selectivity. The ability to recruit the occipital cortex for visual perception in adult blind individuals also has far-reaching academic and clinical implications for sight restoration. Sight restoration following years of blindness, and particularly after early-onset blindness, may only be possible by teaching the blind brain to process the novel percepts, as the development of the visual system may critically depend upon visual input for normal functional development. Demonstrating that visual training can achieve visual-cortex recruitment for visual processing may therefore greatly promote sight restoration. Sight restoration can thus be pursued both via SSDs used as stand-alone sensory aids, and by training people undergoing artificial retina transplants or receiving visual neuroprostheses in the future.
Specifically, we suggest the use of an optimised combination of SSD and retinal prosthesis (Reich et al., 2012), which can serve as a 'sensory interpreter' post sight-restoration, helping to teach the cortex how to see after prolonged blindness, or for visual perception augmentation. More information on the project can be found on the lab website at http://brain.huji.ac.il/press

References:
(1) WHO (2011). Fact Sheet No 282.
(2) Reich, L., Maidenbaum, S., and Amedi, A. (2012). The brain as a flexible task-machine: implications for visual rehabilitation using non-invasive vs. invasive approaches. Current Opinion in Neurology 25, 86-95.
(3) Reich, L., Szwed, M., Cohen, L., and Amedi, A. (2011). A ventral visual stream reading center independent of visual experience. Current Biology 21, 363-368.
(4) Striem-Amit, E., Bubic, A., and Amedi, A. (2011a). Neurophysiological mechanisms underlying plastic changes and rehabilitation following sensory loss in blindness and deafness. In Frontiers in the Neural Bases of Multisensory Processes, M.M. Murray and M.T. Wallace, eds. (Oxford, UK: Taylor and Francis).
(5) Striem-Amit, E., Dakwar, O., Hertz, U., Meijer, P., Stern, W., Merabet, L., Pascual-Leone, A., and Amedi, A. (2011b). The neural network of sensory-substitution object shape recognition. Functional Neurology, Rehabilitation, and Ergonomics 1, 271-278.
(6) Striem-Amit, E., Dakwar, O., Reich, L., and Amedi, A. (2011c). The large-scale organization of 'visual' streams emerges without visual experience. Cerebral Cortex.
(7) Striem-Amit, E., Guendelman, M., and Amedi, A. (in press). 'Visual' acuity of the congenitally blind using visual-to-auditory sensory substitution. PLoS ONE.
(8) Ungerleider, L.G., and Mishkin, M. (1982). Two cortical visual systems. In Analysis of Visual Behavior, D.J. Ingle, M.A. Goodale, and R.J.W. Mansfield, eds. (Boston: MIT Press), pp. 549-586.