CORDIS - EU research results

Mental representations of and adaptation to the speech signal transmitted via cochlear implants: How the impoverished signal finds its way to the mental lexicon

Final Report Summary - MARCI (Mental representations of and adaptation to the speech signal transmitted via cochlear implants: How the impoverished signal finds its way to the mental lexicon)

This project investigated the mechanisms underlying speech perception in cochlear implant (CI) users, within the framework of psycholinguistic models of speech perception originally developed for normal-hearing (NH) listeners. CIs are auditory prostheses that enable post-lingually deafened individuals to hear and understand speech again. These individuals, however, need to adapt to new sensory information, because electric hearing differs from acoustic hearing: many cues are missing from the speech signal transmitted through direct stimulation of the auditory nerve. Although speech understanding is successfully restored for most CI users, these inherent degradations and the resulting ambiguities make the processing of speech an effortful task. In our project we applied objective measures in specific paradigms that capture the course of speech processing in real time, and combined these with measures of effort, i.e. the increase in attentional processing during speech perception. Our eye-tracking studies are, to our knowledge, the first to combine measures of the time course of lexical access with pupillometry as a measure of effort. Using acoustic CI simulations in NH listeners, we mapped the time course of resolving ambiguities in the speech signal at the level of words and sentences, and identified the processing stages that recruit additional attentional resources. Further, we identified which mechanisms of speech perception change as a function of processing degraded signals, and observed how the processing of such signals adapts through long-term exposure in experienced CI users. The results did not always come out as expected, indicating a complex relationship between signal degradations and the mechanisms of speech perception.

In the current state of the art in CI research there is a call for a better understanding of how the workings of the prosthesis affect listeners' perceptual mechanisms. The goal is to adapt the devices and hearing rehabilitation to the needs of a brain that underwent reorganization due to sensory deprivation during periods of deafness, and that after implantation needs to re-adapt to the processing of degraded speech signals. The great deal of individual variation in the success of this adaptation has been partly explained in terms of the etiological and surgical consequences of deafness and implantation, and it likely also depends on individual cognitive capacities. How individual stages of speech processing change as a result of altered sensory stimulation, however, has so far received too little investigation, likely because such research requires expertise from several different fields. This project fills that gap.

Our findings show that signals acoustically simulating speech transmitted by a CI not only put a strain on listening, but also obscure the accessibility of acoustic information that is present in the signal, and delay lexical access as well as the ability to integrate semantic information within a sentence. Through adaptation, experienced CI listeners reweight their sensitivity towards acoustic cues that are reliably transmitted in the otherwise impoverished signal, but their lexical processing remains slower and more prolonged than that of NH listeners. One of the main conclusions of our project is that the processing of degraded speech shows a different pattern of increase and decrease in the use of attentional resources. NH listeners appear to bind acoustic features within the time constraints of sensory auditory memory, a process that involves fast selective attention and grants them instant lexical access, after which their attentional resources are freed for the processing of subsequent events. No such release of attentional resources is found when processing degraded speech, which has consequences for the demands placed on listeners' working memory and for their ability to process sentences predictively.

Our results have implications for our understanding of the active, though unconscious, perceptual processes involved in speech comprehension. Our conclusions address previously understudied questions in CI research, and have implications for further research and for the development of test protocols that capture listeners' success in hearing rehabilitation, including device development, possibly shifting the focus from judging mere speech intelligibility towards objective measures of the effort involved in processing speech. This work is also important to a general audience, and most importantly to CI users: it raises awareness that speech comprehension appears to be an effortless task only because long-term exposure to speech signals adjusts the regulation of the attentional resources involved, whereas this is not yet the case for new CI users, and perhaps not for vulnerable populations such as children and the elderly.