CORDIS - EU research results
Content archived on 2024-05-29

Investigation of speech perception processes in the context of the development of a stimulation and acquisition system designed for electrophysiological and psychophysical cognitive neuroscience studies

Final Activity Report Summary - ERP TOOL DEVELOPMENT (Investigation of speech perception processes in the context of the development of a stimulation and acquisition system for neuroscience studies)

Deafness and auditory rehabilitation through cochlear implants entail a particular organisation of the auditory and visual modalities involved in speech perception. The implant enables the learning of relationships between auditory and visual information. In order to understand the cooperation between the two modalities when the auditory and visual information are redundant or conflicting, we carried out a behavioural study based on the McGurk paradigm in congenitally profoundly deaf children fitted with a cochlear implant.

When confronted with discrepant auditory and visual speech tokens, participants often report hearing a percept that does not correspond to the auditory information but integrates features from the visual input (for example, an auditory /bi/ dubbed onto a visual /gi/ gives rise to the percept /di/). This illusion, first reported by McGurk and MacDonald (1976) and generally referred to as the McGurk effect, indicates that the perceptual system makes use of the visual information even when the auditory signal is clear and unambiguous. The McGurk paradigm therefore provides a way to assess the weight assigned to each modality in speech perception.

In the present work, 15 children implanted early (before 3 years of age; mean age 8½ years) and 19 children implanted late (after 4 years of age; mean age 15 years) were presented with four syllables (/bi/, /gi/, /pi/ and /ki/) in auditory-alone, visual-alone, audiovisual congruent and audiovisual incongruent (A/bi/ V/gi/, A/gi/ V/bi/, A/pi/ V/ki/, A/ki/ V/pi/) conditions, and were asked to indicate, among several alternatives, the syllable corresponding to what they heard (or lipread, in the visual-alone condition).
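For concreteness, the stimulus set described above can be enumerated programmatically; the sketch below is illustrative only (the condition and variable names are ours, not from the original protocol), using the four syllables and the four incongruent pairings reported in the summary.

```python
# Syllables used in the study.
syllables = ["bi", "gi", "pi", "ki"]

# Incongruent audiovisual pairings reported in the summary: (auditory, visual).
incongruent = [("bi", "gi"), ("gi", "bi"), ("pi", "ki"), ("ki", "pi")]

# Each trial type as (condition, auditory token, visual token);
# None marks an absent modality.
trials = (
    [("auditory", s, None) for s in syllables]
    + [("visual", None, s) for s in syllables]
    + [("av_congruent", s, s) for s in syllables]
    + [("av_incongruent", a, v) for a, v in incongruent]
)

print(len(trials))  # 16 unique stimulus types (4 per condition)
```

This makes explicit that the design crosses four syllables with four presentation conditions, yielding 16 distinct stimulus types.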

The results indicate that, in the auditory-alone condition, the 'early' group showed slightly better identification performance than the 'late' group (55 % versus 47 % correct responses). In the visual-alone condition, the reverse pattern was observed (51 % versus 63 % correct responses). In the congruent audiovisual condition, both groups benefited from the convergence of the two modalities: performance improved by 15 % for the 'early' group and by 29 % for the 'late' group, relative to the auditory-alone condition. In the incongruent audiovisual condition, responses were largely dominated by the visual modality in both groups: about 43 % of the responses were purely visual and 35 % corresponded to the expected illusion. Whereas control children give a large proportion (around 80 %) of auditory responses, such responses were rarely observed in implanted children (12 % for the 'early' group and 4 % for the 'late' group).
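The audiovisual gains above can be recombined with the auditory-alone scores to estimate congruent audiovisual performance; a minimal sketch, assuming the reported gains are percentage points added to the auditory-alone score (the summary does not state this explicitly):

```python
# Percent correct responses reported in the summary, by group and condition.
results = {
    "early": {"auditory": 55, "visual": 51, "av_congruent_gain": 15},
    "late":  {"auditory": 47, "visual": 63, "av_congruent_gain": 29},
}

def av_congruent_score(group: str) -> int:
    """Estimated percent correct in the congruent audiovisual condition:
    auditory-alone score plus the reported audiovisual gain."""
    g = results[group]
    return g["auditory"] + g["av_congruent_gain"]

for group in results:
    print(group, av_congruent_score(group))
```

Under this reading, the 'late' group ends up with the higher congruent audiovisual score despite its lower auditory-alone performance, which is consistent with the larger lipreading benefit reported for that group.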

In summary, the present results suggest that early-implanted children make greater use of the auditory modality in speech perception (they are better in the auditory-alone condition and give more 'auditory' responses in the incongruent audiovisual condition). However, lipreading appears more efficient in late-implanted children: they are better in the visual-alone condition and, more importantly, they benefit more from lipreading in the congruent audiovisual condition, the situation closest to real life.