Speech Perception and Language Acquisition in Hearing-Impaired Children

Final Report Summary - HEARING LANGUAGE (Speech perception and language acquisition in hearing-impaired children)

This project investigated how adults and children with hearing impairment process and acquire the sound structure of their native language, and how this differs from the normal-hearing population.

In the first part of this project, we examined how children dealt with word form variation in connected speech. Specifically, we investigated whether they would compensate for subtle sound changes that are permitted in their native language. We concentrated on the phenomenon of assimilation. In English, for instance, the alveolar sound at the end of a word like te(n) can adapt to a following labial or velar sound, so that the word is pronounced te(m) in the sequence 'te(m) pounds'. French has a similar rule that changes voiceless sounds like (t) into voiced sounds like (d) and vice versa.

We used a preferential looking procedure to monitor normal-hearing infants' eye gaze towards pictures while they heard assimilated word forms. We found that, at the precocious age of two years, French infants already compensate for their native rule. In contrast, English infants of the same age do not compensate for the French rule, suggesting that the ability to cope with assimilation is already influenced by language-specific processes at 2 years of age (Skoruppa et al., in revision).

We also tested 4-to-8-year-old English-learning children with hearing impairment on native language assimilations in a computerised picture-pointing task. We found that, unlike age-matched controls and younger normal-hearing children, they exhibited reduced sensitivity to subtle acoustic differences (e.g. (n) vs. (m)) and did not compensate for assimilation. However, a group of better performers with hearing aids showed good perception and signs of compensation for assimilation with easier sound contrasts (like (t) vs. (p)), suggesting that, if the signal they receive is good enough, deaf children's phonological abilities can attain the same level of sophistication as those of normal-hearing children (Skoruppa & Rosen, submitted).

In the second part of this project, we examined which sorts of acoustic cues listeners can use to divide the speech stream into word units in different circumstances. We focused on their use of the lax vowel constraint, that is, the fact that English words cannot end in a lax vowel like the short 'o' in 'pot'. Using a partial non-word repetition task, we found that English adult listeners can use this fact to divide trisyllabic nonsense strings (like 'cheygupoy') into two words ('chey' + 'gupoy') in quiet. In noisy conditions and in vocoded speech (simulating the input provided by a cochlear implant), however, this ability is reduced (Skoruppa, Nevins and Rosen, in preparation). Furthermore, we found evidence suggesting that the lax vowel constraint is acquired very early during typical language development, since normal-hearing 9-month-old infants already seem to be sensitive to the difference between words conforming to and words violating the constraint.

In summary, this project provides detailed insights into the phonological processing abilities of adults and children with hearing impairment, with potential implications for the development of new devices and rehabilitation methods.