
Test, Predict, and Improve Musical Scene Perception of Hearing-Impaired Listeners

Enhancing music perception for individuals with hearing loss

Acoustics research has long shown that hearing-impaired listeners have great difficulty understanding speech in noisy environments. Yet little is known about the effects of hearing loss on music perception.

How does music sound to someone with hearing loss? A seemingly effortless process for a normal-hearing audience can become a strenuous, even disturbing, experience for a hearing-impaired one. The human auditory system organises musical information according to the principles of auditory scene analysis, the mechanism by which sound is structured into perceptually meaningful elements. Disentangling simultaneous streams of sound, however, is a real challenge for hearing-impaired individuals. The EU-funded TIMPANI project modelled the effects of hearing loss on musical auditory scene analysis and developed compensatory music processing strategies for hearing aids. Project coordinator Kai Siedenburg explains the rationale behind this research: “Beyond the basic instinct of protecting one’s ears against unduly loud sounds, the fact that the musical experience itself can be altered by hearing impairment is not widely debated, even though it affects many people around the world. The same holds for the music sciences and the field of music psychology in particular, where hearing impairment has remained an underdeveloped topic at the outskirts of the field. However, neglecting hearing loss is not sustainable.” The research was undertaken with the support of the Marie Skłodowska-Curie programme.

Listening to words vs melodies

Irrespective of the type and cause of hearing loss, hearing can usually be improved by hearing aids: small wearable devices that pick up sound, convert it into electrical or digital signals and deliver an amplified, processed signal to the ear. “Current hearing aids are optimised for speech perception. However, music signals are much more diverse compared to speech signals. In terms of the frequency range, music makes use of a much greater effective frequency range, extending roughly from 20 to 12 000 Hz, compared to speech, which has an effective frequency range of about 100 to 4 000 Hz. In terms of dynamic range (contributing to differences in loudness), concert music can be very soft and very loud (up to 120 decibels (dB)) compared to speech, which usually inhabits a relatively small range around 65 dB sound pressure level,” Siedenburg elaborates. “Current hearing aids usually have dedicated music programmes, but these music programmes have not been shown to significantly improve musical sound quality. In scientific surveys, hearing aid users reportedly complain about a lack of clarity of music through hearing aids.”
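
For readers unfamiliar with the decibel scale, the short Python sketch below (purely illustrative, not part of the TIMPANI project; the helper name spl_to_pressure is hypothetical) converts the sound pressure levels quoted above into linear sound pressures, making the gap between typical speech and concert peaks concrete.

    # Illustrative only: convert the dB SPL figures quoted above into linear
    # sound pressures, using SPL = 20 * log10(p / p0), i.e. p = p0 * 10^(SPL / 20).
    P0 = 20e-6  # reference sound pressure in pascals (approximate threshold of hearing)

    def spl_to_pressure(spl_db):
        """Convert a sound pressure level in dB SPL to pressure in pascals."""
        return P0 * 10 ** (spl_db / 20)

    speech = spl_to_pressure(65)    # typical conversational speech level
    concert = spl_to_pressure(120)  # loud concert peaks

    print(f"Speech  (~65 dB SPL): {speech:.3f} Pa")
    print(f"Concert (120 dB SPL): {concert:.1f} Pa")
    print(f"Concert peaks carry roughly {concert / speech:.0f} times the sound pressure of speech")

The roughly 55 dB gap corresponds to sound pressures more than 500 times higher than conversational speech, which gives a sense of how much wider the dynamic range is that music processing must handle.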

Adding music to the toolkit for inclusivity

Among the notable findings TIMPANI produced is that age-related hearing loss is associated with poorer musical scene analysis abilities. These findings subsequently informed proposals for designing hearing aid algorithms tailored to music. Including hearing-impaired individuals in the cultural resource of music listening and music making is a multifaceted endeavour. “My goal is to put hearing impairment on the agenda of music psychology. When it comes to technology, we wish to advance hearing devices by developing new strategies for music processing that can be adapted for people with a hearing impairment,” adds Siedenburg. An overview of the fellow’s groundbreaking work can be found on his webpage.

Keywords

TIMPANI, music, hearing loss, sound, speech, hearing aids, hearing-impaired, hearing impairment, auditory scene analysis, music psychology
