
Robust Speech Encoding in Impaired Hearing

Periodic Reporting for period 4 - RobSpear (Robust Speech Encoding in Impaired Hearing)

Reporting period: 2021-04-01 to 2022-03-31

The prevalence of hearing impairment amongst the elderly is a stunning 33%, while the younger generation is increasingly exposed to noise-induced hearing loss through loud urban life and lifestyle. Yet, hearing impairment is inadequately diagnosed and treated because we fail to understand how the components that constitute a hearing loss impact robust speech encoding. In 2009, a ground-breaking discovery demonstrated that the most sensitive structures of the cochlea are the auditory-nerve fibers which synapse onto the inner hair cells. Until then, it was believed that damaged outer hair cells were the dominant source of sensorineural hearing loss, and that diagnosis through a standard clinical audiogram sufficiently characterized listening difficulties among the hearing impaired. This new type of sensorineural hearing loss - cochlear synaptopathy (or cochlear neuropathy) - can occur after ageing, noise exposure or ototoxic drugs and permanently degrades the quality with which audible sound can be processed in challenging listening backgrounds (such as noisy restaurants). Because synaptopathy occurs before outer hair cells are damaged, its prevalence among the ageing and noise-exposed population is expected to be high, and much higher than predicted by clinically abnormal audiograms. Synaptopathy thus poses a challenge to understanding how sensorineural hearing loss results in reduced speech perception, because (i) it could hitherto only be quantified using post-mortem histology techniques, and (ii) synaptopathy and outer-hair-cell deficits have different functional consequences for sound encoding, while state-of-the-art hearing-aid algorithms do not account for synaptopathy in their fitting strategies.

RobSpear aimed to (i) develop non-invasive methods that quantify synaptopathy in humans and that can, in the future, be adopted in clinical hearing diagnostics. The hearing profile we developed quantifies both the outer-hair-cell and synaptopathy aspects of sensorineural hearing loss using auditory-EEG-based methods to yield an individualized hearing-loss profile. The profile is much more specific than present clinical practice and can be used to select the best-matching hearing-loss treatment. By means of a combined computational-modelling, EEG and sound-perception approach (ii), we furthermore studied how synaptopathy affects the robust coding of sound and speech in noisy listening scenarios and concluded that it is necessary to consider synaptopathy in future treatments because its functional consequences for processing fast temporal fluctuations in sound (as found in speech) are substantial in older listeners even when their audiograms are normal. This implies that a much larger group of older listeners with self-reported hearing difficulties, but with clinically normal audiograms, might become candidates for hearing-loss treatments, which works towards the WHO goals of early diagnosis and treatment as a cost-effective measure to reduce the societal burden of hearing impairment. In a last step (iii), the RobSpear project adopted the hearing-loss profile from (i) to yield individualized computational models of auditory processing that can be used as front-ends for individualized hearing-loss algorithms that compensate for the synaptopathy and outer-hair-cell-loss aspects of sensorineural hearing loss. We achieved a breakthrough in closed-loop hearing-aid algorithm design through the unique combination of fully backpropagating neural-network implementations of individualized auditory models with our novel, auditory-physiology-based methods for sensorineural hearing-loss quantification. For the first time, we developed and tested synaptopathy-specific sound-processing algorithms that can extend the current range of hearing-aid algorithms and can be incorporated within future hearables to help improve robust speech encoding in impaired hearing.
We simultaneously progressed on three topic areas: (i) developing brainstem-EEG methods to diagnose and quantify synaptopathy in humans, (ii) understanding the relative weight of synaptopathy and outer-hair-cell deficits in degrading sound and speech perception after hearing damage, and (iii) developing a model- and machine-learning-based framework to design individualized hearing-restoration algorithms which compensate acoustically for the synaptopathy and outer-hair-cell-loss aspects of sensorineural hearing damage.

(i) We developed auditory stimuli that quantify synaptopathy non-invasively by conducting numerical simulations with a signal-processing model of human auditory processing that simulates brainstem EEG responses and their changes due to either outer-hair-cell loss or synaptopathy. We identified two stimulation paradigms, one based on the derived-band envelope-following response (Keshishzadeh et al., 2019) and another based on square-wave, i.e. rectangular-amplitude-modulated (RAM), envelope-following responses (Vasilkov et al., 2021). We validated the quality of our stimuli and EEG markers of synaptopathy experimentally in listeners with normal audiograms, impaired audiograms, tinnitus or self-reported hearing difficulties (Keshishzadeh et al., 2019; Vasilkov et al., 2021; Verhulst et al., 2022). We filed a PCT application that describes the optimal stimulation and EEG-analysis method (WO2021156465A1, with claims assessed as novel and inventive) and were granted an ERC Proof-of-Concept project in which we further optimized the testing procedure for future clinical use: we optimized the stimulation paradigms and test duration to yield a fast and robust individual quantification of synaptopathy and tested the method (“the CochSyn test”) on a large cohort of patients suspected to have age-related or ototoxicity-induced synaptopathy to build a clinical reference dataset. This dataset shows that significant age-induced synaptopathy can occur from the age of 45-50 even though the standard pure-tone audiogram remains within the clinically normal-hearing range, demonstrating the need to diagnose and treat synaptopathy as an early indicator of sensorineural hearing damage. At the same time, we validated that our method is sensitive to kainic-acid-induced synaptopathy in an animal model in a collaboration with Rochester University (Garrett et al., 2019 preprint) and set up a collaboration with Harvard Medical School, which included our marker in its study of hearing disorders in clinically normal-hearing study participants (Mepani et al., 2021). The PCT application and the results of the ERC PoC project were used to apply for a 2021 EIC Transition grant, “EarDiTech”. The diagnostic project aims were concluded with three methods that allow for calibrating the sensorineural hearing-loss parameters in individual signal-processing models of auditory processing (Keshishzadeh et al., 2021a,b; Buran et al., 2021) that are used in (ii) and (iii).
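
To illustrate the square-wave (RAM) paradigm, the minimal Python sketch below generates a rectangular-amplitude-modulated tone of the kind used to evoke envelope-following responses; the carrier frequency, modulation rate, duty cycle and level scaling are illustrative placeholders, not the calibrated settings of the CochSyn test.

import numpy as np

def ram_tone(fs=48000, dur=0.5, fc=4000.0, fm=120.0, duty=0.25, level_db=70.0):
    """Rectangular-amplitude-modulated (RAM) tone: a pure-tone carrier gated
    by a square-wave envelope, as used to evoke envelope-following responses.
    All parameter values here are illustrative, not the published settings."""
    t = np.arange(int(fs * dur)) / fs
    carrier = np.sin(2 * np.pi * fc * t)
    # Square-wave envelope with the requested duty cycle (1 = on, 0 = off)
    envelope = ((t * fm) % 1.0 < duty).astype(float)
    stim = carrier * envelope
    # Scale to a nominal level relative to an arbitrary full-scale reference
    stim *= 10 ** ((level_db - 100.0) / 20.0) / np.max(np.abs(stim))
    return stim

stimulus = ram_tone()  # one 0.5-s RAM tone for an EFR recording epoch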

(ii) Because we (uniquely) include listeners with and without outer-hair-cell loss in our studies of the role of cochlear synaptopathy in degraded sound perception, we were able to show that synaptopathy is far more detrimental than outer-hair-cell loss to the perceptual cues necessary to perform two basic auditory perception tasks, namely amplitude-modulation detection and tone-in-noise detection (Verhulst et al., Acta Acustica, 2018; Osses et al., ICA, 2019). We used a model-based approach that includes both synaptopathy and OHC-loss simulations to reach this conclusion, and our results imply that it is not outer-hair-cell loss, but rather synaptopathy (which co-exists with outer-hair-cell deficits), that degrades the temporal precision with which audible sound is processed. Furthermore, our three additional studies extrapolate this finding to the high-pass portions of speech-in-noise encoding (Garrett et al., in prep.; Mepani et al., 2021; Verhulst et al., 2022), providing the first evidence that synaptopathy is important for speech encoding and is reflected in brainstem EEG metrics as well as in sound perception. In tinnitus patients we furthermore found that their high-frequency hearing deficits can be treated in the same way as those of non-tinnitus patients, but that they have better low-frequency hearing-in-noise capabilities than non-tinnitus patients of the same age (Verhulst et al., 2022). To better understand whether the loss of high-frequency outer hair cells (OHCs) or the synaptopathy aspect is more detrimental to sound encoding, we studied the relationship between speech intelligibility and speech-evoked FFRs (Wartenberg et al., 2022 preprint) as well as our synaptopathy-sensitive RAM-evoked EFRs (Verhulst et al., 2022; Drakopoulos et al., 2022, preprint and in review). Because synaptopathy precedes OHC damage in the progression of age-related sensorineural hearing loss, and both pathologies tend to be present in our older study participants, we need to examine the success of treating one or the other pathology to determine which aspect is more functionally relevant for speech coding. In Drakopoulos et al. (2022, preprint and in review), we designed audio-signal-processing methods that compensate for the functional loss created by synaptopathy and show that this type of processing improves perceptual amplitude-modulation detection and temporal-envelope processing as assessed using the EFR. Though promising, the peripheral auditory coding improvements following our treatment did not always yield improved speech intelligibility in our tested cohort, and this aspect requires further investigation to offer a suitable and effective individualized treatment to all patients. We are furthering this project in a follow-up ERA-NET Neuron consortium that the PI of this project initiated and now coordinates.
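
As a rough illustration of how an EFR-based marker can be derived from recorded EEG, the sketch below sums the spectral magnitude at the modulation frequency and its first harmonics across epoch-averaged data; the epoch handling, sampling rate and number of harmonics are assumptions for illustration only, not the exact analysis pipeline of the cited studies.

import numpy as np

def efr_magnitude(eeg_epochs, fs=16384.0, fm=120.0, n_harmonics=4):
    """Crude EFR marker: average the epochs, take the magnitude spectrum and
    sum the components at the modulation frequency fm and its harmonics.
    eeg_epochs: array of shape (n_epochs, n_samples), single EEG channel."""
    avg = eeg_epochs.mean(axis=0)                 # epoch-averaged response
    spectrum = np.abs(np.fft.rfft(avg)) / len(avg)
    freqs = np.fft.rfftfreq(len(avg), d=1.0 / fs)
    total = 0.0
    for k in range(1, n_harmonics + 1):
        idx = np.argmin(np.abs(freqs - k * fm))   # nearest spectral bin
        total += spectrum[idx]
    return total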

(iii) We first set up a closed-loop framework to enable the development of model- and machine-learning-based individualized hearing algorithms. Using the hearing-loss profile from (i), we developed three methods that extract individualized model parameters of synaptopathy and outer-hair-cell loss based on objective auditory measures (Keshishzadeh et al., 2021a,b; Buran et al., 2021). These individualized models are then placed in an optimization loop in which we optimize the signal processing applied to the audio, such that the transformed audio at the output of the hearing-impaired model matches that of a reference normal-hearing model. By presenting this transformed audio to a hearing-impaired patient, we aim to restore their peripheral auditory processing and speech intelligibility to that of normal-hearing listeners. On the one hand, we used simulated brainstem speech signals from the computational model to “manually” devise appropriate signal-processing strategies (Drakopoulos et al., 2022, preprint and in review); on the other hand, we developed a machine-learning-based method that allowed us to backpropagate through the system while minimizing a loss term at the level of the cochlea and the brainstem (Drakopoulos et al., 2022 ICASSP). To enable backpropagation, we first developed a neural-network approximation of our normal-hearing computational model (CoNNear; Baby et al., 2021; Drakopoulos et al., 2021) that captures all relevant aspects of cochlear nonlinearities, coupling and signal phase and that can be computed in real time. We showed that our method offers a 2900x speed-up for cochlear processing and can successfully be used for backpropagation while offering the same functional capabilities as analytical transmission-line and Hodgkin-Huxley-type neuronal models. We filed two PCT applications related to CoNNear and the closed-loop systems (WO2020249532 and WO2021198438A1) and received positive search reports. We performed an objective evaluation of our novel type of machine-learning-based audio-signal-processing methods (Drakopoulos et al., in prep.) and we piloted a final experiment in which we test the entire framework on individuals from whom we first extract a hearing-loss profile, apply the framework, and evaluate whether the developed algorithms yield a perceptual improvement in speech intelligibility in noisy listening scenarios. Our findings formed the basis of a follow-up grant application, “Machine Hearing 2.0”, that was funded by the Flemish Research Council to further explore how bio-inspired audio-signal processing can yield more noise-robust automatic hearing systems (e.g. automatic speech recognition, noise reduction or sound-source localisation for robotics applications).
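
A minimal sketch of the closed-loop idea, assuming PyTorch and small hypothetical stand-ins for the normal-hearing and individualized hearing-impaired auditory models: a trainable processing network is optimized so that the hearing-impaired model applied to the processed audio matches the normal-hearing model applied to the original audio. The module definitions, loss and optimizer settings are illustrative, not the published architecture.

import torch
import torch.nn as nn

# Hypothetical stand-ins for pretrained differentiable auditory models; in the
# actual framework these are the normal-hearing and hearing-impaired models.
normal_hearing = nn.Sequential(nn.Conv1d(1, 16, 9, padding=4), nn.Tanh())
impaired_hearing = nn.Sequential(nn.Conv1d(1, 16, 9, padding=4), nn.Tanh())
for p in list(normal_hearing.parameters()) + list(impaired_hearing.parameters()):
    p.requires_grad = False  # the auditory models stay fixed during training

# Trainable hearing-restoration processing applied to the audio
processor = nn.Sequential(nn.Conv1d(1, 32, 33, padding=16), nn.Tanh(),
                          nn.Conv1d(32, 1, 33, padding=16))
optimizer = torch.optim.Adam(processor.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

def training_step(audio_batch):
    """audio_batch: (batch, 1, samples). Minimize the distance between the
    impaired model's response to processed audio and the normal-hearing
    model's response to the unprocessed audio."""
    target = normal_hearing(audio_batch)
    output = impaired_hearing(processor(audio_batch))
    loss = loss_fn(output, target)
    optimizer.zero_grad()
    loss.backward()   # gradients flow through the frozen impaired model
    optimizer.step()
    return loss.item()
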
Because this project adopts a computational-modelling approach that incorporates direct physiological evidence (Kujawa and Liberman, 2019; Wu et al., 2018) into the model framework to predict how synaptopathy impacts human auditory brainstem EEG signals and sound perception, we were well positioned to achieve a breakthrough in the hearing-research domain: EEG-based diagnostic metrics that can isolate the synaptopathy aspect of sensorineural hearing loss, as well as acoustic signal-processing methods that, for the first time, compensate for the synaptopathy aspect of sensorineural hearing impairment. While several labs are still adopting purely experimental approaches in humans to select promising stimuli for a non-invasive diagnosis of synaptopathy (e.g. Bharadwaj et al., 2015; Wojtczak et al., 2017; Bramhall et al., 2019), our model-based approach is faster, more specific and yielded two different stimulus sets (Keshishzadeh et al., 2020; Vasilkov et al., 2021) that can be used for this purpose. Several research labs are now using our open-source diagnostic stimuli under an academic license (Tübingen, Harvard, Montpellier), and the positive feedback on our WO2021156465A1 PCT application as well as the successful ERC Proof-of-Concept and EIC Transition projects reflect the leap we made in precision hearing diagnostics of synaptopathy. A particular advantage of our multi-disciplinary approach is that we also developed a method that uses a short experimental diagnostic procedure to calibrate the hearing-loss parameters of individual models of hearing-impaired sound processing, by combining our diagnostic procedures and computational auditory models with state-of-the-art machine-learning/numerical methods (Keshishzadeh et al., 2021a,b; Buran et al., 2021). These individualised models can then be used in numerical closed-loop systems for the development of hearing-aid signal processing. Existing model-based signal-processing methods for hearing aids have so far not focussed on including synaptopathy and, owing to computational constraints, do not incorporate the biophysical realism that we achieve.
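
To sketch the calibration idea (a strongly simplified stand-in for the methods in Keshishzadeh et al., 2021a,b and Buran et al., 2021): simulate the EEG marker for a set of candidate synaptopathy profiles and select the profile whose simulated marker best matches the individual's measurement. The profile grid, marker simulation and error metric below are illustrative assumptions.

import numpy as np

# Candidate synaptopathy profiles: surviving fractions of low-, medium- and
# high-spontaneous-rate auditory-nerve fibers (illustrative grid only).
candidate_profiles = [
    {"LSR": 1.0, "MSR": 1.0, "HSR": 1.0},   # normal hearing
    {"LSR": 0.5, "MSR": 0.5, "HSR": 0.8},
    {"LSR": 0.0, "MSR": 0.0, "HSR": 0.6},   # severe synaptopathy
]

def simulate_efr_marker(profile):
    """Placeholder for a run of the auditory model that would return the
    simulated RAM-EFR magnitude for a given fiber-survival profile."""
    return 0.2 * profile["LSR"] + 0.3 * profile["MSR"] + 0.5 * profile["HSR"]

def calibrate(measured_marker):
    """Pick the candidate profile whose simulated marker is closest to the
    individually measured EFR marker (least-squares over the grid)."""
    errors = [(simulate_efr_marker(p) - measured_marker) ** 2
              for p in candidate_profiles]
    return candidate_profiles[int(np.argmin(errors))]

individual_profile = calibrate(measured_marker=0.55)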

To bridge the gap between precision diagnostics and treatment, we first overcame the computational constraints that had so far prohibited the uptake of biophysical models in closed-loop systems for hearing-aid development. We described these steps in two high-impact papers (Nature Machine Intelligence and Nature Communications Biology) and in two PCT applications whose search reports assessed the claims as novel and inventive, supporting the novelty of our methods beyond state-of-the-art auditory models and hearing-aid development systems. A key factor in our breakthrough was the combination of our expertise in computational transmission-line cochlear and neuron models with the latest convolutional-neural-network (CNN) techniques that can approximate complex systems using differentiable equations. Specifically, we built, tested, and validated a CNN-based biophysical auditory model (CoNNear), whose modules each represent different auditory structures (Baby et al., 2021; Drakopoulos et al., 2021) and can be made hearing-impaired based on audiological and evoked-potential data (Van Den Broucke et al., 2021; Keshishzadeh et al., 2021a,b). The CoNNear model operates in real time with latencies below 10 ms and offers a speed-up factor of 2900 relative to reference analytical biophysical models. Because it uniquely allows for backpropagation, we can use it in end-to-end systems for hearing-aid signal-processing development (Drakopoulos et al., 2022) and thus offer a fully individualizable, biophysically realistic end-to-end system that is novel, performs on par with state-of-the-art hearing-aid signal-processing methods (Drakopoulos et al., 2022 submitted), but that also includes the synaptopathy aspect of hearing impairment and can easily be embedded in real-time devices due to its CNN architecture.
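
A minimal sketch, assuming PyTorch, of the kind of convolutional encoder-decoder architecture such an auditory-model approximation uses; the layer counts, channel widths and kernel sizes are illustrative toy values and far smaller than the published CoNNear architecture.

import torch
import torch.nn as nn

class TinyCochlearCNN(nn.Module):
    """Toy encoder-decoder 1-D CNN that maps an audio waveform to simulated
    basilar-membrane-like outputs at several cochlear channels. Illustrative
    only; CoNNear itself is deeper and trained on transmission-line model data."""
    def __init__(self, n_channels=8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=16, stride=2, padding=7), nn.Tanh(),
            nn.Conv1d(32, 64, kernel_size=16, stride=2, padding=7), nn.Tanh(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(64, 32, kernel_size=16, stride=2, padding=7), nn.Tanh(),
            nn.ConvTranspose1d(32, n_channels, kernel_size=16, stride=2, padding=7), nn.Tanh(),
        )

    def forward(self, audio):
        # audio: (batch, 1, samples); returns (batch, n_channels, samples)
        return self.decoder(self.encoder(audio))

model = TinyCochlearCNN()
waveform = torch.randn(1, 1, 2048)    # one dummy audio frame
cochlear_outputs = model(waveform)    # differentiable, so it supports backpropagation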