Periodic Reporting for period 4 - RobSpear (Robust Speech Encoding in Impaired Hearing)
Reporting period: 2021-04-01 to 2022-03-31
RobSpear aimed to (i) develop non-invasive methods that quantify synaptopathy in humans and that can, in the future, be adopted in clinical hearing diagnostics. The hearing profile we developed quantifies both the outer-hair-cell and synaptopathy aspects of sensorineural hearing loss using auditory-EEG-based methods to yield an individualized hearing-loss profile. This profile is far more specific than present clinical practice and can be used to select the best-matching hearing-loss treatment. By means of a combined computational-modelling, EEG and sound-perception approach (ii), we furthermore studied how synaptopathy affects the robust coding of sound and speech in noisy listening scenarios. We conclude that synaptopathy must be considered in future treatments because its functional consequences for processing fast temporal fluctuations in sound (as found in speech) are substantial in older listeners, even when their audiograms are normal. This implies that a much larger group of older listeners with self-reported hearing difficulties but clinically normal audiograms might become candidates for hearing-loss treatment, which works towards the WHO goals of early diagnosis and treatment as a cost-effective measure to reduce the societal burden of hearing impairment. In a last step (iii), the RobSpear project adopted the hearing-loss profile from (i) to yield individualized computational models of auditory processing that can serve as front-ends for individualized hearing-loss algorithms compensating for both the synaptopathy and outer-hair-cell-loss aspects of sensorineural hearing loss. We achieved a breakthrough in closed-loop hearing-aid algorithm design through the unique combination of fully differentiable neural-network implementations of individualized auditory models with our novel, auditory-physiology-based methods for sensorineural hearing-loss quantification. For the first time, we developed and tested synaptopathy-specific sound-processing algorithms that extend the current range of hearing-aid algorithms and can be incorporated within future hearables to help improve robust speech encoding in impaired hearing.
(i) We developed auditory stimuli that quantify synaptopathy non-invasively by conducting numerical simulations with a signal-processing model of human auditory processing that simulates brainstem EEG responses and their changes due to either outer-hair-cell loss or synaptopathy. We identified two stimulation paradigms, one based on the derived-band envelope-following response (Keshishzadeh et al., 2019) and another based on square-wave (rectangularly amplitude-modulated, RAM) envelope-following responses (Vasilkov et al., 2021). We validated the quality of our stimuli and EEG markers of synaptopathy experimentally in listeners with normal audiograms, impaired audiograms, tinnitus, or self-reported hearing difficulties (Keshishzadeh et al., 2019; Vasilkov et al., 2021; Verhulst et al., 2022). We filed a PCT application that describes the optimal stimulation and EEG analysis method (WO2021156465A1; claims deemed novel and inventive) and were granted an ERC Proof-of-Concept project in which we further optimized the testing procedure for future clinical use: we optimized the stimulation paradigms and test duration to yield a fast and robust individual quantification of synaptopathy, and tested the method ("the CochSyn test") on a large cohort of patients suspected to have age-related or ototoxicity-induced synaptopathy to build a clinical reference dataset. This dataset shows that significant age-induced synaptopathy can occur from the age of 45-50, even though the standard pure-tone audiogram remains within the clinically normal-hearing range, demonstrating the need to diagnose and treat synaptopathy as an early indicator of sensorineural hearing damage. At the same time, we validated that our method is sensitive to kainic-acid-induced synaptopathy in an animal model in a collaboration with the University of Rochester (Garrett et al., 2019, preprint) and set up a collaboration with Harvard Medical School, who included our marker in their study of hearing disorders in clinically normal-hearing study participants (Mepani et al., 2021). The PCT application and the results of the ERC PoC project were used to apply for a 2021 EIC Transition grant, "EarDiTech". The diagnostic project aims were concluded with three methods that allow for calibrating the sensorineural-hearing-loss parameters in individual signal-processing models of auditory processing (Keshishzadeh et al., 2021a,b; Buran et al., 2021) that are used in (ii) and (iii).
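As an illustration of the square-wave (RAM) stimulation paradigm, the short Python sketch below generates a rectangularly amplitude-modulated pure tone of the kind used to evoke envelope-following responses. The parameter values chosen here (4-kHz carrier, 120-Hz modulation rate, 25% duty cycle, 48-kHz sampling rate) are illustrative assumptions rather than the exact CochSyn settings, and level calibration is omitted.

import numpy as np
from scipy.signal import square

fs = 48000           # sampling rate in Hz (assumed)
f_carrier = 4000.0   # pure-tone carrier frequency in Hz (illustrative)
f_mod = 120.0        # envelope modulation rate in Hz (illustrative)
duty = 0.25          # duty cycle of the rectangular envelope (illustrative)
dur = 0.4            # stimulus duration in seconds

t = np.arange(int(fs * dur)) / fs
# Rectangular (square-wave) envelope: 1 during the "on" fraction of each
# modulation cycle, 0 otherwise.
envelope = 0.5 * (square(2 * np.pi * f_mod * t, duty=duty) + 1.0)
stimulus = envelope * np.sin(2 * np.pi * f_carrier * t)
stimulus /= np.max(np.abs(stimulus))   # normalize; presentation-level calibration omitted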
(ii) Because we (uniquely) include listeners with and without outer-hair-cell loss in our studies of the role of cochlear synaptopathy in degraded sound perception, we were able to show that synaptopathy is much more detrimental than outer-hair-cell loss in degrading the perceptual cues necessary to perform two basic auditory perception tasks, namely amplitude-modulation detection and tone-in-noise detection (Verhulst et al., Acta Acustica, 2018; Osses et al., ICA, 2019). We used a model-based approach that includes both synaptopathy and OHC-loss simulations to reach this conclusion, and our results imply that it is not outer-hair-cell loss, but rather synaptopathy (which co-exists with outer-hair-cell deficits), that has a detrimental effect on the temporal precision with which audible sound is processed. Furthermore, three additional studies extend this finding to the high-frequency portions of speech-in-noise encoding (Garrett et al., in prep.; Mepani et al., 2021; Verhulst et al., 2022), providing the first evidence that synaptopathy matters for speech encoding and is reflected in both brainstem EEG metrics and sound perception. In tinnitus patients we furthermore found that their high-frequency hearing deficits can be treated similarly to those of non-tinnitus patients, but that they have better low-frequency hearing-in-noise capabilities than non-tinnitus patients of the same age (Verhulst et al., 2022). To better understand whether the loss of high-frequency outer hair cells (OHCs) or synaptopathy is more detrimental to sound encoding, we studied the relationship between speech intelligibility and speech-evoked FFRs (Wartenberg et al., 2022, preprint) as well as our synaptopathy-sensitive RAM-evoked EFRs (Verhulst et al., 2022; Drakopoulos et al., 2022, preprint and in review). Because synaptopathy precedes OHC damage in the progression of age-related sensorineural hearing loss, and both pathologies tend to be present in our older study participants, we need to examine the success of treating one or the other pathology to determine which aspect is more functionally relevant for speech coding. In Drakopoulos et al. (2022, preprint and in review), we designed audio-signal-processing methods that compensate for the functional loss created by synaptopathy and show that this type of processing improves perceptual amplitude-modulation detection and temporal envelope processing as assessed using the EFR. Though promising, the peripheral auditory coding improvements following our treatment did not always yield improved speech intelligibility in the tested cohort, and this aspect requires further investigation before a suitable and effective individualized treatment can be offered to all patients. We are furthering this project in a follow-up ERA-NET Neuron consortium that the PI of this project initiated and now coordinates.
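For clarity, the sketch below shows one simplified way to quantify EFR strength from epoched EEG recordings: average the epochs, take the magnitude spectrum, and sum the noise-floor-corrected peaks at the modulation frequency and its harmonics. The number of harmonics and the local noise-floor estimate are simplifying assumptions for illustration, not the exact analysis of the cited studies.

import numpy as np

def efr_magnitude(epochs, fs, f_mod=120.0, n_harmonics=4):
    """Estimate EFR strength from epoched EEG (n_epochs x n_samples).

    Averages epochs in the time domain, computes the magnitude spectrum, and
    sums the noise-floor-corrected peaks at the modulation frequency and its
    harmonics. Harmonic count and noise-floor estimate are illustrative.
    """
    avg = epochs.mean(axis=0)                     # time-domain average across epochs
    n = avg.size
    spectrum = 2.0 * np.abs(np.fft.rfft(avg)) / n # single-sided magnitude spectrum
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    total = 0.0
    for k in range(1, n_harmonics + 1):
        idx = np.argmin(np.abs(freqs - k * f_mod))        # bin of the k-th harmonic
        noise = np.mean(np.r_[spectrum[idx - 6:idx - 1],  # neighbouring bins as a
                              spectrum[idx + 2:idx + 7]]) # local noise-floor estimate
        total += max(spectrum[idx] - noise, 0.0)
    return total  # in the same (arbitrary) units as the input EEG

In practice, artifact rejection and bootstrapping across epochs would typically precede such a peak-based metric.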
(iii) We first set up a closed-loop framework to enable the development of model- and machine-learning-based individualized hearing algorithms. Using the hearing-loss profile from (i), we developed three methods that extract individualized model parameters of synaptopathy and outer-hair-cell loss from objective auditory measures (Keshishzadeh et al., 2021a,b; Buran et al., 2021). These individualized models are then placed in an optimization loop with which we can optimize the signal processing applied to the audio, such that the output of the hearing-impaired model in response to the transformed audio matches that of a reference normal-hearing model. By presenting this transformed audio to a hearing-impaired patient, we aim to restore their peripheral auditory processing and speech intelligibility to those of normal-hearing listeners. On the one hand, we used simulated brainstem responses to speech from the computational model to "manually" devise appropriate signal-processing strategies (Drakopoulos et al., 2022, preprint and in review); on the other, we developed a machine-learning-based method that allowed us to backpropagate through the system while minimizing a loss term at the level of the cochlea and the brainstem (Drakopoulos et al., 2022, ICASSP). To enable backpropagation, we first developed a neural-network approximation of our normal-hearing computational model (CoNNear; Baby et al., 2021; Drakopoulos et al., 2021) that captures all relevant aspects of cochlear nonlinearities, coupling and signal phase, and that can be computed in real time. We showed that our method offers a 2900-fold speed-up for cochlear processing and can successfully be used for backpropagation while offering the same functional capabilities as analytical transmission-line and Hodgkin-Huxley-type neuronal models. We filed two PCT applications related to CoNNear and the closed-loop systems (WO2020249532 and WO2021198438A1) and received positive search reports. We performed an objective evaluation of our novel type of machine-learning-based audio-signal-processing methods (Drakopoulos et al., in prep.) and piloted a final experiment in which we test the entire framework on individuals from whom we first extract a hearing-loss profile, then apply the framework, and evaluate whether the developed algorithms yield a perceptual improvement in speech intelligibility in noisy listening scenarios. Our findings formed the basis of a follow-up grant application, "Machine Hearing 2.0", funded by the Flemish Research Council to further explore how bio-inspired audio-signal processing can yield more noise-robust automatic hearing systems (e.g. automatic speech recognition, noise reduction or sound-source localisation for robotics applications).
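A conceptual Python sketch of this closed-loop optimization is given below, assuming that differentiable normal-hearing and individualized hearing-impaired auditory models are available as PyTorch modules; simple stand-ins are used here in place of the actual CoNNear modules. The trainable front-end, the toy "impairment", the loss and the optimizer settings are illustrative assumptions, not the published configuration.

import torch
import torch.nn as nn

# Stand-ins for differentiable (CoNNear-type) auditory models; in practice these
# would be pretrained normal-hearing and individualized hearing-impaired models.
class AuditoryModel(nn.Module):
    def __init__(self, impaired=False):
        super().__init__()
        self.net = nn.Sequential(nn.Conv1d(1, 16, 65, padding=32), nn.Tanh(),
                                 nn.Conv1d(16, 8, 65, padding=32))
        self.impaired = impaired

    def forward(self, audio):                       # audio: (batch, 1, samples)
        out = self.net(audio)                       # toy multi-channel "neural response"
        return 0.5 * out if self.impaired else out  # toy attenuation as "hearing loss"

# Trainable audio-processing front-end (the candidate hearing-aid algorithm).
processor = nn.Sequential(nn.Conv1d(1, 32, 33, padding=16), nn.Tanh(),
                          nn.Conv1d(32, 1, 33, padding=16))

nh_model = AuditoryModel(impaired=False).eval()     # frozen normal-hearing reference
hi_model = AuditoryModel(impaired=True).eval()      # frozen individualized model
for p in list(nh_model.parameters()) + list(hi_model.parameters()):
    p.requires_grad_(False)

opt = torch.optim.Adam(processor.parameters(), lr=1e-4)
audio = torch.randn(8, 1, 4096)                     # placeholder batch of speech frames

for step in range(100):
    target = nh_model(audio)                        # normal-hearing response to unprocessed audio
    restored = hi_model(processor(audio))           # impaired response to processed audio
    loss = nn.functional.mse_loss(restored, target)
    opt.zero_grad()
    loss.backward()                                 # gradients flow through the auditory model
    opt.step()

In the actual framework, the loss is evaluated on simulated cochlear and brainstem responses of the individualized CoNNear models rather than on these toy stand-ins.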
To bridge the gap between precision diagnostics and treatment, we first overcame the computational constraints that had so far prohibited the uptake of biophysical models in closed-loop systems for hearing-aid development. We described these steps in two high-impact papers (Nature Machine Intelligence and Nature Communications Biology) and in two PCT applications whose search reports deemed the claims novel and inventive, supporting the novelty of our methods beyond state-of-the-art auditory models and hearing-aid development systems. A key factor in our breakthrough was the combination of our expertise in computational transmission-line cochlear and neuron models with the latest convolutional-neural-network (CNN) techniques, which can approximate complex systems with differentiable operations. Specifically, we built, tested and validated a CNN-based biophysical auditory model (CoNNear) whose modules each represent different auditory structures (Baby et al., 2021; Drakopoulos et al., 2021) and can be made hearing-impaired based on audiological and evoked-potential data (Van Den Broucke et al., 2021; Keshishzadeh et al., 2021a,b). The CoNNear model operates in real time with latencies below 10 ms and offers a speed-up factor of 2900 relative to the reference analytical biophysical models. Because it uniquely allows for backpropagation, we can use it in end-to-end systems for hearing-aid signal-processing development (Drakopoulos et al., 2022) and thus offer a fully individualizable, biophysically realistic end-to-end system that is novel, performs on par with state-of-the-art hearing-aid signal-processing methods (Drakopoulos et al., 2022, submitted), includes the synaptopathy aspect of hearing impairment, and can easily be embedded in real-time devices thanks to its CNN architecture.
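As a rough illustration of how such a CNN surrogate can be obtained, the sketch below trains a small 1-D convolutional encoder-decoder to reproduce precomputed outputs of a slow analytical cochlear model on audio snippets; once trained, the surrogate is fast and differentiable. The architecture, channel counts and loss used here are illustrative assumptions and are much smaller than the published CoNNear configuration.

import torch
import torch.nn as nn

N_CF = 201  # number of simulated cochlear channels (illustrative choice)

# Small 1-D CNN encoder-decoder mapping an audio waveform to basilar-membrane
# responses at N_CF cochlear locations (far smaller than the published model).
class CochlearSurrogate(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 64, 16, stride=2, padding=7), nn.Tanh(),
            nn.Conv1d(64, 128, 16, stride=2, padding=7), nn.Tanh())
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(128, 64, 16, stride=2, padding=7), nn.Tanh(),
            nn.ConvTranspose1d(64, N_CF, 16, stride=2, padding=7), nn.Tanh())

    def forward(self, audio):                      # audio: (batch, 1, samples)
        return self.decoder(self.encoder(audio))

surrogate = CochlearSurrogate()
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-4)

# Training pairs: audio snippets and the matching outputs of the analytical
# transmission-line model, precomputed offline; random tensors stand in here.
audio = torch.randn(4, 1, 2048)
tl_output = torch.randn(4, N_CF, 2048)

for step in range(10):
    pred = surrogate(audio)
    loss = nn.functional.l1_loss(pred, tl_output)  # waveform-matching training loss
    opt.zero_grad()
    loss.backward()
    opt.step()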