Community Research and Development Information Service - CORDIS

Final Report Summary - HEARING MINDS (Hearing Minds: optimizing hearing performance in deaf cochlear implanted individuals)

Speech perception with a Cochlear Implant: the challenge
Cochlear implants (CIs) come with an internal part surgically placed in the mastoid portion of the temporal bone and an external part, the speech processor, which is typically fine-tuned (‘fitted’) by an audiologist to the individual needs of its user. This fitting process involves creating a set of instructions that defines the specific stimulation of the electrodes of the implanted array.
For the past few years, experts in the field have expressed the need for a new fitting process to optimize the patient’s hearing. Ideally, such a new fitting method would use psycho-acoustic feedback and other relevant input data from the implant user to fine-tune the speech processor settings and address many more parameters apart from the minimal and maximal current levels per electrode. Also, such a fitting method needs to allow multiple testing and fine-tuning depending on the changing needs of its user. Unfortunately, today such multiple testing and fine-tuning of CI speech processors is hardly ever done, due to a huge gap between the need and the availability of clinical audiological services.

Computer-led fine-tuning of the CI speech processor
To overcome this problem, the Hearing Minds research consortium has optimized a (semi-)automated fitting procedure previously developed by its industrial partner (FOX®, Fitting to Outcome eXpert, proprietary software by Otoconsult NV). This assisted fitting process has been shown to drastically reduce the number of man-hours spent on fitting over the lifetime of the device, while yielding qualitatively better outcomes.
The crucial element in this automated fitting procedure is to adjust the CI speech processor in such a way that the electrical activation of the auditory nerve through the electrodes is similar to what happens in a well-functioning cochlea. To do this, the Hearing Minds researchers drew on the principles of artificial intelligence. The artificial intelligence (AI) component operates much like a navigation system in a car. Thanks to the outcomes of a battery of hearing tests, it "knows" the patient's hearing performance at all times. Moreover, it works towards pre-set goals that need to be achieved for each CI user. The AI fitting system is able to simulate millions of different parameter settings. For each new setting, it uses its built-in knowledge to predict how close that setting would bring the patient to a particular goal. The intelligent system can thus survey all possibilities and choose the optimal one, tailored to the individual needs of the patient.
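The search loop described above can be sketched in a few lines. This is a deliberately simplified illustration, not the actual FOX algorithm: the two parameters, the candidate count, and the toy outcome predictor are all invented for the example, and a real fitting system would predict outcomes from a learned patient model rather than a fixed formula.

```python
import random

def predicted_outcome(setting):
    """Toy stand-in for the system's built-in outcome predictor: maps a
    candidate (gain, threshold) setting to a predicted hearing score (0-100).
    For illustration, the optimum is assumed to lie at gain=0.6, threshold=0.3."""
    gain, threshold = setting
    return 100 - 80 * abs(gain - 0.6) - 60 * abs(threshold - 0.3)

def best_setting(goal, n_candidates=100_000, seed=42):
    """Sample many candidate settings and keep the one whose predicted
    outcome comes closest to the pre-set goal."""
    rng = random.Random(seed)
    best, best_gap = None, float("inf")
    for _ in range(n_candidates):
        setting = (rng.random(), rng.random())  # (gain, threshold) in [0, 1]
        gap = abs(goal - predicted_outcome(setting))
        if gap < best_gap:
            best, best_gap = setting, gap
    return best

# Pick the setting predicted to bring the patient closest to a 95-point goal.
setting = best_setting(goal=95.0)
```

With a fixed random seed the search is reproducible; in practice the system would evaluate candidates against its learned model of the individual patient rather than sampling blindly.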

A probabilistic approach
The AI system that was further optimized within the Hearing Minds project is based on the introduction of probabilistic graphical models (PGMs) into the fitting decision process. More precisely, an object-oriented Bayesian network model is used to represent the decision process in terms of probabilistic parameters that are learnt from a database of input-output relations. The results of the project's evaluation sessions show that the new ‘intelligent’ model is highly successful: the probabilistic fitting algorithms not only outperform the previously used deterministic alternatives, they also optimize the patient's hearing more efficiently and accurately than manual fitting by an expert audiologist.
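The core probabilistic idea, learning conditional probabilities from a database of input-output records and using them to choose the most promising adjustment, can be illustrated with a minimal sketch. The action names, records, and frequency-counting estimator here are invented for the example; the project's actual model is a far richer object-oriented Bayesian network.

```python
from collections import Counter

# Hypothetical database of past fitting records: (action taken, did the
# patient's hearing improve?). Real records would be far more detailed.
records = [
    ("raise_levels", True), ("raise_levels", True), ("raise_levels", False),
    ("lower_levels", False), ("lower_levels", True),
    ("reshape_map", True), ("reshape_map", True), ("reshape_map", True),
]

def learn_improvement_probs(records):
    """Estimate P(improvement | action) by simple relative frequency,
    the most basic way to learn probabilistic parameters from data."""
    tried, improved = Counter(), Counter()
    for action, outcome in records:
        tried[action] += 1
        if outcome:
            improved[action] += 1
    return {action: improved[action] / tried[action] for action in tried}

probs = learn_improvement_probs(records)
# Choose the adjustment most likely to improve hearing given past evidence.
best_action = max(probs, key=probs.get)
```

In the toy data above, "reshape_map" improved hearing in 3 of 3 cases, so it would be chosen; a full Bayesian network additionally conditions these probabilities on the patient's test results and current settings.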
This new approach is innovative both in its scientific content and in its potential clinical application. With respect to the former, the Hearing Minds researchers used a type of probabilistic graphical model (the partially observable Markov decision process, POMDP) that – to the best of their knowledge – had not previously been used in medical decision making. Building this model was straightforward, but evaluating it was relatively hard due to its size: the model involves many agents, each of which may be in charge of tuning a single parameter of the hearing device.
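The defining POMDP ingredient is that the patient's true hearing state is hidden: the system only sees noisy test results and maintains a belief (a probability distribution) over the hidden state, updated by Bayes' rule after each observation. The sketch below shows that belief update step under assumed, illustrative states and observation probabilities; it is not the project's model.

```python
# Hypothetical hidden states of the fitting and an assumed observation
# model P(observation = "poor_score" | state). All numbers are illustrative.
states = ["under_stimulated", "well_fitted", "over_stimulated"]
p_poor_given_state = {
    "under_stimulated": 0.8,
    "well_fitted": 0.1,
    "over_stimulated": 0.7,
}

def update_belief(belief, observation):
    """Bayes' rule: posterior is proportional to likelihood times prior,
    renormalized so the belief sums to one."""
    likelihood = {
        s: p_poor_given_state[s] if observation == "poor_score"
           else 1 - p_poor_given_state[s]
        for s in states
    }
    unnorm = {s: likelihood[s] * belief[s] for s in states}
    total = sum(unnorm.values())
    return {s: v / total for s, v in unnorm.items()}

belief = {s: 1 / 3 for s in states}           # uniform prior over states
belief = update_belief(belief, "good_score")  # observe a good test result
```

A good test score shifts belief towards "well_fitted"; a full POMDP agent would then pick the tuning action that maximizes expected long-run outcome under this belief.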

To measure is to know
To find out whether the automatic adjustment of the CI also leads to improved hearing performance, it is essential to assess the quality of the patient's hearing in great detail and with great precision. For adults it is possible to assess whether they are able to understand words or short utterances in their native language. For very young children, however, similar speech test materials are virtually non-existent. The Hearing Minds researchers were thus confronted with the following problem: "How can speech-language pathologists or audiologists reliably determine whether age-appropriate speech understanding has improved in children younger than six?" Speech repetition tasks that are suitable for adults are often too complex for small children. After all, many components of speech and language grammar are still under development until the age of six. This is especially true for children with a hearing impairment, who often show a delay in spoken language development.
Against this background the Hearing Minds researchers have developed new speech test materials based on words and short utterances that take into account the cognitive and linguistic abilities of preschoolers. The sounds, words and sentence structures that are part of these test materials have been selected in such a way that they match the language age of the target group. Unlike with adults, the test procedure does not require the child to verbally repeat the stimulus presented to them. Instead, children first listen to a short utterance and then click on the image that best fits the auditory stimulus. The utterances contain target words that form phonemic minimal pairs with each other (e.g. boat - goat). This way, it is possible to assess speech understanding in realistic linguistic contexts down to the level of individual sounds, even if the child is not able to repeat the speech stimulus itself. Similarly, for elderly listeners it is not straightforward to assess whether reduced speech understanding is unequivocally due to reduced hearing or rather to non-auditory factors. Research within the Hearing Minds consortium has confirmed that lexical and syntactic features of the target language system may increase the cognitive demands of processing sentences in noise (Coene et al 2016). In combination with hearing loss, this may lead to suboptimal functional hearing in day-to-day listening situations even for patients with good speech discrimination outcomes.
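The scoring logic behind the picture-pointing task for preschoolers can be sketched as follows. The trial data, word pairs, and function names are invented examples; the point is only to show how per-contrast accuracy pinpoints which sound distinctions a child perceives without any verbal repetition.

```python
# Hypothetical trials: (word played, image the child clicked, and the
# phoneme contrast that the minimal pair tests, e.g. "boat" vs "goat").
trials = [
    ("boat", "boat", ("b", "g")),
    ("goat", "boat", ("g", "b")),
    ("pear", "bear", ("p", "b")),
    ("bear", "bear", ("b", "p")),
]

def score_by_contrast(trials):
    """Return accuracy per phoneme contrast: clicking the image that
    matches the spoken word counts as correct for that contrast."""
    results = {}
    for target, clicked, contrast in trials:
        key = tuple(sorted(contrast))          # ("g","b") and ("b","g") are one contrast
        correct, total = results.get(key, (0, 0))
        results[key] = (correct + (clicked == target), total + 1)
    return {key: correct / total for key, (correct, total) in results.items()}

accuracy = score_by_contrast(trials)
```

In these toy trials the child identifies each contrast correctly half the time, which would flag both the b/g and b/p distinctions for closer audiological attention.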
To provide a more realistic estimate of hearing performance in listening environments that more closely resemble day-to-day communication, new speech test materials were therefore developed for adults as well. These take into account the linguistic complexity of the test sentences, as it was expected that materials with varying degrees of syntactic complexity would provide useful information on the subjective benefits of particular hearing devices for the patient (Coene et al 2017, Krijger et al 2017).

The final result: optimized hearing
Most often, CI users obtain good speech understanding results when their CI speech processor has been manually adjusted to their individual needs. This means that they can understand about 60% to 70% of the words spoken to them in a normal conversational setting. These figures are quite good, but still leave room for improvement. Thanks to the use of artificial intelligence in computer-led automatic CI fitting, today a deaf-born child is able to reach an average speech understanding accuracy of 90%; in deaf-born adults the average now reaches 82%. This is about 20% better than 8 years ago. In addition, automated CI fitting is also much faster than the traditional manual method, and paves the way to completely remote CI services in the future.
As such, the results of the Hearing Minds project are expected to help increase the quality of hearing in individuals who are deaf or hard-of-hearing, including two vulnerable groups of small children and elderly adults. Optimized hearing will enable infants and toddlers to enhance speech and oral language development while elderly adults will be able to remain independent and to participate in day-to-day communication in a hearing society.
