
Audio-based Mobile Health Diagnostics

Periodic Reporting for period 2 - EAR (Audio-based Mobile Health Diagnostics)

Reporting period: 2021-04-01 to 2022-09-30

The project covers the general aim of understanding how sounds from our bodies can be gathered and used to improve the automated (or semi-automated) diagnosis of diseases. In particular, the project focuses on how sounds can be collected from wearable devices which people already carry, or could carry, with them every day. Advances in this research have the potential to offer affordable, prompt, and possibly accurate continuous care to patients and to fill gaps which clinical disciplines find hard to tackle.
Improving care, scaling it to populations which cannot afford the current expensive standards, and achieving early diagnosis are important to society for a variety of reasons which can be summarised as "better population health": scalable and early diagnosis are key to this.

The overall objectives of the project are and have been to:
-collect audio data for health diagnostics through mobile devices, so that models can be trained accurately.
-develop machine learning models for this type of data, also considering uncertainty estimation, which would improve interaction with clinical practice by avoiding reliance on accuracy alone.
-improve on-device machine learning and audio sensing, to make it easier to keep the data close to the individual and to help maintain the needed privacy standards.
-integrate multimodal machine learning, which in addition to audio includes other sensing modalities.
In this reporting period the project produced research on the development of analytics methods for audio-based diagnosis of respiratory and cardiac pathologies, as well as research on the use of audio sensing to monitor behaviour and activities with ear-worn and other body-worn devices.
In terms of WP1, we have continued the COVID-19 Sounds data collection and published a paper describing the dataset (at NeurIPS 2021). As part of this, we have continued to share the data with academic institutions requesting it: we have shared it more than 300 times. We have also collected data from studies involving participants wearing a digestive-sound collection belt, as well as participants wearing earables with in-ear microphones.
In terms of WP2, we have explored the analysis of the COVID-19 Sounds data further and published several works: notably, we have highlighted the realistic performance achievable on such data and have started exploring longitudinal disease-progression diagnostics as well as uncertainty estimation. We have also analysed digestive-sound data for stress detection.
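To make the uncertainty-estimation idea concrete, the sketch below shows a generic Monte Carlo dropout scheme over an audio classifier. This is a minimal illustration rather than the project's actual model: the architecture, the log-mel input shape, and the number of stochastic passes (n_samples) are assumptions made only for the example.

```python
import torch
import torch.nn as nn

# Hypothetical classifier over log-mel spectrogram inputs.
class AudioClassifier(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Dropout(p=0.3),             # kept stochastic at test time for MC dropout
            nn.Linear(16, n_classes),
        )

    def forward(self, x):
        return self.net(x)

def mc_dropout_predict(model, x, n_samples=30):
    """Repeated stochastic forward passes: return mean class probabilities
    and their standard deviation as a simple uncertainty measure."""
    model.train()  # keep dropout layers active
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    return probs.mean(dim=0), probs.std(dim=0)

# Example: a batch of 4 spectrograms (1 channel, 64 mel bins, 100 frames).
model = AudioClassifier()
mean_p, std_p = mc_dropout_predict(model, torch.randn(4, 1, 64, 100))
print(mean_p, std_p)
```

In a scheme like this, a high spread across passes can be surfaced to clinicians as "uncertain" rather than reporting a hard prediction, which is the kind of interaction with clinical practice the uncertainty-estimation objective refers to.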
In WP3, we have deepened our knowledge of the use of in-ear audio for activity recognition, gesture recognition, and user identification. We have also started working on physiological signal detection. We have worked further on on-device machine learning, especially in the context of continual learning.
In WP4, we have advanced the work on sensor fusion to augment the knowledge acquired with audio and improve on-device performance.
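As a rough illustration of this fusion direction (again a sketch under assumed dimensions and layer names, not the project's implementation), the example below combines an audio feature vector with accelerometer features through separate encoders and a shared classification head:

```python
import torch
import torch.nn as nn

class LateFusionClassifier(nn.Module):
    """Toy late-fusion model: separate encoders for audio and accelerometer
    features, concatenated before a shared classifier head."""
    def __init__(self, audio_dim=128, imu_dim=6, n_classes=4):
        super().__init__()
        self.audio_enc = nn.Sequential(nn.Linear(audio_dim, 32), nn.ReLU())
        self.imu_enc = nn.Sequential(nn.Linear(imu_dim, 16), nn.ReLU())
        self.head = nn.Linear(32 + 16, n_classes)

    def forward(self, audio_feat, imu_feat):
        fused = torch.cat([self.audio_enc(audio_feat),
                           self.imu_enc(imu_feat)], dim=-1)
        return self.head(fused)

model = LateFusionClassifier()
logits = model(torch.randn(8, 128), torch.randn(8, 6))  # batch of 8 windows
```

Keeping the modality encoders separate is one common way such designs can help on small devices, for example by running the cheaper inertial branch continuously and the audio branch only when needed.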
We have contributed beyond the state of the art in various areas.
In particular, the COVID-19 Sounds work was one of the earliest attempts at providing contactless, automatic, and effortless COVID-19 diagnostics through machine learning modelling. Our data collection is also possibly the largest crowdsourced one of its kind.
We have been one of the first groups to analyse this type of data, with many outputs on techniques for tackling this problem realistically.

The work we have done this year on disease-progression forecasting is unique of its kind: we have a unique dataset and have developed techniques which, based on a single user's data, can track how a respiratory disease is progressing, for example whether the person is deteriorating or improving.

The work on exploring sounds from ear-worn wearables has generated interest in the mobile systems community, with two very high-impact publications. We are in the process of exploring how to go beyond activity and look at physiological signals such as heart rate and heart rate variability.
We have also started working with digestive sounds, analysing how these can be related to stress.

In terms of expected results for the rest of the project:
-we are further refining the COVID-19 Sounds work to monitor disease progression. This approach is unique, as we have collected a one-of-a-kind progression dataset.
-in terms of digestive-sound devices and analysis, we plan to extract physiological signals from these sounds.
-we are focusing on ear-worn devices, both at the level of sensing health from in-ear microphones and at the level of on-device machine learning for these very small devices. This will also allow us to consider incorporating further sensor inputs into the modelling.
Image: web page of our COVID-19 Sounds data collection site