
Audio-based Mobile Health Diagnostics

Periodic Reporting for period 1 - EAR (Audio-based Mobile Health Diagnostics)

Reporting period: 2019-10-01 to 2021-03-31

The project addresses the general aim of understanding how sounds from our bodies can be gathered and used to improve the automated (or semi-automated) diagnosis of disease. In particular, the project focuses on how sounds can be collected on wearable devices that people already carry with them, or could carry with them, every day. A solution to this problem would offer affordable, prompt, and potentially accurate continuous care to patients, and could fill gaps that clinical disciplines find hard to tackle.
Improving care, scaling it to populations that cannot afford the current expensive standards, and diagnosing disease early matter to society for a variety of reasons that can be summarised as "better population health": scalable and early diagnosis are key to this.
The project so far has concentrated on various strands, in accordance with the initial work plan.

In terms of WP1, due to COVID-19 we have prioritised large-scale data collection over hospital data collection.
In particular, we have developed mobile phone software and collected (and are still collecting) a very large dataset of respiratory sounds related to COVID-19, crowdsourced from smartphones. The data has already been used in a number of works exploring how machine learning can be used to detect COVID-19 automatically, how the disease progresses, and which aspects of the disease are most meaningful in terms of audio.
The project has been very prominent in the news, with a large fraction of high-end newspapers covering it. The project was also covered by the ERC media office in various forms. The dataset to date contains more than 60,000 samples from 40,000 users.
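To make the modelling approach concrete, here is a minimal sketch (not the project's actual pipeline) of how respiratory recordings can be turned into coarse log spectral-band energy features and fed to a small classifier; it uses NumPy only, with synthetic stand-in audio in place of real recordings:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_energy_features(signal, frame_len=256, n_bands=8):
    """Frame a waveform and compute log spectral-band energies: a crude
    stand-in for the log-mel features commonly used on respiratory audio."""
    n_frames = len(signal) // frame_len
    frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)
    spectrum = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    # Pool FFT bins into a few coarse bands and take log energies.
    bands = np.array_split(spectrum, n_bands, axis=1)
    feats = np.log(np.stack([b.sum(axis=1) for b in bands], axis=1) + 1e-8)
    return feats.mean(axis=0)  # one feature vector per recording

def synth(positive):
    """Synthetic stand-in recording: 'positive' samples carry an extra tone."""
    x = rng.normal(0.0, 1.0, 4096)
    if positive:
        x += 0.8 * np.sin(2 * np.pi * 0.3 * np.arange(4096))
    return x

X = np.stack([log_energy_features(synth(i % 2 == 1)) for i in range(200)])
y = np.array([i % 2 for i in range(200)], dtype=float)
X = (X - X.mean(axis=0)) / X.std(axis=0)  # standardise features

# Tiny logistic-regression classifier trained by plain gradient descent.
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.1 * (X.T @ (p - y)) / len(y)
    b -= 0.1 * (p - y).mean()

pred = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
acc = (pred == (y == 1)).mean()
```

On real crowdsourced audio these coarse features would typically be replaced by log-mel spectrograms or learned representations, but the overall shape of the pipeline is the same.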

We are also sharing the preprocessed data with interested academic institutions (we have shared it more than 150 times), including through a data challenge organised in cooperation with the main speech and audio conference, INTERSPEECH.
We have also organised a workshop for the various groups interested in leveraging sounds for COVID-19 (URL: ), which had more than 150 participants.

In terms of WP2, we have been analysing the data from the COVID-19 sound collection and have produced some initial results, which have been published. The work has advanced in terms of modelling and training on such noisy data: we have been looking at robustness and are also considering the longitudinal nature of the data.
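One common way to improve robustness to noisy, crowdsourced recording conditions is to augment training audio with additive noise at controlled signal-to-noise ratios. A minimal sketch, assuming NumPy and a synthetic test tone (the project's actual augmentation strategy is not detailed here):

```python
import numpy as np

rng = np.random.default_rng(1)

def add_noise_at_snr(clean, snr_db):
    """Mix white Gaussian noise into a waveform at a target SNR in dB,
    a common augmentation for hardening audio models against noisy
    recording conditions."""
    noise = rng.normal(0.0, 1.0, len(clean))
    p_clean = np.mean(clean ** 2)
    p_noise = np.mean(noise ** 2)
    # Scale noise so that 10*log10(p_clean / p_scaled_noise) == snr_db.
    scale = np.sqrt(p_clean / (p_noise * 10.0 ** (snr_db / 10.0)))
    return clean + scale * noise

t = np.linspace(0.0, 1.0, 16000, endpoint=False)
clean = np.sin(2 * np.pi * 440 * t)          # 1 s, 440 Hz test tone
noisy = add_noise_at_snr(clean, snr_db=10.0)

# Empirical SNR of the injected noise component (should match the target).
snr = 10 * np.log10(np.mean(clean ** 2) / np.mean((noisy - clean) ** 2))
```

Training on mixtures at several SNR levels exposes the model to the spread of device and environment conditions seen in crowdsourced data.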

In terms of WP3, we have worked on understanding how microphones can be embedded in various devices and used to monitor health. One output was a belt worn on the chest to detect breathing and heart sounds continuously; the published work won Best Paper at a prominent workshop. Another line of work has explored how inner-ear microphones embedded in earables can be used for human activity detection; this work, while not yet published, has been accepted at ACM MobiSys 2021. Both works have started considering how sound analysis could be performed on-device as much as possible, an aspect we plan to continue investigating this year.
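As a hedged illustration of the kind of lightweight, on-device-friendly analysis such wearables enable, the sketch below estimates a breathing rate from the short-time RMS envelope of a signal by counting upward threshold crossings; it uses NumPy and synthetic amplitude-modulated noise, and is not the published belt's algorithm:

```python
import numpy as np

def breathing_rate(signal, fs):
    """Estimate breaths per minute by tracking the short-time RMS envelope
    of a signal and counting upward crossings of its mean: deliberately
    lightweight, the kind of method that suits on-device processing."""
    frame_len = fs // 10                          # 100 ms frames
    n = len(signal) // frame_len
    frames = signal[: n * frame_len].reshape(n, frame_len)
    env = np.sqrt(np.mean(frames ** 2, axis=1))   # short-time RMS envelope
    env = np.convolve(env, np.ones(5) / 5, mode="same")  # light smoothing
    thr = env.mean()
    crossings = np.sum((env[1:] >= thr) & (env[:-1] < thr))
    duration_min = (n * frame_len) / fs / 60.0
    return crossings / duration_min

# Synthetic 60 s recording: noise amplitude-modulated at 0.25 Hz,
# i.e. 15 breaths per minute.
fs = 1000
rng = np.random.default_rng(2)
t = np.arange(60 * fs) / fs
signal = (1.0 + 0.9 * np.sin(2 * np.pi * 0.25 * t)) * rng.normal(0.0, 0.1, len(t))
rate = breathing_rate(signal, fs)
```

Because the method needs only framing, an FFT-free envelope, and a threshold, it runs comfortably within the compute and memory budgets of small wearable microcontrollers.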
We have contributed beyond the state of the art in various areas.
In particular, the COVID-19 Sounds work was one of the earliest attempts at providing contactless, automatic, and effortless COVID-19 diagnostics through machine learning modelling. Our data collection is possibly the largest crowdsourced one, too.

The work on body health and activity, while not final, has already yielded publications owing to its novelty with respect to the state of the art.

In terms of expected results for the rest of the project:
- We are in the process of finalising further work on heart sound detection and modelling using publicly available data. We hope to still be able to collect our own dataset once COVID-19 subsides.
- We are further refining the COVID-19 Sounds modelling, improving model robustness and tackling aspects of model uncertainty that are so important for clinical consideration (WP4).
- We have initiated collaborations to look into devices and analysis for digestive-system sounds.
- We are focusing on ear-worn wearable devices, both at the level of sensing health from inner-ear microphones and at the level of on-device machine learning for these very small devices. This will also allow us to consider incorporating further sensor inputs into the modelling.
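On the model-uncertainty point, one simple and widely used recipe is to train a small ensemble on bootstrap resamples and read uncertainty off the predictive entropy of the averaged probabilities. The sketch below is purely illustrative (NumPy only, toy 1-D data; it is not the project's chosen uncertainty method):

```python
import numpy as np

rng = np.random.default_rng(3)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_probe(X, y, seed, steps=300, lr=0.5):
    """Fit a bias-free logistic probe on a bootstrap resample of the data."""
    r = np.random.default_rng(seed)
    idx = r.integers(0, len(y), len(y))
    Xb, yb = X[idx], y[idx]
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        w -= lr * ((sigmoid(Xb @ w) - yb) @ Xb) / len(yb)
    return w

# Toy 1-D data: the two classes meet at x = 0.
X = rng.normal(0.0, 1.0, (400, 1))
y = (X[:, 0] > 0).astype(float)
ensemble = [train_probe(X, y, seed) for seed in range(10)]

def predict_with_uncertainty(x):
    """Average the ensemble's probabilities and report the predictive
    entropy of the mean as an uncertainty signal."""
    probs = np.array([sigmoid(x * w[0]) for w in ensemble])
    mean = probs.mean()
    entropy = -(mean * np.log(mean + 1e-12)
                + (1 - mean) * np.log(1 - mean + 1e-12))
    return mean, entropy

_, h_far = predict_with_uncertainty(3.0)   # deep inside one class
_, h_near = predict_with_uncertainty(0.0)  # on the decision boundary
```

Inputs near the decision boundary yield high entropy while confidently classified inputs yield low entropy, which is exactly the kind of signal clinical triage needs in order to defer uncertain cases to a human.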
[Image: web page of our COVID-19 Sounds data collection]