How ‘hearables’ track health by analysing bodily sounds
Everyday wearable devices are proving a game changer for healthcare, with advanced sensors continuously monitoring aspects of wearers’ health. And as sensors drop in price, they will increasingly offer the kind of data-gathering previously only afforded by expensive medical equipment. “This kind of longitudinal and fine-grained data is unprecedented in health tracking,” notes Cecilia Mascolo from the University of Cambridge, coordinator of the EAR project, which was funded by the European Research Council. EAR’s researchers have developed a range of artificial intelligence (AI)-enabled, audio-based mobile health trackers, including earbuds that collect physiological information and a mobile phone app that monitors respiratory health. Compared with its use for video and image analysis, AI for audio is underdeveloped, so the team had to devise novel workarounds. “We are at the forefront of using AI to interpret audio data for health monitoring, using existing datasets and transfer learning to pretrain our models or existing pretrained models fine-tuned for our purposes using self-supervised learning,” explains Mascolo.
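To illustrate the pretraining-plus-fine-tuning approach Mascolo describes, the sketch below shows how a generically pretrained backbone can be adapted to health-related audio. It is not the project's actual pipeline: the choice of torchvision's ResNet-18 as backbone, the 16 kHz sample rate, the mel-spectrogram settings and the two hypothetical class labels are all assumptions made purely for illustration.

```python
# Minimal transfer-learning sketch (not the EAR project's code):
# fine-tune a generically pretrained backbone on audio spectrograms.
import torch
import torch.nn as nn
import torchaudio
import torchvision

# Turn a mono waveform (16 kHz assumed) into a log-mel spectrogram "image".
mel = torchaudio.transforms.MelSpectrogram(sample_rate=16_000, n_mels=64)
to_db = torchaudio.transforms.AmplitudeToDB()

def waveform_to_input(waveform: torch.Tensor) -> torch.Tensor:
    spec = to_db(mel(waveform))   # shape: (1, n_mels, time)
    return spec.repeat(3, 1, 1)   # replicate to 3 channels for the ImageNet backbone

# Pretrained backbone; freeze it and train only a new classification head.
model = torchvision.models.resnet18(weights="IMAGENET1K_V1")
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)  # e.g. "typical" vs "atypical" sounds (hypothetical labels)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def training_step(waveforms: torch.Tensor, labels: torch.Tensor) -> float:
    """waveforms: (batch, 1, samples); labels: (batch,)."""
    inputs = torch.stack([waveform_to_input(w) for w in waveforms])
    optimizer.zero_grad()
    loss = criterion(model(inputs), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Freezing the backbone and training only the head is one common way to cope with the small labelled audio datasets the article alludes to; self-supervised pretraining on unlabelled audio, as the team mentions, is a further step beyond this sketch.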
Tracking general physiological health
Testing the hypothesis that the ear canal could prove an ideal location for a wearable health tracker, EAR developed an algorithm to repurpose earbud microphones to detect bodily sounds associated with health. To test how well the earbuds could monitor users’ gait and other physiological signals, such as breathing and heart rate, volunteers participated in a variety of activities, including running and using rowing machines. Drawing on other, larger datasets, the team used machine learning techniques to train an AI algorithm to identify and interpret the relevant health indicators in the audio. “We have demonstrated that ‘hearables’ are good at collecting health-related audio which has indeed proven valuable for reflecting physiological health, thanks to our novel AI,” adds Mascolo.
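For a sense of the kind of signal processing this involves, the sketch below estimates a periodic rate, such as heart rate, from in-ear audio using a band-pass filter and autocorrelation of the amplitude envelope. This is a generic textbook approach, not the project's published algorithm; the frequency band, envelope rate and plausible-rate limits are assumptions chosen for illustration.

```python
# Illustrative sketch only: estimate a periodic rate (e.g. heart rate in
# beats per minute) from in-ear audio via band-pass filtering and
# autocorrelation. Not the EAR project's published algorithm.
import numpy as np
from scipy.signal import butter, filtfilt

def estimate_rate_bpm(audio: np.ndarray, fs: int,
                      band_hz: tuple = (20.0, 200.0),
                      rate_range_bpm: tuple = (40, 180)) -> float:
    """Estimate the dominant repetition rate of low-frequency body sounds."""
    # 1. Isolate the band where heart sounds carry most energy
    #    (roughly 20-200 Hz here; the exact band is an assumption).
    b, a = butter(4, band_hz, btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, audio)

    # 2. Take the amplitude envelope and downsample it (~200 Hz) to keep
    #    the autocorrelation cheap.
    envelope = np.abs(filtered)
    hop = max(1, fs // 200)
    envelope = envelope[::hop]
    env_fs = fs / hop
    envelope = envelope - envelope.mean()

    # 3. Autocorrelate and pick the strongest peak within the
    #    physiologically plausible range of lags.
    ac = np.correlate(envelope, envelope, mode="full")[len(envelope) - 1:]
    min_lag = int(env_fs * 60 / rate_range_bpm[1])
    max_lag = int(env_fs * 60 / rate_range_bpm[0])
    best_lag = min_lag + int(np.argmax(ac[min_lag:max_lag]))
    return 60.0 * env_fs / best_lag
```

In practice a few seconds of reasonably clean in-ear audio would be needed for the autocorrelation peak to be meaningful; the AI models the article describes go well beyond this kind of hand-crafted estimator.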
Tracking respiratory health
To help during the COVID-19 pandemic, the team set out to see whether a phone app could detect the presence of the coronavirus using audio samples collected by the phone’s microphone. The team crowdsourced one of the largest multimodal audio datasets of oral sounds (breathing, coughing and speaking), combined with information about COVID-19 test status, symptoms and wider medical history. This dataset now informs machine learning models designed to forecast whether an infected person is likely to get better or worse, based on audio samples submitted regularly through the app. Using the COVID-19 data alongside other public health data, the team has also built the first audio-based respiratory model, OPERA, enabling researchers to perform tasks for which they otherwise had only limited data, such as assessing chronic obstructive pulmonary disease. “Our COVID-19 data collection is unprecedented and remains in demand, with huge potential for this technology to track respiratory infections and chronic diseases over time,” says Mascolo.
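A minimal sketch of what such audio-based respiratory screening can look like is shown below: each breathing or cough clip is summarised with generic MFCC features and fed to a simple classifier that predicts a binary outcome. The feature choice, the logistic-regression classifier, and the file names and labels are all hypothetical; the team's actual models, and the OPERA model in particular, are far more sophisticated.

```python
# Minimal sketch of audio-based respiratory screening: generic MFCC
# features plus a simple classifier. Labels, file names and the modelling
# choices here are hypothetical, not the EAR team's or OPERA's approach.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def clip_features(path: str, sr: int = 16_000) -> np.ndarray:
    """Summarise a breathing/cough clip as the mean and std of its MFCCs."""
    audio, _ = librosa.load(path, sr=sr, mono=True)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)  # (20, frames)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

def train_progression_model(paths: list, labels: list) -> LogisticRegression:
    """Fit a classifier on per-clip features and binary outcomes
    (e.g. symptoms later worsened vs. improved)."""
    X = np.stack([clip_features(p) for p in paths])
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X, np.asarray(labels))
    return clf

# Usage (hypothetical file names and outcomes):
# clf = train_progression_model(["p001_cough.wav", "p002_cough.wav"], [1, 0])
# risk = clf.predict_proba(clip_features("p003_cough.wav").reshape(1, -1))[0, 1]
```

Hand-crafted features like these work with modest datasets; a pretrained respiratory foundation model such as OPERA instead supplies learned audio representations that can be reused for tasks, like COPD assessment, where labelled data are scarce.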
Longitudinal health monitoring at scale
EAR’s innovative ‘hearable’ health solutions offer an affordable and scalable way to monitor individual and population health, significantly contributing to the EU’s personalised and preventative health ambitions. “We are now researching how audio in general, especially audio from the ear, could yield additional health information as yet undetected or costly to detect otherwise,” adds Mascolo. In the meantime, a patent has been filed for the earbuds (for respiration rate monitoring), and Mascolo has joined a start-up working on AI-based physiological monitoring through in-ear microphones.