
Computational Analysis of Everyday Soundscapes

Project description

Innovative computational methods for environmental sound description

Everyday sounds can provide valuable information about our environment and the events occurring within it. However, current technology struggles to identify individual sound sources in complex soundscapes where multiple sounds are present and distorted by the surrounding environment. To address this issue, the EVERYSOUND project, funded by the European Research Council, aims to develop computational methods capable of automatically providing high-level descriptions of environmental sounds. The project will use innovative techniques such as joint source separation and robust pattern classification algorithms to reliably recognise multiple overlapping sounds. Additionally, a hierarchical multilayer taxonomy will be developed to accurately classify everyday sounds. The project’s results will provide valuable tools for geographical, social, cultural and biological studies.
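To make the notion of a hierarchical multilayer taxonomy concrete, the sketch below shows one way such a taxonomy could be represented and queried in Python. The category names, levels and the ancestors helper are illustrative assumptions only, not the taxonomy the project will develop.

# Illustrative sketch only: a toy hierarchical taxonomy of everyday sounds.
# The category names and levels are hypothetical examples, not the taxonomy
# developed in EVERYSOUND.

TAXONOMY = {
    "transportation": {
        "road vehicle": ["car passing by", "bus braking", "motorcycle accelerating"],
        "rail": ["tram passing by", "train horn"],
    },
    "nature": {
        "animal": ["bird singing", "dog barking"],
        "weather": ["rain on roof", "wind in trees"],
    },
    "human": {
        "speech": ["conversation", "child shouting"],
        "footsteps": ["walking on gravel", "running on pavement"],
    },
}

def ancestors(label):
    """Return the path from the top level of the taxonomy down to a leaf label."""
    for top, mid_levels in TAXONOMY.items():
        for mid, leaves in mid_levels.items():
            if label in leaves:
                return [top, mid, label]
    raise KeyError(label)

print(ancestors("car passing by"))  # ['transportation', 'road vehicle', 'car passing by']

A layered structure like this lets a recognizer fall back to a coarser category (e.g. "road vehicle") when the exact sound cannot be identified.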

Objective

Sounds carry a large amount of information about our everyday environment and the physical events that take place in it. For example, when a car passes by, one can perceive its approximate size and speed. Sound can be captured easily and unobtrusively, for example by mobile phones, and transmitted onward; tens of hours of audio are uploaded to the internet every minute, for instance in the form of YouTube videos. However, today's technology is not able to recognize individual sound sources in realistic soundscapes, where multiple sounds are present, often simultaneously, and distorted by the environment.
The ground-breaking objective of EVERYSOUND is to develop computational methods which will automatically provide high-level descriptions of environmental sounds in realistic everyday soundscapes such as streets, parks, and homes. This requires developing several novel methods, including joint source separation and robust pattern classification algorithms to reliably recognize multiple overlapping sounds, and a hierarchical multilayer taxonomy to accurately categorize everyday sounds. The methods build on the applicant's internationally recognized and awarded expertise in source separation and robust pattern recognition for speech and music processing, which will now allow tackling the new and challenging research area of everyday sound recognition.
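As an illustration of why overlapping sounds call for multi-label recognition rather than a single "winning" class per time frame, the Python sketch below reports every class whose probability exceeds a threshold. The class names, probabilities and the active_events helper are hypothetical; this is not the project's actual algorithm.

import numpy as np

# Minimal sketch, not the EVERYSOUND method: overlapping sounds are handled by
# treating recognition as multi-label classification, where each class gets an
# independent probability per time frame and every class above a threshold is
# reported, instead of choosing a single class per frame.

CLASSES = ["car passing by", "bird singing", "footsteps", "rain on roof"]  # hypothetical labels
THRESHOLD = 0.5

def active_events(frame_probs, classes=CLASSES, threshold=THRESHOLD):
    """Return all sound classes whose probability meets or exceeds the threshold."""
    return [c for c, p in zip(classes, frame_probs) if p >= threshold]

# Hypothetical per-frame class probabilities, as a classifier might output them
# (rows = time frames, columns = classes).
probs = np.array([
    [0.92, 0.10, 0.71, 0.05],  # car and footsteps overlap in this frame
    [0.15, 0.88, 0.64, 0.07],  # bird song and footsteps overlap here
])

for t, frame in enumerate(probs):
    print(f"frame {t}: {active_events(frame)}")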
The results of EVERYSOUND will enable searching for multimedia based on its audio content, which is not possible with today's technology. They will allow mobile devices, robots, and intelligent monitoring systems to recognize activities in their environments using acoustic information. Automatically producing descriptions of vast quantities of audio will give new tools to geographical, social, cultural, and biological studies for analyzing sounds related to human, animal, and natural activity in urban and rural areas, as well as multimedia in social networks.
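As a rough illustration of how automatically produced sound descriptions could enable audio-content search, the sketch below builds a simple inverted index from recognized sound labels to clips. The clip names, labels and the search helper are invented for illustration and do not describe the project's actual system.

from collections import defaultdict

# Illustrative only: once clips are automatically tagged with recognized sound
# events, searching multimedia by audio content can be reduced to an inverted
# index from sound labels to clips. The clip names and labels are made up.

clip_tags = {
    "park_morning.wav": {"bird singing", "footsteps", "dog barking"},
    "street_rush.wav": {"car passing by", "bus braking", "footsteps"},
    "home_evening.wav": {"conversation", "rain on roof"},
}

index = defaultdict(set)
for clip, tags in clip_tags.items():
    for tag in tags:
        index[tag].add(clip)

def search(*labels):
    """Return clips whose tags contain all of the requested sound events."""
    results = [index.get(label, set()) for label in labels]
    return set.intersection(*results) if results else set()

print(search("footsteps"))                  # both outdoor clips
print(search("footsteps", "bird singing"))  # {'park_morning.wav'}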

Host institution

TAMPEREEN KORKEAKOULUSAATIO SR
Net EU contribution
€ 1 500 000,00
Address
KALEVANTIE 4
33100 Tampere
Finland


Region
Manner-Suomi > Länsi-Suomi > Pirkanmaa
Activity type
Higher or Secondary Education Establishments
Total cost
€ 1 500 000,00

Beneficiaries (1)