
Emotionally-based Tagging of Multimedia Content


Automatic music indexing

An EU project studied music auto-indexing methods in which machines interpret listeners' facial expressions and body movements to generate descriptive search tags. The project also evaluated methods of reading brain waves to detect emotional responses.

Industrial Technologies

The proliferation of online music sources makes finding specific content increasingly difficult. Automatic indexing methods rely on extensive tagging, which often does not exist. A potentially more effective approach is to tag music content implicitly, by detecting listeners' emotional reactions as they listen: machines read human body language and facial expressions and generate the tagging data from them.

The EU-funded project 'Emotionally-based tagging of multimedia content' (EMOTAG) aimed to develop and evaluate such an affect-sensitive implicit tagging system. In particular, the project investigated whether user behaviour could suggest tags and whether this approach improves automatic tagging. The team also assessed the performance benefits of such methods and explored efficient machine-learning techniques. The two-year undertaking concluded in April 2014.

Initial research included analysis of user responses to mismatched tags. By combining brain scans of several individuals, the team was able to identify the brain response indicating a mismatch. However, eye-gaze patterns proved a more reliable method of detection.

Researchers first analysed spontaneous responses to emotional videos. Subsequent work focused on detecting continuous emotions from brain waves and facial expressions. By combining the two, the team concluded that the most emotionally informative component of electroencephalographic signals is the interference from facial muscle activity during expressions. The project identified the most effective method for detecting this activity and achieved state-of-the-art performance in doing so.

The group also developed a new data set for continuous emotional characterisation of music. The investigation concluded that deep recurrent neural networks effectively capture the dynamics of music. EMOTAG extended the automatic detection of human responses and led to applications in auto-tagging and multimedia retrieval.
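To illustrate the implicit-tagging idea described above, the sketch below maps a detected emotional response to a coarse search tag. It assumes the response has already been reduced to two continuous scores, valence and arousal (a common representation in affective computing, the circumplex model); the quadrant-to-tag mapping and the tag names are illustrative placeholders, not EMOTAG's actual labels or method.

```python
def mood_tag(valence, arousal):
    """Map a (valence, arousal) estimate in [-1, 1] x [-1, 1] to a
    coarse mood tag using the quadrants of the circumplex model.
    The four tag names are illustrative placeholders only."""
    if valence >= 0:
        # Positive valence: pleasant emotions, split by energy level.
        return "happy" if arousal >= 0 else "relaxed"
    # Negative valence: unpleasant emotions, split by energy level.
    return "angry" if arousal >= 0 else "sad"

# Example: a listener showing high valence and high arousal while a
# track plays would implicitly tag that track as upbeat.
print(mood_tag(0.6, 0.7))    # pleasant, energetic
print(mood_tag(-0.4, -0.5))  # unpleasant, low energy
```

In a full system, the valence and arousal inputs would come from the facial-expression and brain-wave analysis the project investigated; here they are supplied directly.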
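The conclusion that recurrent networks capture the dynamics of music can be sketched with a minimal example. The single-unit Elman-style recurrence below turns a sequence of per-frame audio features (here, a single scalar such as frame energy) into a continuous per-frame emotion trace; the fixed weights, the scalar feature, and the name `rnn_emotion_trace` are all assumptions for illustration, not the project's actual architecture, which would be a trained deep recurrent network over richer features.

```python
import math

def rnn_emotion_trace(features, w_in=0.8, w_rec=0.5, w_out=1.0):
    """Minimal Elman-style recurrent unit: maps a sequence of scalar
    audio features to a continuous emotion trace (e.g. valence),
    one estimate per frame. Weights are fixed for illustration;
    a real model would learn them from annotated data."""
    h = 0.0          # hidden state carries temporal context across frames
    trace = []
    for x in features:
        h = math.tanh(w_in * x + w_rec * h)  # update state from input + memory
        trace.append(math.tanh(w_out * h))   # per-frame estimate in (-1, 1)
    return trace

# One valence estimate per input frame; earlier frames influence later
# estimates through the hidden state, which is what lets the model
# follow the temporal dynamics of the music.
trace = rnn_emotion_trace([0.1, 0.5, 0.9, 0.4])
```

The point of the recurrence is that each output depends on the whole history of frames so far, which is what a frame-by-frame (memoryless) regressor cannot capture.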

Keywords

Music indexing, emotional response, automatic indexing, tagging, multimedia content
