Community Research and Development Information Service - CORDIS


MAHNOB Report Summary

Project ID: 203143
Funded under: FP7-IDEAS-ERC
Country: United Kingdom

Final Report Summary - MAHNOB (Multimodal Analysis of Human Nonverbal Behaviour in Real-World Settings)

Tools for human behaviour analysis available at the time the MAHNOB project started (2008) could handle only deliberately displayed, exaggerated expressions. Because they were usually trained only on series of such exaggerated expressions, they lacked models of human expressive behaviour as found in real-world settings and could not handle the subtle changes in audiovisual expressions typical of spontaneous behaviour. The main aim of the MAHNOB project (September 2008 – August 2013) was to address this problem and build automated tools for machine interpretation of naturalistic human behaviour.
A critical issue in machine analysis of naturalistic human behaviour is that the human face, body, and voice exhibit complex, rich dynamic behaviour that is nonlinear, time varying, and context dependent. The research carried out during the MAHNOB project addressed some of the most challenging issues in the area.
MAHNOB’s team extended the state of the art in automatic facial behaviour analysis in several directions, including the accuracy and robustness of face and facial feature detection and tracking, the efficiency and accuracy of automatic recognition of facial muscle actions, and the extent and accuracy of automatic recognition of the temporal phases and intensity of facial muscle actions. The latter work is particularly important because machine analysis of behavioural dynamics is crucial for the analysis and correct interpretation of complex behaviours including emotions, pain, and depression.
MAHNOB’s team members are also pioneers in research on body and multimodal naturalistic behaviour analysis. They proposed the first prediction-based audiovisual approach to discrimination between speech and laughter, the first audiovisual and the first visual approach to continuous dimensional affect recognition in valence-arousal space, the first approach to automatic discrimination between agreement and disagreement episodes based on nonverbal behavioural cues, and a method for robust analysis of human bodily activity from unsegmented video sequences. The latter method works independently of camera motion, clutter, and occlusion, making it one of the first to attain this level of robustness.
The MAHNOB team proposed a number of novel and unconventional computer vision and machine learning methodologies, including Image Gradient Orientation (IGO) based subspace learning with a cosine-based distance measure, which has been shown to significantly outperform traditional subspace learning methods on a variety of object recognition and tracking tasks. They also proposed Infinite Hidden Conditional Random Fields which, in contrast to the original Hidden Conditional Random Fields, can automatically learn the optimal number of hidden states for the target classification task. They extended Gaussian Process (GP) Regression to Coupled GP Regression, which learns a set of coupled functions that “share knowledge”, i.e., that take into account the correlations between each other. They also extended Hidden Conditional Ordinal Random Fields (HCORF) to allow simultaneous recognition of facial expressions and their intensities while accounting for differences in subjective facial displays. Finally, they proposed novel Canonical Time Warping methodologies for solving the problems of temporal alignment and fusion of multiple data sequences. These address the critical problem of fusing multiple continuous expert annotations (e.g. in terms of valence and arousal), each of which has its own temporal lag and amplitude bias, and may be noisy.
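The core idea behind IGO-based subspace learning can be sketched briefly: instead of raw pixel intensities, each image is represented by its per-pixel gradient orientations (embedded as cosine/sine pairs to avoid angle wrap-around), a subspace is learned over those features, and matching uses a cosine-based similarity. The sketch below is a simplified, hypothetical illustration using plain PCA in the IGO feature space; the function names and details are ours, not the project's actual implementation.

```python
import numpy as np

def igo_features(img):
    """Map an image to Image Gradient Orientation (IGO) features:
    per-pixel gradient angles phi, embedded as [cos(phi); sin(phi)]
    so that nearby orientations yield nearby feature vectors."""
    gy, gx = np.gradient(img.astype(float))   # image gradients
    phi = np.arctan2(gy, gx)                  # gradient orientations
    return np.concatenate([np.cos(phi).ravel(), np.sin(phi).ravel()])

def igo_pca(images, k):
    """PCA in IGO feature space -- a simplified stand-in for
    IGO subspace learning. Returns the feature mean and the
    top-k principal directions (rows of Vt from the SVD)."""
    X = np.stack([igo_features(im) for im in images])  # (n, 2*H*W)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def cosine_similarity(a, b):
    """Cosine-based measure used to compare projections
    in the learned subspace."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
```

To match a probe image against a gallery, one would project both through `igo_pca`'s components and rank gallery entries by `cosine_similarity`. The appeal of gradient orientations is that they discard absolute intensity, which gives robustness to illumination changes and partial occlusion that plain pixel-based PCA lacks.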
The MAHNOB team also released four databases of multimodal recordings of spontaneous human behaviour, captured while subjects were watching multimedia material or were involved in dyadic interactions. To the best of our knowledge, these are the first databases of their kind.
