Audio-visual speech processing has attracted significant interest over the past 15 years. Research in the area has focused on recruiting visual speech information, extracted from the speaker's mouth region, to improve the robustness of traditional, acoustic-only speech processing. To date, however, most work has been limited to ideal-case scenarios, in which the visual data are of high quality (typically steady frontal head pose, high resolution, and uniform lighting) and the audio signal contains speech by a single subject, in most cases artificially contaminated by noise in order to demonstrate significant improvements in system performance. These conditions remain far from unconstrained, multi-party human interaction; not surprisingly, then, practical audio-visual speech systems have yet to be deployed in real life.

In this proposal, we aim to expand the state of the art beyond such idealized “toy” examples toward realistic human-computer interaction in difficult environments, such as the office, the automobile, broadcast news, and meetings. Successful audio-visual speech processing in these settings requires progress beyond the state of the art in the robust extraction of visual speech information, as well as in its efficient fusion with the acoustic modality, owing to the varying quality of the extracted streams. We propose to study a number of speech technologies in such environments that stand to benefit from multimodality, e.g. speech recognition, activity detection, diarization, and separation.

The envisaged work will span 42 months of activity and is planned as a natural evolution of the research efforts of the candidate, Dr. Gerasimos Potamianos, while at AT&T Labs and IBM Research in the US, to be conducted jointly with the host organization, the Institute of Informatics and Telecommunications at the National Center of Scientific Research "Demokritos", in Athens, Greece.