
A Model for Predicting Perceived Quality of Audio-visual Speech based on Automatic Assessment of Intermodal Asynchrony

Objective

In recent years, there has been a marked increase in communication technologies and computer interfaces that operate within the audio-visual speech domain (e.g. video telephony, synthesised avatars). Faithful synchrony between the visual and acoustic speech elements of such technologies is of great importance in ensuring that end-users perceive them as operating at a high level of quality. The effect of intermodal asynchrony on user-perceived quality is typically assessed using subjective evaluation techniques. A system that automatically assesses asynchrony levels, and predicts quality degradation on that basis, would therefore be both desirable and useful, and would have direct application to techniques for automatic synchrony adjustment.
The proposed project will examine audio-visual speech both as spoken naturally by humans and as artificially synthesised by machines, and will combine subjective assessment techniques with machine learning in an iterative, semi-automatic strategy for producing a Quality Prediction Model. Different levels of intermodal asynchrony will first be assessed by human subjects, who will score the effect of each asynchrony level on perceived speech quality using standardised techniques modified for use with multimodal speech. Asynchrony patterns and their corresponding subjective assessment scores will then be learned automatically by machines, yielding an initial Quality Prediction Model. The initial model will be tested on new data that will simultaneously be assessed by human subjects using the same subjective techniques. The model's output will be compared directly with the subjective scores, providing an initial evaluation of its performance. The model will be adjusted on this basis and re-trained on new data. This re-train, re-test, re-score cycle will be repeated iteratively, leading to a progressively more robust quality prediction model.
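The iterative strategy above can be sketched in code. The following is a minimal, hypothetical illustration, not the project's actual method: it assumes the model maps an asynchrony level (in milliseconds) to a predicted mean opinion score (MOS) via a simple least-squares fit, and all data values are invented for the example. The real project would use richer asynchrony patterns and learned models.

```python
# Hypothetical sketch of the iterative train / test / re-score loop.
# Each (asynchrony_ms, mos) pair maps an intermodal asynchrony level
# (negative = audio leads video) to an invented mean opinion score (1-5)
# from a subjective assessment round.

def fit_linear(pairs):
    """Least-squares fit of MOS against absolute asynchrony (ms)."""
    xs = [abs(a) for a, _ in pairs]
    ys = [s for _, s in pairs]
    n = len(pairs)
    mx, my = sum(xs) / n, sum(ys) / n
    var = sum((x - mx) ** 2 for x in xs)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = cov / var
    return slope, my - slope * mx  # (slope, intercept)

def predict(model, asynchrony_ms):
    """Predicted MOS for a given asynchrony level."""
    slope, intercept = model
    return slope * abs(asynchrony_ms) + intercept

def mean_abs_error(model, pairs):
    """Compare model output with subjective scores (model evaluation step)."""
    return sum(abs(predict(model, a) - s) for a, s in pairs) / len(pairs)

# Round 1: initial subjective assessment data (invented values).
train = [(0, 4.8), (-80, 4.5), (80, 4.1), (-160, 3.6), (160, 3.0)]
test = [(-40, 4.6), (120, 3.5), (240, 2.4)]

model_r1 = fit_linear(train)          # initial Quality Prediction Model
err_r1 = mean_abs_error(model_r1, test)  # evaluate against human scores

# Round 2: fold the freshly scored test data back into training and
# re-train, mirroring the re-train / re-test / re-score cycle.
model_r2 = fit_linear(train + test)
```

In this toy setting, each iteration folds the newly scored material into the training set, so the fitted mapping from asynchrony to perceived quality is refined as more subjective data accumulates.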

Field of science

  • /engineering and technology/electrical engineering, electronic engineering, information engineering/information engineering/telecommunications/telecommunications network
  • /humanities/languages and literature/linguistics/phonetics
  • /natural sciences/computer and information sciences/artificial intelligence/machine learning

Call for proposal

FP7-PEOPLE-2010-IEF

Funding Scheme

MC-IEF - Intra-European Fellowships (IEF)

Coordinator

TECHNISCHE UNIVERSITAT BERLIN
Address
Strasse Des 17 Juni 135
10623 Berlin
Germany
Activity type
Higher or Secondary Education Establishments
EU contribution
€ 155 542,40
Administrative Contact
Simone Ludwig (Ms.)