CORDIS
EU research results

Conversational Multimodal Interaction with Computers

Objective

COMIC combines research in software and systems engineering with work on human-human and human-computer interaction, cognitive psychology, and Human Factors to enable novel 3G mobile eCommerce and eWork services, in which computers, accessed from small portable terminals, act as partners in collaborative problem solving. COMIC's R&D will contribute to safer and healthier working environments, strengthen Europe's leading position in mobile telecom services, and thereby increase employment. To realise the long-term vision of an ambient intelligence landscape in which artificial agents can understand emerging intentions in mixed-initiative natural language conversations with a customer, COMIC will build successively more capable demonstrator systems, which will then be used for Human Factors studies.

OBJECTIVES
COMIC has two main objectives:
1. Harness knowledge from Cognitive Psychology to develop the software and tools needed to create novel eCommerce and eWork services, especially for small mobile multimodal terminals. This R&D will lead to healthier, safer and more pleasant working conditions, and in doing so will strengthen Europe's leading position in the field of mobile telecom services;
2. Demonstrate the usability of these results for a range of novel services through a series of successively more powerful demonstrators that show opportunities for novel eWork and eCommerce services. The demonstrators will be used to carry out Human Factors studies.

DESCRIPTION OF WORK
COMIC will combine extensive knowledge and expertise in Cognitive Psychology and in software and system development to design and perform experiments on human-human and human-computer interaction in language-centric multimodal environments. The experiments will be based on scenarios that can be tightly controlled but that are at the same time relevant for eCommerce and eWork applications, in asymmetric situations where one of the partners (the computer) has access to only some of the channels. These experiments will provide corpora of multimodal interaction, which will be used to build models of all stages of intention comprehension and to develop full-duplex interaction systems that process foreground and back-channel information at the same time.

On the input side, speech and handwriting recognition will be enhanced with simultaneous processing of paralinguistic information and 3D pen-based gestures. Recognition performance will be improved by cross-channel information exchange and by constant feedback from the central modules of the system about the most probable next move in the dialogue.

The corpora will also be used to develop dynamic models of facial expressions, which will be used on the output side to animate a display agent, along with task-related graphical and text information. For the generation of actual output we will pursue a Unit Selection approach, which has been successful in speech synthesis and which also has roots in psycholinguistics. Experiments will be performed to investigate human comprehension of synthesised facial expressions and transitory visual information. New models will be developed of the formation of beliefs and the comprehension of emerging intentions in interactions within specific domains, such as route planning and bathroom re-decoration.
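The Unit Selection approach mentioned above is, in standard speech-synthesis practice, a dynamic-programming search over a database of stored units: each candidate is scored by a target cost (how well it fits the requested specification) and a join cost (how smoothly it concatenates with its neighbour), and the cheapest overall sequence is selected. The sketch below illustrates that general idea only; the function names, cost definitions and toy numeric data are illustrative assumptions, not part of the COMIC project.

```python
def select_units(targets, candidates, target_cost, join_cost):
    """Pick one unit per slot minimising total target + join cost (Viterbi-style).

    targets: list of target specifications, one per slot.
    candidates: candidates[i] is the list of database units available for slot i.
    """
    n = len(targets)
    # cost[i][j]: cheapest cumulative cost ending with candidate j at slot i
    cost = [[target_cost(targets[0], u) for u in candidates[0]]]
    back = [[None] * len(candidates[0])]  # back-pointers for path recovery
    for i in range(1, n):
        cost_row, back_row = [], []
        for u in candidates[i]:
            # best predecessor = min over previous candidates of (cumulative + join)
            k_best = min(
                range(len(candidates[i - 1])),
                key=lambda k: cost[i - 1][k] + join_cost(candidates[i - 1][k], u),
            )
            cost_row.append(
                cost[i - 1][k_best]
                + join_cost(candidates[i - 1][k_best], u)
                + target_cost(targets[i], u)
            )
            back_row.append(k_best)
        cost.append(cost_row)
        back.append(back_row)
    # backtrack from the cheapest final candidate
    j = min(range(len(candidates[-1])), key=lambda k: cost[-1][k])
    path = [j]
    for i in range(n - 1, 0, -1):
        j = back[i][j]
        path.append(j)
    path.reverse()
    return [candidates[i][path[i]] for i in range(n)]


# Toy usage: units are plain numbers, target cost is distance to the spec,
# join cost is the jump between consecutive units.
targets = [1, 5, 3]
candidates = [[0, 2], [4, 6], [3, 8]]
chosen = select_units(
    targets, candidates,
    target_cost=lambda t, u: abs(t - u),
    join_cost=lambda a, b: abs(a - b),
)
print(chosen)  # → [2, 4, 3]
```

The same cost-plus-search scheme carries over from waveform units to the animated facial expressions discussed above: only the unit inventory and the two cost functions change.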


Coordinator

MAX-PLANCK-GESELLSCHAFT ZUR FOERDERUNG DER WISSENSCHAFT VERTRETEN DURCH DAS MAX-PLANCK-INSTITUT FUER PSYCHOLINGUISTIK

Address

Wundtlaan 1
6500 AH Nijmegen

Netherlands

Administrative Contact

Willem LEVELT

Participants (6)


DEUTSCHES FORSCHUNGSZENTRUM FUER KUENSTLICHE INTELLIGENZ GMBH

Germany

MAX-PLANCK GESELLSCHAFT ZUR FOERDERUNG DER WISSENSCHAFTEN E.V.

Germany

STICHTING KATHOLIEKE UNIVERSITEIT

Netherlands

THE UNIVERSITY OF EDINBURGH

United Kingdom

THE UNIVERSITY OF SHEFFIELD

United Kingdom

VISOFT GBR, OTTMAR WEBER & RAINER NISSLER

Germany

Project information

Grant agreement ID: IST-2001-32311

  • Start date

    1 March 2002

  • End date

    28 February 2005

Funded under:

FP5-IST

  • Overall budget:

    € 4 357 674

  • EU contribution

    € 3 497 742

Coordinated by:

MAX-PLANCK-GESELLSCHAFT ZUR FOERDERUNG DER WISSENSCHAFT VERTRETEN DURCH DAS MAX-PLANCK-INSTITUT FUER PSYCHOLINGUISTIK

Netherlands