COMIC combines research in software and systems engineering with research in human-human and human-computer interaction, cognitive psychology, and Human Factors to enable novel 3G mobile eCommerce and eWork services, in which computers, accessed from small portable terminals, act as partners in collaborative problem solving. COMIC's R&D will contribute to safer and healthier working environments, strengthen Europe's leading position in mobile telecom services, and thereby increase employment. To realise the long-term vision of an ambient-intelligence landscape in which artificial agents can understand emerging intentions in mixed-initiative natural language conversations with a customer, COMIC will build successively more capable demonstrator systems, which will then be used for Human Factors studies.
COMIC has two main objectives:
1. Harness knowledge from Cognitive Psychology to develop the software and tools needed to create novel eCommerce and eWork services, especially for small mobile multimodal terminals. This R&D will result in healthier, safer, and more pleasant working conditions; in doing so, COMIC will strengthen Europe's leading position in the field of mobile telecom services;
2. Show the usability of the results for a range of novel services through a series of successively more powerful demonstrators that showcase opportunities for novel eWork and eCommerce services. The demonstrators will be used to carry out Human Factors studies.
DESCRIPTION OF WORK
COMIC will combine extensive knowledge and expertise in Cognitive Psychology and in software and system development to design and perform experiments on human-human and human-computer interaction in language-centric multimodal environments. The experiments will be based on scenarios that can be tightly controlled but that are at the same time relevant for eCommerce and eWork applications, in asymmetric situations where one of the partners (the computer) has access to only some of the channels. These experiments will provide corpora of multimodal interaction, which will be used to build models of all stages of intention comprehension and to develop full-duplex interaction systems that process foreground and back-channel information simultaneously.

On the input side, speech and handwriting recognition will be enhanced with simultaneous processing of paralinguistic information and 3D pen-based gestures. Recognition performance will be improved by cross-channel information exchange and by constant feedback from the central modules of the system about the most probable next move in the dialogue.

The corpora will also be used to develop dynamic models of facial expressions, which will drive the animation of a display agent on the output side, along with task-related graphical and textual information. For the generation of actual output we will pursue a Unit Selection approach, which has been successful in speech synthesis and which also has roots in psycholinguistics. Experiments will be performed to investigate human comprehension of synthesised facial expressions and transitory visual information. New models will be developed of the formation of beliefs and the comprehension of emerging intentions in interactions within specific domains, such as route planning and bathroom re-decoration.
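To give a flavour of the Unit Selection idea mentioned above: in concatenative synthesis, output is assembled from pre-recorded units by choosing, for each target position, the candidate unit that minimises the sum of a target cost (mismatch with the desired specification) and a join cost (discontinuity with the preceding unit), typically via a Viterbi-style dynamic-programming search. The sketch below is a deliberately simplified illustration of that general principle; the function name, data layout, and cost functions are hypothetical and are not COMIC's actual implementation.

```python
def unit_select(targets, candidates, target_cost, join_cost):
    """Pick one unit per target position minimising total cost.

    targets: list of target specifications, one per output position.
    candidates: dict mapping each position index to its candidate units.
    target_cost(t, u): mismatch between target t and unit u.
    join_cost(u_prev, u): discontinuity penalty for concatenating units.
    Returns the minimum-cost sequence of units (Viterbi search).
    """
    n = len(targets)
    # best[i][u] = (cheapest cost ending in unit u at position i, backpointer)
    best = [{} for _ in range(n)]
    for u in candidates[0]:
        best[0][u] = (target_cost(targets[0], u), None)
    for i in range(1, n):
        for u in candidates[i]:
            tc = target_cost(targets[i], u)
            # cost of arriving at u from each unit of the previous position
            prev_costs = {p: best[i - 1][p][0] + join_cost(p, u)
                          for p in candidates[i - 1]}
            p_min = min(prev_costs, key=prev_costs.get)
            best[i][u] = (tc + prev_costs[p_min], p_min)
    # trace the cheapest path back from the last position
    u = min(best[-1], key=lambda x: best[-1][x][0])
    path = [u]
    for i in range(n - 1, 0, -1):
        u = best[i][u][1]
        path.append(u)
    return list(reversed(path))


# Toy usage: numbers stand in for units; both costs are absolute differences.
print(unit_select([1, 5, 9],
                  {0: [0, 2], 1: [4, 8], 2: [9]},
                  lambda t, u: abs(t - u),
                  lambda a, b: abs(a - b)))  # → [2, 4, 9]
```

The same search structure applies whether the units are speech segments or, as in COMIC's output generation, elements of facial animation: only the unit inventory and the two cost functions change.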
Funding Scheme: CSC - Cost-sharing contracts