CORDIS - EU research results

Content archived on 2023-03-01


Chatting freely with animated historical characters

Once upon a time, there lived the great Danish storyteller Hans Christian Andersen. Today, aided by computers, a virtual Andersen entertains youngsters in his home town of Odense. His natural and interactive communication talent has aroused the interest of the education and gaming industries.

Walk into the Hans Christian Andersen museum and you might see and hear the man himself. Though only virtual, he can hold visitors' attention for up to 15 minutes, chatting with them about himself and telling his fairy tales. He is the fruit of NICE, an IST project which has developed software enabling dialogue with animated characters. The project partners created two animated characters for museums in Odense and Stockholm. Visitors can have speech and gesture conversations with the legendary fairy-tale author, or play spoken computer games with a character called Cloddy Hans. According to project coordinator Niels Ole Bernsen, most speech systems are task-oriented. "They typically read out emails or train timetables. Our project built a more responsive system, which we call domain-oriented, mimicking the way humans talk and interact."

Recreating Hans Christian Andersen

Thanks to some 600 output templates and primitives, the NICE system recreates the original author's personality. It also enables Andersen to chat with others about his life and stories, and to respond correctly to both verbal and non-verbal input. For example, he can make gestures or facial expressions in line with visitors' remarks or questions, whether in Swedish or English. The Andersen character was designed for an entertaining museum setting, where input and output errors made by the character and visitors are not important, and where users will spend no more than 15 minutes conversing with him. A typical PC game, by contrast, contains up to 30 hours of carefully programmed content. Nevertheless, according to Bernsen, the system's ability to link spoken conversation with 2D input gestures in a 3D dynamic graphics virtual world is sure to interest the education and gaming industries. "To move ahead with commercialisation, we will demonstrate our system to interested parties. We will also measure how long it takes to port the system to other historic animated characters, such as the scientist Sir Isaac Newton.
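The template-driven output described above can be sketched roughly as follows. This is a minimal illustration only: the template texts, topic keys, and gesture names are invented for the example, not taken from the NICE system, which reportedly used some 600 templates and primitives.

```python
# Hypothetical miniature of a template-based output store. Each topic maps
# to a spoken-text template and an accompanying gesture, so that verbal and
# non-verbal output stay in line with each other.
TEMPLATES = {
    "greeting": ("Welcome to my study, {name}. Shall I tell you a tale?", "bow"),
    "life": ("I was born in Odense in 1805, and I have travelled widely.", "open_arms"),
    "story": ("Have you heard of the Ugly Duckling? It is a favourite of mine.", "smile"),
}

def generate_output(topic: str, **slots) -> tuple[str, str]:
    """Pick the template for a topic, fill its slots, and return
    (spoken text, accompanying gesture)."""
    text, gesture = TEMPLATES.get(topic, ("I am not sure what you mean.", "shrug"))
    return text.format(**slots), gesture
```

Pairing each text template with a gesture primitive is one simple way to keep speech and animation synchronised, which is the coupling the article highlights.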
If we can do this work quickly, we may even complete it after the project officially ends."

Of commercial interest

Computer games companies have expressed interest in the system's abilities to recognise natural language and to manage an entire conversation through words or gestures. "Language understanding underpins our system," says Bernsen. "Andersen can understand whether people's remarks are insulting, irrelevant or comprehensible, and then respond appropriately. All dialogue systems need a management module like this." Games companies that have seen the project's animated characters are also impressed that the system actually works, says Morgan Freeman of project partner Liquid Media. "NICE developed a system that offers users genuine interaction with characters," he adds. "It also generates far richer responses from the character than comparable PC games can, because you can ask the character questions outside of its knowledge domain." Freeman notes that the next generation of PCs and games consoles will offer vastly greater graphics and content capacity. But to make the most of these features without hiring many more developers, games companies will increasingly need to generate content and characters in a smart fashion, a task perfectly suited to natural and interactive communication systems such as NICE's. Also of note are NICE's speech recognisers, based on acoustic data collected from Swedish- and English-speaking children from 30 nations. This acoustic data will be made available on the project website from March 2005, and may be sold or traded commercially. The recognisers can recognise a child's speech with reasonable accuracy, a feature lacking in traditional speech systems. "We discovered that children like to tell Andersen about their lives and about consumer goods such as mobile phones. The real Andersen loved technology, so we programmed our character to respond to what they tell him," says Bernsen.
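The dialogue-management decision Bernsen describes, classifying a remark as insulting, irrelevant or comprehensible and choosing a response strategy, can be sketched in miniature. The keyword lists and canned replies below are invented for illustration; the actual NICE module is of course far richer than a keyword match.

```python
# Hypothetical sketch of a dialogue-management classification step.
# Word sets are illustrative stand-ins, not the project's real lexicon.
INSULTS = {"stupid", "ugly", "boring"}
KNOWN_TOPICS = {"fairy", "tale", "denmark", "odense", "duckling", "story"}

def classify(remark: str) -> str:
    """Label a visitor's remark as insulting, comprehensible, or irrelevant."""
    words = set(remark.lower().split())
    if words & INSULTS:
        return "insulting"
    if words & KNOWN_TOPICS:
        return "comprehensible"
    return "irrelevant"

def respond(remark: str) -> str:
    """Choose a response strategy based on the classification."""
    return {
        "insulting": "Now, now. Let us keep our conversation polite.",
        "irrelevant": "That is outside my world, I fear. Ask me about my tales!",
        "comprehensible": "Ah, a fine question. Let me tell you...",
    }[classify(remark)]
```

The key design point the article makes is that every dialogue system needs this management layer between understanding and output, whatever the sophistication of the classifier behind it.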
The partners have also endowed their character with more knowledge than is contained in the system. "If the kids ask the writer what he knows about cars, for instance, a special module searches the Internet for information on the subject," says Bernsen. "He will then reply using some of that information. By integrating even more information, including whatever Andersen learns from the people he talks to, we could create almost seamless conversations." This feature has great potential for the education and entertainment sectors, notes the coordinator. Other goals for the partners are to see whether their platform could be adapted for other languages or historic figures. They also hope to create a Web-integrated Andersen who can learn from visitors and 'see' them. This would result in a highly immersive system, in which people would feel they could truly interact with characters from the past.

Contact:
Niels Ole Bernsen
University of Southern Denmark
Natural Interactive Systems Laboratory
Campusvej 55
DK-5230 Odense M
Denmark
Tel: +45-6550 3544
Fax: +45-6550 3849
Email: nob@nis.sdu.dk

Published by the IST Results service, which brings you online ICT news and analysis on the emerging results from the European Commission's Information Society Technologies research initiative. The service reports on prototype products and services ready for commercialisation, as well as work in progress and interim results with significant potential for exploitation.

Countries

Denmark
