CORDIS - EU research results

MultiModal Mall Entertainment Robot

Periodic Reporting for period 3 - MuMMER (MultiModal Mall Entertainment Robot)

Reporting period: 2018-09-01 to 2020-02-29

In the MuMMER project (MultiModal Mall Entertainment Robot), we have developed a socially intelligent interactive robot designed to interact with the general public in open spaces, using SoftBank Robotics' Pepper humanoid robot as the primary platform. The MuMMER system provides an entertaining and engaging experience when interacting with the general public in a public shopping mall. Crucially, our robot exhibits behaviour that is socially appropriate and engaging by combining speech-based conversational interaction with non-verbal communication and motion planning. To support this behaviour, we have developed and integrated new methods from audiovisual scene processing, social-signal processing, conversational AI, perspective taking, and geometric reasoning. Throughout the project, the robot was deployed in Ideapark, a large public shopping mall in Finland: initially for short visits to aid in collaborative scenario development, co-design, and system evaluation, and later for a long-term field study in the 4th year of the project.

The objectives of the project included:
1. Developing an interactive robot for entertainment applications.
2. Involving stakeholders throughout the project in a co-design process.
3. Allowing the robot to perceive the world through its own built-in sensors.
4. Automatically learning strategies for the robot to interact with humans.
5. Moving and navigating safely and naturally in a crowded public space.
6. Developing new business models and opportunities for socially interactive robots in public spaces.

The results of MuMMER include:
- A co-designed interactive mobile robot with entertainment features and behaviours that is able to interact naturally with humans in a public space.
- A set of concrete, detailed, tested use and business scenarios for a mobile entertainment robot in a shopping mall.
- A set of success criteria and evaluation strategies designed to evaluate the success of the robot in its designated tasks.
- A set of publicly available, reusable, state-of-the-art components for audiovisual scene processing, social signal processing, high-level action selection, and human-aware robot navigation.

In the first period, we made progress on many fronts. We held the first workshops and focus groups with customers, retailers, and management at Ideapark. Workshop results were used to specify an initial target scenario for the robot system, and to develop initial versions of the acceptance questionnaire to be used to track user reactions to the robot throughout the project.

All partners received the Pepper robot in June 2016. All technical partners then developed initial versions of the components that would be combined to create the MuMMER system, and these were integrated into an initial interactive system that supports the target scenario identified by the co-design process. The Pepper robot hardware was evaluated in the context of the project needs, and a concrete plan was developed for hardware and software updates to be made to Pepper to allow it to fully support the project research goals.

During Period 2, we identified and refined a concrete scenario relevant to the mall situation which supports the integration of state-of-the-art research from all partners. The scenario is based on guidance – i.e. helping users to find locations in the mall – but also includes aspects of interactive chat and entertainment. We carried out a human-human study to assess how the current mall guides carry out guidance tasks, and implemented a version of the MuMMER system that supports this guidance scenario, deploying it in the mall. Significant effort was made to develop a modified version of Pepper with hardware suitable for the mall environment.

We carried out regular studies in the mall measuring user acceptance of the robot, involving several hundred participants.

The primary focus in WP2 was on updating and extending the perception components to support more robust and informative perception modules. In WP3, we developed and evaluated components for generating non-verbal behaviour of the robot designed to produce particular social effects on the user. In WP4, we developed a new dialogue system called Alana, a scalable and highly customizable open-domain dialogue system comprising several interchangeable components, combined into 4 basic modules. Work in WP5 concentrated on enhancing and integrating all the building blocks involved in the navigation and localisation tasks. In WP8, we outlined four possible use scenarios for a MuMMER-like robot system and discussed possible business advantages of each scenario.
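To illustrate the idea of a dialogue system built from interchangeable components, the following is a minimal, hypothetical sketch of an ensemble-style dialogue manager: several independent response generators ("bots") each propose a candidate reply, and a ranker selects one. All names, replies, and the toy scoring heuristic here are illustrative assumptions, not the actual Alana implementation.

```python
# Hypothetical ensemble-style dialogue manager: interchangeable "bots"
# propose candidate responses and a ranker picks one. Purely a sketch.
from typing import Callable, List, Tuple

Bot = Callable[[str], str]

def chit_chat_bot(utterance: str) -> str:
    # Fallback small-talk component: always has something to say.
    return "That's interesting! Tell me more."

def guidance_bot(utterance: str) -> str:
    # Task-oriented component: only answers location questions.
    if "where" in utterance.lower():
        return "The shop you are looking for is on the second floor."
    return ""

def rank_responses(candidates: List[Tuple[str, str]]) -> str:
    # Toy ranker: prefer the longest non-empty candidate.
    non_empty = [(name, r) for name, r in candidates if r]
    return max(non_empty, key=lambda pair: len(pair[1]))[1]

def respond(utterance: str, bots: List[Tuple[str, Bot]]) -> str:
    # Every bot proposes a response; the ranker selects the winner.
    candidates = [(name, bot(utterance)) for name, bot in bots]
    return rank_responses(candidates)

bots = [("chit_chat", chit_chat_bot), ("guidance", guidance_bot)]
print(respond("Where is the toy shop?", bots))
```

Because the bots share a single interface, components can be added, removed, or swapped without touching the selection logic, which is the scalability property the description above refers to.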

During the final reporting period, development continued on all technical components, resulting in a final set of state-of-the-art components in all areas. However, beyond this, during this period the project achieved its final goal of a long-term deployment in the mall. Specific achievements in this area include the following:
1. A modified version of the Pepper robot, with hardware suitable for the mall environment, was delivered to all partners.
2. A final scenario integrating guidance with social chat was developed, and software supporting the guidance scenario was developed by the partners and integrated on the target robot platform.
3. The final, integrated robot system was deployed in the target environment over a 14-week period from September 2019 through January 2020.

We have developed a modular architecture that allows state-of-the-art software components to work together to support socially intelligent interaction with the MuMMER robot, and have integrated all components into this architecture. These components address tasks such as audiovisual processing, social signal processing, interaction management and action selection, and interactive navigation and motion planning -- all of which are necessary to support the overall task of socially intelligent and engaging interaction with a robot in a public space. With these advanced software components operating together on a modified version of Pepper, currently the most widely available social robot platform, the potential impact is large: a robot that is able to support this sort of socially intelligent interaction can be deployed in a wide variety of public contexts. In addition, as part of the co-design process, we have developed a set of metrics to evaluate the success of a socially interactive public-space robot. As such robots are more widely deployed, these metrics will play a crucial role in assessing their performance in various contexts.
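One common way to decouple components in a modular robot architecture like the one described above is a shared blackboard (or publish/subscribe) pattern, where perception modules post results and an action selector reads them. The sketch below is a hypothetical illustration of that pattern; the component names and the selection rule are assumptions for the example, not the MuMMER implementation.

```python
# Minimal blackboard-style sketch: independent modules post results to
# a shared store, and an action selector reads them. Illustrative only.
from typing import Any, Dict

class Blackboard:
    """Shared store that decouples producing modules from consumers."""
    def __init__(self) -> None:
        self._data: Dict[str, Any] = {}

    def post(self, key: str, value: Any) -> None:
        self._data[key] = value

    def read(self, key: str, default: Any = None) -> Any:
        return self._data.get(key, default)

def perception_step(bb: Blackboard) -> None:
    # Stand-in for audiovisual processing: reports a detected person.
    bb.post("person_detected", True)

def social_signal_step(bb: Blackboard) -> None:
    # Stand-in for social signal processing: estimates engagement.
    if bb.read("person_detected"):
        bb.post("engagement", "high")

def select_action(bb: Blackboard) -> str:
    # Stand-in for action selection: maps the current state to an action.
    if bb.read("engagement") == "high":
        return "greet_and_offer_guidance"
    return "idle"

bb = Blackboard()
perception_step(bb)
social_signal_step(bb)
print(select_action(bb))
```

The benefit of this structure is that each module only depends on the shared store, so a component can be replaced or upgraded without changes to the others.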

At the end of the project, the following represent the progress beyond the state of the art:
1. A prototype modified version of the Pepper robot, developed by SoftBank for the purposes of the MuMMER project.
2. Software for audiovisual sensing and person tracking.
3. Software for social signal recognition, and social signal generation involving novel non-verbal behaviours of the robot.
4. A conversational system that supports multithreaded dialogue on a range of topics, and novel neural techniques for Natural Language Understanding suitable for spoken HRI.
5. A suite of components suitable for human-aware navigation and guidance in the shopping mall context.
6. Recordings and user evaluations from a long-term, autonomous robot deployment in the shopping mall for a total of 49 days over 14 weeks.

Pepper Robot at Carrefour (© 2016 SoftBank Robotics Europe)
Ideapark shopping mall -- image from Wikimedia Commons (public domain)