
Safe Robot Navigation in Dense Crowds

Periodic Reporting for period 3 - CROWDBOT (Safe Robot Navigation in Dense Crowds)

Reporting period: 2020-07-01 to 2021-12-31

The main goal of the CROWDBOT project is to demonstrate safe navigation of robots among dense human crowds. Three keywords capture the challenges involved in reaching this goal: safety, navigation and human crowds. A conventional robot can be operated safely by moving at a slow, gentle pace; because of this cautious maneuvering it tends to freeze whenever it senses contact or an imminent collision, and its body frame may be made compliant, using soft materials, to prevent bodily harm during contact. In CROWDBOT, however, we place no such requirements for motion or body design on robots. Any robotic platform, big or small, heavy or light, must be able to navigate safely at the same pace as the human crowd it is roaming amongst. Navigation, in its simplest form, is movement from a start point to a goal point. Most robots on the market navigate with the help of aids such as pre-loaded maps, GPS or similar localization solutions, and wireless communication for real-time interaction and remote monitoring. In CROWDBOT we assume all our robots are self-sufficient and autonomous: no pre-loaded map, no GPS-type navigation aid and no external communication options. The robot must rely on its own sensors, processors and intelligent algorithms for mapping, localization and motion/path planning. Additional navigation aids would certainly add redundancy and robustness to the system, but the goal of CROWDBOT is to demonstrate self-reliant navigation without such aids.
As a unique CROWDBOT feature beyond the state of the art, our robots will be able to sense, track and perceive the behavior of human crowds in their vicinity. Color-vision and depth sensor data, plus LIDAR and other proximity sensor data, are fused and interpreted in real time to better understand the individual and group behavior, speed, acceleration, direction and related motion profiles of human crowds. The robots are then able to distinguish idle, standing or sitting humans from those in motion and, of course, also identify non-human physical objects and other obstacles.
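The distinction between idle and moving pedestrians described above can be sketched as a simple velocity-threshold check over tracked detections. This is a minimal illustration under assumed names and thresholds, not the project's actual tracker: `TrackedPerson`, `IDLE_SPEED_THRESHOLD` and the sample data are all hypothetical.

```python
import math
from dataclasses import dataclass, field

# Hypothetical threshold: below this estimated speed (m/s) a person
# is treated as idle/standing rather than in motion.
IDLE_SPEED_THRESHOLD = 0.2

@dataclass
class TrackedPerson:
    """A person tracked from fused RGB-D / LIDAR detections."""
    track_id: int
    positions: list = field(default_factory=list)  # (t, x, y) samples

    def update(self, t, x, y):
        self.positions.append((t, x, y))

    def speed(self):
        """Estimate planar speed from the last two position samples."""
        if len(self.positions) < 2:
            return 0.0
        (t0, x0, y0), (t1, x1, y1) = self.positions[-2], self.positions[-1]
        dt = t1 - t0
        if dt <= 0:
            return 0.0
        return math.hypot(x1 - x0, y1 - y0) / dt

    def is_moving(self):
        return self.speed() > IDLE_SPEED_THRESHOLD

# Example: one person standing still, one walking past the robot.
standing = TrackedPerson(track_id=1)
standing.update(0.0, 2.0, 1.0)
standing.update(0.5, 2.02, 1.01)   # ~0.04 m/s of sensor jitter

walking = TrackedPerson(track_id=2)
walking.update(0.0, 5.0, 0.0)
walking.update(0.5, 4.4, 0.0)      # ~1.2 m/s

print(standing.is_moving())  # False
print(walking.is_moving())   # True
```

A real tracker would of course smooth positions over many frames and handle occlusions and identity switches; the point here is only the idle-versus-moving classification step.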
The technical work and achievements of CROWDBOT are important to society for two reasons. First, the project propels the advancement of mobile robotics, computer vision and the application and exploitation of related sensor technologies. CROWDBOT technologies are not limited to the field of robotics; they apply to other autonomous systems such as driverless cars, unmanned aircraft and even Internet-of-Things devices. Second, research activities and prototype demonstrations in the robotics community have expanded at a very fast pace in the past few years, largely driven by public fascination with robots, advances in artificial intelligence and the availability of funding from commercial investors. We now see robots roaming in public places, office environments, industrial warehouses and even restaurants, where they perform as food preparers or servers. What is lacking so far are guidelines, operational procedures and best practices for safe navigation of such robots in such environments. Along with navigation, safety is one of the two main objectives of CROWDBOT. The team will develop the necessary framework that will eventually lead to standardization and certification of any robotic platform for safe operation among human crowds in private or public environments.
Under the umbrella objective of safe navigation among crowds, CROWDBOT pursues five specific technical objectives: 1) sensing the crowd around the robot, 2) predicting crowd motion in the short term, 3) navigating safely and efficiently in the crowd, 4) creating a new set of tools to evaluate the risks of navigation within crowds, and 5) delivering safety, ethical and legal recommendations for robot navigation in public environments.
The CROWDBOT project duration is 3.5 years (42 months). Year 1 can be viewed as a "Planning & Prototype" phase, whereas Years 2 and 3 are "Develop-Test-Fix" phases. In Year 1 the team carried out planning and preparation for smooth execution of robotic navigation tests in Years 2 and 3. We developed operational scenarios for testing, prepared a requirements document and began stakeholder engagements to solicit input from outside experts and user communities. We scoped out possible venues and the material resources required to hold robotic tests in the presence of large human crowds. On the technology side, we selected four different robotic platforms (electric wheelchair, humanoid, service robot and mobility solution) as baseline models for testing our navigation technologies; the fourth platform was added during the project to complement the first three. Each platform has been augmented with additional sensors and processors in anticipation of the new technologies and intelligent features we wish to test on these robots. In November 2018 we conducted a limited robot-crowd test using a wheelchair equipped with color-depth sensors to detect and track crowd behavior. The Pepper humanoid has been augmented with additional vision and inertial sensors to test autonomous localization and mapping features. The third robot, the Cuybot, has also passed the initial prototype phase and is now ready for additional sensor and processor augmentation. The crowd tracker (using color-depth sensing) is operational and functioning well, and will undergo further testing in the next few months. In the area of robotic safety and ethics, the team has completed a study of the risks and ethical issues that may result from operating robots among human crowds. The team also presented its findings and solicited feedback from the external advisors that constitute the Ethics & Safety Advisory Board.
Several new technologies beyond the state-of-the-art are being introduced:
1) Color-depth sensing via Intel's RealSense RGBD sensor: Sensing the environment using a combination of color and depth image data is not new. What is new in CROWDBOT is the use of this technology, along with a tracking algorithm, to identify and track human crowd behavior and motion profiles within the robot's sensor field of view.
2) Joint vision-inertial sensing for localization: A robot's current position, as well as its movement and positional change, is computed by tracking changes in markings on the ceiling.
3) Fusion of vision, proximity and contact sensors for fine-grained awareness of the near-field environment, which allows the robot to maneuver reactively around obstacles and avoid collisions with humans.
4) Fusion of contact, force and pressure sensors to detect touch, continuous contact, collision and the force involved during contact.
5) A simulation tool suite that models, simulates and analyzes both robot and human crowd motion profiles; a virtual-reality-based environment has been developed to give humans a real-time immersive experience among robots and crowds.
6) Since we are using three uniquely different robotic platforms as baseline models, our computer vision, sensing and navigation technologies are transferable to other robotic platforms with little or no modification required.
CROWDBOT baseline robot prototypes