
Brain-inspired technologies for intelligent navigation and mobility

Periodic Reporting for period 2 - iNavigate (Brain-inspired technologies for intelligent navigation and mobility)

Reporting period: 2023-06-01 to 2025-05-31

When sensory information is uncertain or surprising, autonomous machines often fail, sometimes with fatal consequences. By contrast, humans and animals can adapt to sensory uncertainty or surprises, at least when they are healthy and/or sensorily unimpaired. The goal of iNavigate is therefore to infer the principles of surprise-based navigation from biological systems, specifically humans, rodents, and fish, and to translate these principles into algorithms that can be implemented and tested in mobile robots and bionic devices, thereby providing novel technological solutions for autonomous robotic mobility. To achieve this goal, academic and non-academic partners (behavioral scientists, neuroscientists, computer scientists, and roboticists based in the EU/AC and the USA) conducted joint research on intelligent navigation. We conclude that it is possible to infer the principles of navigation from biological systems and that computer algorithms can simulate aspects of these principles, with different degrees of accuracy under different conditions.
Objective O1.1 was to identify how organisms across different evolutionary scales adapt to sensory uncertainty during navigation. To study humans, we designed ‘The Downtown Üsküdar Study’ or DÜS – named after the location in Istanbul where data collection takes place – with the goal of extracting the best predictors of successful navigation in a real-world environment that harbors sensory uncertainty. Preliminary analyses revealed gender differences in crowd navigation, with implications for the design of robots simulating female or male biology [Haleem, B.A., Aydoğan, Y., Sever, F.Z., Alomar, A., Temiz, K.O., Akanyeti, O., Schulz, D. (2025). Gender differences in crowd navigation. ISBCS, Istanbul, Turkey.]. Among the many devices we are using to measure navigation is a lightweight, wearable lux meter based on Acrome’s SMD Ambient Light Sensor Module, which measures illumination in real time. To synchronize real-time data recorded from different devices, we developed a framework that aligns video streams using audio-based methods and IMU data using an entropy-based approach [Aydoğan, Y., Haleem, B., Schulz, D., Akanyeti, O. (2025). doi: 10.1109/DSP65409.2025.11074868]. Rodent movements were tracked with DeepLabCut, and an advanced motion tracking method was developed to estimate the skeleton in every video frame of a trial (https://doi.org/10.1101/2024.09.10.612344). Custom scientific equipment was built, such as a novel rat head fixation system that enables head immobilization for brain recordings while the rat navigates on a treadmill, and a custom miniature speed-sensing system for use with fish in aquatic environments. Initial results show that fish adjust their school structure in 3D based on group size and flow speed [Liao, J. and Mukherjee, I. (2025). The effect of flow conditions and group size on the 3-D dynamics of a schooling carangid fish. SICB, Atlanta, USA.]. We will upload our datasets as they become available at: https://zenodo.org/communities/inavigate.
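To illustrate the general idea behind audio-based alignment of two video streams (this is not the published framework itself, only a minimal sketch), one can estimate the time offset between two recordings by cross-correlating their audio envelopes. The file names and the envelope/correlation choices below are illustrative assumptions.

```python
# Minimal sketch: estimate the offset between two recordings from their audio
# tracks via cross-correlation. NOT the project's published framework; the
# file names and processing choices are assumptions for illustration only.
import numpy as np
from scipy.io import wavfile
from scipy.signal import correlate, correlation_lags

def estimate_offset_seconds(wav_a: str, wav_b: str) -> float:
    """Estimate how many seconds stream B lags behind stream A."""
    rate_a, audio_a = wavfile.read(wav_a)
    rate_b, audio_b = wavfile.read(wav_b)
    assert rate_a == rate_b, "resample first if the sampling rates differ"

    # Collapse to mono and use the absolute amplitude envelope, which is more
    # robust to microphone differences than the raw samples.
    env_a = np.abs(audio_a if audio_a.ndim == 1 else audio_a.mean(axis=1))
    env_b = np.abs(audio_b if audio_b.ndim == 1 else audio_b.mean(axis=1))
    env_a = (env_a - env_a.mean()) / (env_a.std() + 1e-12)
    env_b = (env_b - env_b.mean()) / (env_b.std() + 1e-12)

    # The lag that maximizes the cross-correlation is the offset estimate.
    corr = correlate(env_a, env_b, mode="full")
    lags = correlation_lags(len(env_a), len(env_b), mode="full")
    return lags[np.argmax(corr)] / rate_a

# offset = estimate_offset_seconds("camera1.wav", "camera2.wav")
# print(f"stream B starts {offset:.3f} s after stream A")
```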
Towards Objective O2.1 ‘Provide a mechanistic insight into neural network computations that drive navigation’, efforts were made to improve recording and analysis capabilities. Secondees helped improve Neuropixels recording technology for use in freely moving mice and co-authored https://doi.org/10.1126/sciadv.abq8657, built a new electrode for measuring neuromodulation, worked on a new test bed for EEG recordings during human anticipation, and designed a lab-based experiment aligned with DÜS to measure brain activity using fNIRS in participants navigating obstacles. In the context of the fish work, secondees learned how fish behavior is linked to brain activity recorded with calcium imaging and electron microscopy, and evaluated these fish data for translation into navigation algorithms to be tested in autonomous robots.
Towards Objectives O3.1 'Approximate the network computations that drive animal navigation' and O3.2 'Define the influence of individual nodes in the network to the network computations', efforts were made to develop a new bio-inspired (human gaze-inspired) image processing architecture based on DÜS. The architecture comprises two Artificial Neural Network (ANN) models: one takes a camera frame as input and predicts gaze fixation positions, and the other predicts the participant’s next move. In parallel, a virtual crowd simulator was developed in Unity and integrated with Python to recreate the DÜS study in silico. A reinforcement learning engine based on a Deep Q-Network has been implemented, which employs sparse episodic rewards and greedy action selection methods to facilitate efficient learning and exploration. The preliminary results so far are encouraging, indicating that the engine can train a blank-slate navigator model with random initial weights to effectively guide the simulated agent toward the goal while avoiding collisions with the crowd.
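As a point of reference for the kind of learning engine described above, the following is a minimal sketch of a Deep Q-Network update loop. It assumes a PyTorch implementation, an ε-greedy exploration scheme, and illustrative state/action dimensions; none of these are confirmed details of the project's engine.

```python
# Minimal Deep Q-Network sketch with epsilon-greedy action selection.
# STATE_DIM, N_ACTIONS, the network shape, and all hyperparameters are
# illustrative assumptions, not the project's actual configuration.
import random
from collections import deque

import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS = 16, 5   # assumed: crowd/gaze features -> movement choices

q_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)          # experience replay buffer
GAMMA, EPSILON, BATCH = 0.99, 0.1, 64  # discount, exploration rate, batch size
# during simulation: replay.append((state, action, reward, next_state, done))

def select_action(state: torch.Tensor) -> int:
    """Epsilon-greedy: explore with probability EPSILON, otherwise exploit."""
    if random.random() < EPSILON:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(q_net(state).argmax())

def train_step() -> None:
    """One gradient step on a random mini-batch from the replay buffer."""
    if len(replay) < BATCH:
        return
    batch = random.sample(replay, BATCH)
    s, a, r, s2, done = (torch.stack(x) if torch.is_tensor(x[0]) else torch.tensor(x)
                         for x in zip(*batch))
    q = q_net(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r + GAMMA * q_net(s2).max(1).values * (1 - done.float())
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```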
Towards Objective O4.1 ‘Implementation of brain-inspired control algorithms in robotic instruments’, the first universal controller was implemented on a TurtleBot fitted with an Asus Xtion Pro Live depth-sensing camera. The image-processing software was trained on the COCO dataset, so that it could recognize up to 91 types of everyday objects, ranging from chairs to toys to people. The universal controller was programmed to alternate its navigation strategy between "tactile" and "investigate" modes. In “tactile” mode, the robot would move around the arena while looking for objects it recognized, mapping out the dimensions of the arena using odometry and tactile feedback. When it discovered a familiar object in a novel place, the controller would switch into "investigate" mode, and the robot would approach the object to localize it and add it to its internal map. Other levels of complexity included new objects partially or fully obscuring previously mapped objects; in the latter case, the vision system erroneously merged the two into one large combined object. The second demonstrator is based on the XGO mini platform. These small quadruped robots are equipped with a Raspberry Pi CM4 computer, a built-in 5-megapixel camera, four three-jointed legs, and a four-jointed arm/gripper. Their software library contains a set of demonstrator programs for predetermined movements, such as standing, reaching, sitting, and walking. A first draft of a universal controller has been implemented, allowing the robot to switch between hunting and up-flow tracking behaviors.
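The mode-switching logic of such a controller can be sketched as a simple two-state loop. The sketch below uses placeholder sensing and motion callables rather than the actual TurtleBot/ROS interfaces used in the project, and the "novel place" distance threshold is an assumed parameter.

```python
# Minimal sketch of a two-mode controller: explore in "tactile" mode, switch
# to "investigate" mode when a recognised object appears in an unexpected
# place. The sensor/motion interfaces are placeholder callables, not the
# project's actual robot code.
from dataclasses import dataclass, field

@dataclass
class UniversalController:
    detect_objects: callable          # returns [(label, position), ...] from the camera
    drive_exploration_step: callable  # one step of wandering / wall-following
    approach: callable                # move towards a given position
    known_map: dict = field(default_factory=dict)  # label -> last known position
    mode: str = "tactile"
    target: tuple = None

    def step(self) -> None:
        if self.mode == "tactile":
            self.drive_exploration_step()
            for label, position in self.detect_objects():
                known = self.known_map.get(label)
                # A familiar object in a novel place triggers investigation.
                if known is None or self._far(known, position):
                    self.mode, self.target = "investigate", (label, position)
                    break
        else:  # "investigate" mode
            label, position = self.target
            self.approach(position)
            self.known_map[label] = position   # localize and add to the map
            self.mode, self.target = "tactile", None

    @staticmethod
    def _far(p, q, threshold: float = 0.5) -> bool:
        # Assumed threshold (in metres) for deciding a place is "novel".
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5 > threshold
```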
We have launched several major research lines under iNavigate, including a human study that aims to identify the features of surprise-based navigation in a crowded environment and to translate these features into computer algorithms. We have further built behavior tracking and brain recording setups for use with rodents and fish, and built robotic platforms to test universal controllers. We have formed networks between academic and non-academic organizations, which allowed us to take on new and different perspectives on all matters of innovation. iNavigate has contributed to skillset development, including core research skills as well as know-how in research ethics, intellectual property, commercialization, entrepreneurship and business plan development, social impact awareness, public speaking, and scientific/technical writing.