Periodic Reporting for period 2 - iNavigate (Brain-inspired technologies for intelligent navigation and mobility)
Reporting period: 2023-06-01 to 2025-05-31
Towards Objective O2.1 ‘Provide a mechanistic insight into neural network computations that drive navigation’, efforts were made to improve recording and analysis capabilities. Secondees helped improve Neuropixels recording technology for use in freely moving mice (co-authoring https://doi.org/10.1126/sciadv.abq8657), built a new electrode for measuring neuromodulation, worked on a new test bed for EEG recordings during human anticipation, and designed a lab-based experiment aligned with DÜS to measure brain activity using fNIRS in participants who navigate obstacles. In the context of the fish work, secondees learned about the link between fish behavior and brain activity recorded with calcium imaging and electron microscopy, and evaluated these fish data for translation into navigation algorithms to be tested in autonomous robots.
Towards Objectives O3.1 'Approximate the network computations that drive animal navigation' and O3.2 'Define the influence of individual nodes in the network to the network computations', efforts were made to develop a new bio-inspired (human gaze-inspired) image-processing architecture based on DÜS. The architecture comprises two Artificial Neural Network (ANN) models: one that takes a camera frame as input and predicts gaze fixation positions, and another that predicts the participant's next move. In parallel, a virtual crowd simulator was developed in Unity and integrated with Python to recreate the DÜS study in silico. A reinforcement learning engine based on a Deep Q-Network has been implemented, employing sparse episodic rewards and greedy action selection to facilitate efficient learning and exploration (a minimal sketch of such a training loop is given below). So far, the preliminary results are encouraging, indicating that the engine can train a blank-slate navigator model with random initial weights to guide the simulated agent effectively toward the goal while avoiding collisions with the crowd.
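For illustration only, the sketch below shows a minimal PyTorch Deep Q-Network training loop of the kind described: a small Q-network, a replay buffer, a sparse episodic reward (non-zero only when the agent reaches the goal or collides with the crowd), and epsilon-greedy action selection, which is assumed here to correspond to the "greedy action selection" referred to above. The env argument stands in for a Python-side wrapper around the Unity crowd simulator; its reset()/step() interface, the reward scheme, the network size, and all hyper-parameters are illustrative assumptions rather than the project's actual implementation.

    import random
    from collections import deque

    import torch
    import torch.nn as nn
    import torch.optim as optim

    class QNetwork(nn.Module):
        """Small MLP mapping an observation vector to one Q-value per discrete move."""
        def __init__(self, obs_dim, n_actions):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(obs_dim, 128), nn.ReLU(),
                nn.Linear(128, 128), nn.ReLU(),
                nn.Linear(128, n_actions),
            )

        def forward(self, x):
            return self.net(x)

    def select_action(q_net, obs, n_actions, epsilon):
        # Epsilon-greedy: explore with probability epsilon, otherwise exploit.
        if random.random() < epsilon:
            return random.randrange(n_actions)
        with torch.no_grad():
            return int(q_net(obs.unsqueeze(0)).argmax(dim=1).item())

    def train(env, obs_dim, n_actions, episodes=500, gamma=0.99, batch_size=64):
        q_net = QNetwork(obs_dim, n_actions)            # blank-slate navigator, random initial weights
        target_net = QNetwork(obs_dim, n_actions)
        target_net.load_state_dict(q_net.state_dict())
        optimizer = optim.Adam(q_net.parameters(), lr=1e-3)
        buffer = deque(maxlen=50_000)                   # replay buffer of past transitions
        epsilon = 1.0

        for episode in range(episodes):
            obs = torch.as_tensor(env.reset(), dtype=torch.float32)
            done = False
            while not done:
                action = select_action(q_net, obs, n_actions, epsilon)
                # Assumed env interface; sparse reward, e.g. +1 at goal, -1 on collision, 0 otherwise.
                next_obs, reward, done = env.step(action)
                next_obs = torch.as_tensor(next_obs, dtype=torch.float32)
                buffer.append((obs, action, reward, next_obs, done))
                obs = next_obs

                if len(buffer) >= batch_size:
                    s, a, r, s2, d = zip(*random.sample(buffer, batch_size))
                    s, s2 = torch.stack(s), torch.stack(s2)
                    a = torch.tensor(a)
                    r = torch.tensor(r, dtype=torch.float32)
                    d = torch.tensor(d, dtype=torch.float32)

                    q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
                    with torch.no_grad():
                        q_next = target_net(s2).max(dim=1).values
                    target = r + gamma * (1.0 - d) * q_next   # no bootstrapping past the episode end
                    loss = nn.functional.smooth_l1_loss(q, target)

                    optimizer.zero_grad()
                    loss.backward()
                    optimizer.step()

            epsilon = max(0.05, epsilon * 0.995)        # decay exploration over episodes
            if episode % 10 == 0:
                target_net.load_state_dict(q_net.state_dict())
        return q_net

The periodically copied target network and the decaying exploration rate are standard DQN choices included to keep the example self-contained; the project's engine may handle these details differently.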
Towards Objective O4.1 ‘Implementation of brain-inspired control algorithms in robotic instruments’, the first universal controller was implemented on a TurtleBot fitted with an Asus Xtion Pro Live depth-sensing camera. The image-processing software was trained on the COCO dataset, so that it could recognize up to 91 different types of everyday objects, ranging from chairs to toys to people. The universal controller was programmed to alternate its navigation strategy between "tactile" and "investigate" modes (a minimal sketch of this switching logic is given below). In "tactile" mode, the robot would drive around the arena while looking for objects it recognized, mapping out the dimensions of the arena using odometry and tactile feedback. When it discovered a familiar object in a novel place, the controller would switch into "investigate" mode, and the robot would approach the object to localize it and add it to its internal map. Further levels of complexity included new objects partially or fully obscuring previously mapped objects; in the latter case, the vision system erred by merging them into one giant combined object. The second demonstrator is based on the XGO mini platform. These small quadruped robots are equipped with a Raspberry Pi CM4 computer, a built-in 5-megapixel camera, four three-jointed legs, and a four-jointed arm/gripper. Their software library contains a set of demonstrator programs for predetermined movements, such as standing, reaching, sitting, and walking. A first draft of a universal controller has been implemented, allowing the robot to switch between hunting for and tracking up flow.
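To make the "tactile"/"investigate" switching of the first demonstrator concrete, the sketch below gives a minimal, hypothetical Python version of such a two-mode decision rule. The Mode enum, the detection and map data structures, the 0.5 m novelty threshold, and the returned command strings are all illustrative assumptions; this is not the controller actually running on the TurtleBot.

    from enum import Enum, auto

    class Mode(Enum):
        TACTILE = auto()      # roam the arena, mapping it via odometry and tactile feedback
        INVESTIGATE = auto()  # approach a familiar object seen in a novel place

    def _far(a, b, threshold=0.5):
        # Treat two positions as distinct objects if they lie more than `threshold` m apart (assumed value).
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2 > threshold ** 2

    def next_mode(mode, detections, mapped_objects, arrived=False):
        """Decide the controller's next mode and a high-level motion command.

        detections     : list of (label, (x, y)) pairs from the COCO-trained detector
        mapped_objects : dict mapping label -> list of (x, y) positions already on the map
        arrived        : True once the robot has reached the object it is investigating
        Returns (next_mode, command, target_or_None); all interfaces here are hypothetical.
        """
        if mode is Mode.TACTILE:
            for label, pos in detections:
                # A recognized (familiar) object class at a position not yet on the map
                if all(_far(pos, known) for known in mapped_objects.get(label, [])):
                    return Mode.INVESTIGATE, "approach", (label, pos)
            return Mode.TACTILE, "wander", None

        # INVESTIGATE: keep approaching until arrival, then resume roaming.
        if arrived:
            return Mode.TACTILE, "wander", None
        return Mode.INVESTIGATE, "approach", None

    # Example: a chair detected far from the only chair already mapped triggers "investigate" mode.
    mode, command, target = next_mode(Mode.TACTILE,
                                      [("chair", (2.0, 1.5))],
                                      {"chair": [(0.2, 0.3)]})
    # -> (Mode.INVESTIGATE, "approach", ("chair", (2.0, 1.5)))

Localizing the object and adding it to the map on arrival, as well as the merging behaviour observed with occluded objects, would sit in the surrounding perception and mapping code rather than in this decision rule.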