Periodic Reporting for period 4 - NeuroAgents (Neuromorphic Electronic Agents: from sensory processing to autonomous cognitive behavior)
Reporting period: 2022-03-01 to 2023-08-31
Conventional and Artificial Intelligence (AI) computing technologies are affecting most aspects of our daily life. In particular, AI algorithms are extremely successful at extracting information from large amounts of data and produce impressive results in natural language processing, image processing and many other tasks. Still, a wide range of important problems remains unsolved, or is even unsolvable, using AI algorithms and the computing technologies that support them: these are mostly problems that require active, closed-loop interactions with the environment, with fast sensory-motor processing and decision making.
As more and more sensing technologies become available to enable real-time processing of real-world sensory data and interaction with the environment, understanding how to build autonomous electronic agents that can solve these problems, taking inspiration from the brain and from computational neuroscience methods, is an important goal. This was the main objective of the "NeuroAgents" project.
To accomplish this objective, we combined work on machine learning, spiking neural networks, and mixed-signal electronic circuits.
In doing so, we developed neuromorphic computing technologies that express robust cognitive behavior while interacting with the environment.
For the "mind" theme, we developed signal processing and computational models, validated with software simulations of spiking neural networks.
We used these models to understand how to process natural signals using noisy and variable computing elements, such as the silicon neurons and silicon synapses implemented in our mixed-signal neuromorphic VLSI chips.
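As an illustration of the kind of software simulation involved, the following Python sketch (illustrative only; the neuron model, parameter values, and mismatch levels are our assumptions, not the project's) simulates a population of leaky integrate-and-fire neurons whose time constants and firing thresholds vary from neuron to neuron, mimicking the device mismatch of analog silicon neurons.

```python
# Illustrative sketch (not project code): a population of leaky
# integrate-and-fire neurons whose time constants and thresholds are
# drawn from a distribution, mimicking device mismatch in analog
# silicon neurons.
import numpy as np

rng = np.random.default_rng(0)
n_neurons, dt, t_sim = 100, 1e-4, 0.5          # 100 neurons, 0.1 ms step, 0.5 s
tau  = rng.normal(20e-3, 4e-3, n_neurons)      # ~20% mismatch on time constant
v_th = rng.normal(1.0, 0.1, n_neurons)         # ~10% mismatch on threshold

v = np.zeros(n_neurons)
spike_counts = np.zeros(n_neurons, dtype=int)
i_in = 1.2                                     # identical input to all neurons

for _ in range(int(t_sim / dt)):
    v += dt / tau * (-v + i_in)                # leaky integration
    fired = v >= v_th
    spike_counts[fired] += 1
    v[fired] = 0.0                             # reset after spike

# Despite identical input, firing rates vary across the population,
# as they do across fabricated silicon neurons.
print(spike_counts.mean() / t_sim, spike_counts.std() / t_sim)
```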
In particular, we used population coding to represent signals and "variables" by means of Winner-Take-All (WTA) networks, and found ways to relate different variables to each other (for example, relating eye-coordinate angles "A" and head-coordinate angles "B" to the coordinates of a target in visual space "C").
We showed how a simple relationship such as "A+B=C" or "A=C-B", where A, B, and C are noisy signals that can change continuously in real time, can be computed on-line by a neural processor comprising three WTA networks bidirectionally coupled via an intermediate population of hidden neurons. This 3-way network of relations (NoR) is an extremely versatile computational primitive, as it allows processing of both continuous-valued sensory signals (e.g. measured from a silicon retina) and abstract variables or symbols (e.g. provided by a robotic arm motor encoder).
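As a simplified illustration of the NoR principle, the following rate-based Python sketch (our own toy example; the population sizes, tuning widths, and readout are illustrative assumptions, not the hardware implementation) encodes A and B as population activity bumps, couples them through a hidden layer indexed by (a, b), and reads out C along the constraint A + B = C.

```python
# Simplified, rate-based sketch of a 3-way network of relations (NoR):
# populations A, B, C encode values as activity bumps, a hidden layer
# indexed by (a, b) couples them, and the coupling enforces A + B = C.
# This illustrates the principle only; the project implementation used
# spiking WTA networks on neuromorphic hardware.
import numpy as np

n = 64                                   # neurons per population
vals = np.linspace(0.0, 1.0, n)          # preferred value of each neuron

def encode(x, sigma=0.05):
    """Population activity bump centred on value x."""
    r = np.exp(-(vals - x) ** 2 / (2 * sigma ** 2))
    return r / r.sum()

def decode(r):
    """Population-vector readout."""
    return float(np.sum(r * vals) / np.sum(r))

a, b = 0.3, 0.4                          # e.g. eye angle and head angle
A, B = encode(a), encode(b)

# Hidden population: one unit per (a, b) pair, driven by coincident A and B.
H = np.outer(A, B)

# Project the hidden layer onto C along the constraint a + b = c.
C = np.zeros(2 * n - 1)
for i in range(n):
    for j in range(n):
        C[i + j] += H[i, j]
c_vals = np.linspace(0.0, 2.0, 2 * n - 1)
c_hat = float(np.sum(C * c_vals) / np.sum(C))

print(f"decoded A={decode(A):.2f}, B={decode(B):.2f}, inferred C={c_hat:.2f}")
# Prints approximately C = 0.70, i.e. A + B.
```

Because the coupling in the actual network is bidirectional, the same hidden population can equally be used to infer A = C - B when C and B are given.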
We demonstrated the validity of this approach with a network, implemented on a neuromorphic processor chip, that was used to control gaze and head position of the iCub humanoid robot.
In the "brain" theme, in addition to developing the framework for configuring neuromorphic processor chips we fabricated a new chip (DYNAP-SE2) which significantly extends the state-of-the-art.
The DYNAP-SE2 is a mixed-signal Spiking Neural Network (SNN) processor. Based on design principles taken from biological nervous systems, it combines analog signal processing with a digital, event-based, asynchronous communication scheme that ensures very low latency.
The chip has a clock-free design and runs in native real time. The real-time nature of processing with asynchronous circuits, combined with weak-inversion analog circuit design methods, gives the DYNAP-SE2 very high energy efficiency.
Each DYNAP-SE2 chip has 1024 neurons distributed over 4 individually configurable neural cores, connected by a hierarchical routing grid. A DYNAP-SE2 chip can be connected to others in a modular 7x7 arrangement, thus supporting networks of up to 230k all-to-all connected neurons.
We demonstrated the usefulness of this chip in a clinically relevant task: detecting epilepsy markers (high-frequency oscillations, HFOs) that can help identify the epileptogenic zone in patients who need to undergo surgery.
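As a toy illustration of the kind of processing involved (not the project's clinical detector; the synthetic signal, filter band, and thresholds are illustrative assumptions), the following Python sketch bandpass-filters an iEEG-like trace into the HFO band, converts it into up/down spike events with a delta modulator of the kind used in neuromorphic front-ends, and flags windows with dense spiking as HFO candidates.

```python
# Toy illustration (not the project's clinical detector): bandpass an
# iEEG-like trace into the HFO band, delta-modulate it into up/down
# spike events, and flag windows with dense spiking as HFO candidates.
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 2000.0                                        # sampling rate (Hz)
t = np.arange(0, 2.0, 1.0 / fs)
rng = np.random.default_rng(1)

# Synthetic trace: slow background activity plus one short 250 Hz burst.
x = 0.5 * np.sin(2 * np.pi * 8 * t) + 0.05 * rng.standard_normal(t.size)
burst = (t > 1.0) & (t < 1.05)
x[burst] += 0.8 * np.sin(2 * np.pi * 250 * t[burst])

# Bandpass into a typical HFO band (80-500 Hz).
sos = butter(4, [80, 500], btype="bandpass", fs=fs, output="sos")
xf = sosfiltfilt(sos, x)

# Delta modulation: emit an UP/DOWN spike each time the signal moves
# by more than `delta` from the last reference level.
delta, ref, spikes = 0.05, xf[0], []
for i, v in enumerate(xf):
    while v - ref > delta:
        ref += delta
        spikes.append((t[i], +1))
    while ref - v > delta:
        ref -= delta
        spikes.append((t[i], -1))

# Candidate HFOs: 50 ms windows containing many spikes.
spike_times = np.array([s[0] for s in spikes])
win = 0.05
for w0 in np.arange(0, 2.0, win):
    count = np.sum((spike_times >= w0) & (spike_times < w0 + win))
    if count > 200:                                # illustrative threshold
        print(f"candidate HFO in window {w0:.2f}-{w0 + win:.2f} s ({count} spikes)")
```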
The "body" theme was also very successful: we demonstrated areal-time depth perception using a setup with two silicon retina vision sensors connected to a DYNAP-SE chip. This involved building printed circuit boards interfaced with Field-Programmable Gate Array (FPGA) devices, programming the firmware on the FPGA, and developing the code to create, transmit and process the spike trains produced by both neuromorphic sensors and processors.
In this active-vision binocular stereo setup, the two neuromorphic cameras were separated by a baseline distance similar to the human pupillary distance, and each retina sent its spike events separately to an FPGA. The FPGA sampled the events from both sensors and produced a single output stream that preserved the temporal information from both.
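In principle, this merge amounts to interleaving two timestamped event streams into a single time-ordered stream while tagging each event with its source, as in the following minimal Python sketch (the event fields and values are made up for illustration).

```python
# Minimal sketch of the stream merge the FPGA performs in principle:
# interleave timestamped events from the two retinas into one stream
# ordered by time, tagging each event with its source. Event fields
# (timestamp, x, y, polarity) follow the usual event-camera convention;
# the data here are made up.
import heapq

left_events  = [(1005, 12, 40, 1), (1020, 13, 40, 0), (1042, 14, 41, 1)]
right_events = [(1001, 30, 40, 1), (1019, 31, 40, 1), (1050, 32, 41, 0)]

merged = heapq.merge(
    ((t, "L", x, y, p) for t, x, y, p in left_events),
    ((t, "R", x, y, p) for t, x, y, p in right_events),
    key=lambda e: e[0],                 # order by timestamp only
)

for t, source, x, y, p in merged:
    print(f"{t} us  {source}  pixel=({x},{y})  polarity={p}")
```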
This information was then processed by a spiking neural network model of stereo-correspondence implemented in neuromorphic hardware. This setup allowed us to validate neuroscience hypotheses and to demonstrate active vision in a complex multi-chip setup.
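The matching principle behind such stereo-correspondence networks can be sketched as temporal coincidence detection: left and right events that occur on the same row within a short time window are treated as candidate matches, and their column difference gives the disparity, from which depth follows. The following plain-Python sketch illustrates only this principle (the events and the coincidence window are made-up values), not the spiking network used in the project.

```python
# Hedged sketch of the coincidence principle behind event-based stereo
# matching: left/right events on the same row within a short time window
# are treated as candidate matches and their column difference gives the
# disparity. The project used a spiking network on neuromorphic hardware;
# this shows only the underlying idea, with made-up events.
from collections import defaultdict

COINCIDENCE_US = 500            # temporal matching window (illustrative)

left_events  = [(1000, 10, 20), (1200, 11, 20), (5000, 40, 33)]   # (t_us, x, y)
right_events = [(1100, 18, 20), (5100, 47, 33), (9000, 60, 50)]

disparities = defaultdict(list)
for tl, xl, yl in left_events:
    for tr, xr, yr in right_events:
        if yl == yr and abs(tl - tr) <= COINCIDENCE_US:
            disparities[(xl, yl)].append(xr - xl)

for (x, y), d in disparities.items():
    print(f"left pixel ({x},{y}): candidate disparities {d}")
```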
From the "brain" theme, we showed substantial progress beyond the state-of-the-art by designing a working full-custom mixed-signal neuromorphic processor and demonstrating its application in bio-signal classification tasks and clinically relevant medical applications.
The "body" theme has produced an active vision stereo set-up that combines neuromorphic vision sensors and neuromorphic processors to test and validate (or invalidate) neuro-biological models of stereo perception. This is a highly interdisciplinary effort that combines multiple fields. Collected data from the setup has demonstrated how we can build complex multi-chip neuromorphic setups that work reliably and robustly in vision tasks.
The technologies and demonstrations developed within this project are important for society, as they can enable applications ranging from ultra-low-power biomedical signal processing to environmental monitoring and complex tasks involving human-robot collaboration.