
Neuromorphic Electronic Agents: from sensory processing to autonomous cognitive behavior

Periodic Reporting for period 2 - NeuroAgents (Neuromorphic Electronic Agents: from sensory processing to autonomous cognitive behavior)

Reporting period: 2019-03-01 to 2020-08-31

There has been tremendous progress in artificial neural networks and computational neuroscience. However, today's computers still cannot compete with their biological counterparts in many tasks.
Understanding how to build computing systems that can process sensory signals using the same computational principles as the brain will lead to a radically different computing technology that can have a huge impact on society.
In this project, we will combine recent advancements in machine learning with the latest developments in neuromorphic computing to design autonomous systems that can express robust cognitive behavior while interacting with the environment.
"Our work-plan follows three main research lines tightly interlinked: theory of neural computation (the ""mind"" research theme), neuromorphic electronic circuits and systems design (the ""brain"" research theme) and active sensing and processing on robotic or actuated mechanical platforms (the ""body"" research theme).

For the ""mind"" theme, we have developed signal processing and computational models, validated with software simulations of spiking neural networks, for processing natural signals using noisy and variable computing elements, such as the silicon neurons and silicon synapses implemented in our mixed-signal neuromorphic VLSI chips. In particular, we have been investigating population coding representation of signals by means of Winner-Take-All (WTA) networks, and ways to relate different variables among each other (for example to relate eye-coordinate angles ""A"" with head-coordinate ones ""B"", with respect to the coordinates of a target in visual space ""C""). A simple relationship such as ""A+B=C"" or ""A=C-B"", where A, B, and C are noisy signals that can change continuously in real-time can be computed on-line by a neural processor that comprises 3 WTA networks bidirectionally coupled via an intermediate population of hidden neurons. This 3-way network of relations (NoR) is an extremely versatile computational primitive as it allows processing both continuous-valued sensory signals (e.g. measured from a silicon retina) and abstract variables or symbols (e.g. provided by a robotic arm motor encoder). In addition to the theoretical and software developments, we validated the model using existing neuromorphic processor chips in our lab that had been designed within the context of the previous NeuroP ERC starting grant. This work, that had already been demonstrated in a live demo running in real-time on neuromorphic hardware at the 2018 Cognitive Computing Workshop, has been further extended with data analysis to quantify its precision (e.g. on the number of levels or values that the A, B, C variables can support), its robustness to noise, and its ""cue integration"" and ""inference"" properties in case of partial or missing information in the population coding representation of one or more variable values.

The ""brain"" theme has been very successful: in addition to developing the SW framework for configuring neuromorphic processor chips and doing experiments with them, a new neuromorphic processor chip (DYNAP-SE2) was designed, significantly extending the state-of-the-art, represented by the previous generation (DYNAP-SE) fabricated during the NeuroP project.
The DYNAP-SE2 is a mixed-signal Spiking Neural Network (SNN) processor. Based on design principles taken from biological nervous systems, it uses analog signal processing and a digital, event-based, asynchronous communication scheme, which ensures very low latency.
The chip has no digital clock (it is a clock-free design) and runs in native real time. The real-time nature of processing with asynchronous circuits, combined with weak-inversion analog circuit design methods, gives the DYNAP-SE2 very high energy efficiency.
Each DYNAP-SE2 chip has 1024 neurons distributed over 4 individually configurable neural cores, connected by a hierarchical routing grid. In a modular fashion, a DYNAP-SE2 chip can be connected to others in a 7x7 arrangement, supporting networks of up to 230k all-to-all connected neurons.
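As a purely illustrative aside, the sketch below shows how a neuron in such a multi-chip, multi-core system can be identified by a hierarchical (chip, core, neuron) address and packed into a single routing tag. The field layout and packing are assumptions made for the example, not the actual DYNAP-SE2 address format.

```python
from dataclasses import dataclass

CORES_PER_CHIP = 4
NEURONS_PER_CORE = 256           # 4 cores x 256 neurons = 1024 neurons per chip

@dataclass(frozen=True)
class NeuronAddress:
    chip: int     # position of the chip in the multi-chip arrangement
    core: int     # core index within the chip (0..3)
    neuron: int   # neuron index within the core (0..255)

    def flatten(self) -> int:
        """Pack the hierarchical address into a single integer tag (illustrative)."""
        return (self.chip * CORES_PER_CHIP + self.core) * NEURONS_PER_CORE + self.neuron

    @staticmethod
    def unflatten(tag: int) -> "NeuronAddress":
        """Recover the (chip, core, neuron) coordinates from a flat tag."""
        neuron = tag % NEURONS_PER_CORE
        core = (tag // NEURONS_PER_CORE) % CORES_PER_CHIP
        chip = tag // (NEURONS_PER_CORE * CORES_PER_CHIP)
        return NeuronAddress(chip, core, neuron)

# Round-trip example: neuron 42 on core 2 of chip 5.
addr = NeuronAddress(chip=5, core=2, neuron=42)
assert NeuronAddress.unflatten(addr.flatten()) == addr
```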
The chip has returned from fabrication and is currently being characterized to verify that all circuits work as expected.

The ""body"" theme has also been very active and successful: an interface between the DYNAP-SE chip with silicon retina vision sensors (Dynamic Vision Sensors and ICub encoders) and motors (e.g. robotic arm) was developed and successfully tested. This involved building printed circuit boards interfaced with Field-Programmable Gate Array (FPGA) devices, programming the firmware on the FPGA, and developing the code to create, transmit and process the spike trains produced by both neuromorphic sensors and processors. The implementation of the hardware interface was successfully completed and we are currently focusing on emulating the computational building block of the architecture, i.e. the 3-way relational network, on the neuromorphic actuated platform.
In addition, we developed an active-vision binocular stereo setup with two DAVIS silicon-retina vision sensors controlled by pan-tilt units and interfaced to multiple neuromorphic processors. In this setup, the two neuromorphic cameras are separated by a baseline distance similar to the inter-pupillary distance of humans, and spike events are sent separately from both retinas to an FPGA. The FPGA samples the events from both sensors and multiplexes them into a single output stream that preserves the temporal information. The temporal information from both sensors is then processed by a spiking neural network model of stereo correspondence implemented in neuromorphic hardware. Current results show that the model can distinguish objects lying at the zero-disparity plane from objects closer to or farther from it. The goal is to activate the motors and close the loop, controlling vergence movements so that the cameras keep objects at the zero-disparity plane.
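The following sketch illustrates, in strongly simplified form, the closed-loop idea just described: temporally coincident events from the left and right retinas vote for a dominant horizontal disparity, whose sign and magnitude then drive a proportional vergence command. The coincidence window, gain, sign convention, and event format are assumptions for the example, not the parameters of the actual spiking stereo-correspondence model.

```python
import numpy as np

def dominant_disparity(left, right, window_us=1000, max_disp=32):
    """left/right: arrays of (timestamp_us, x, y) events from the two retinas.
    Each temporally coincident pair on the same row votes for the horizontal
    shift that aligns it; the most-voted shift is the dominant disparity."""
    votes = np.zeros(2 * max_disp + 1)
    for tl, xl, yl in left:
        for tr, xr, yr in right:
            if abs(tl - tr) < window_us and yl == yr and abs(xl - xr) <= max_disp:
                votes[xl - xr + max_disp] += 1
    return int(np.argmax(votes)) - max_disp

def vergence_step(disparity, gain=0.1):
    """Proportional vergence command: converge for positive disparity (object
    assumed nearer than the fixation plane), diverge for negative disparity."""
    return gain * disparity

# Toy event streams: the same object seen shifted by ~4 pixels between the eyes.
left  = np.array([(100, 12, 5), (210, 13, 5), (320, 14, 6)])
right = np.array([(105,  8, 5), (215,  9, 5), (330, 10, 6)])
d = dominant_disparity(left, right)
print("dominant disparity:", d, "-> vergence command:", vergence_step(d))
```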
"So far, the progress beyond the state-of-the-art in the ""mind"" theme is represented by the first even NoR implemented on full custom mixed-signal neuromorphic hardware and its demonstration with a live real-time demo working robustly. The progress made with the framework developed to configure the setup, carry out control experiments, and do on-line data analysis, will allow us to apply such systems to the control of robotic actuators (e.g. robot arm for reaching to a desired position/angle).

From the ""brain"" theme, there was substantial progress done in the circuit and chip design activities. The circuits and systems implemented in the DYNAP-SE2 represent a level of complexity in multi-core mixed-signal neuromorphic circuit design never achieved so far. The activities are still in the circuit testing and verification phase. As soon as enough data is collected from this chip a manuscript will be written and submitted to a high-impact journal.

The ""body"" theme has produced an active vision stereo set-up that combines neuromorphic vision sensors and neuromorphic processors to test and validate (or invalidate) neuro-biological models of stereo perception. This is a highly interdisciplinary effort that combines multiple fields. Data is still being collected and parts of the set-up and experiments are still being developed. But these efforts promise to lead to results that go well beyond the state-of-the-art for hardware implementations of biologically inspired computational models of stereo vision."