Embodied Cognitive Neuromorphic Technology

Periodic Reporting for period 1 - ECogNeT (Embodied Cognitive Neuromorphic Technology)

Reporting period: 2016-04-01 to 2018-03-31

The ECogNeT project aimed to develop a new type of intelligent sensorimotor device by realising neural-dynamic models of embodied cognition in neuromorphic hardware. To use neuromorphic hardware for systems that can solve perceptual, motor, and cognitive tasks, the project developed cognitive architectures that integrate elementary cognitive processes in a coherent computational framework. In particular, architectures for spatial scene representation and sequence learning based on Dynamic Neural Fields were elaborated. These cognitive architectures were successfully integrated with the sensors and motors of robotic vehicles, realising the building blocks for more complex cognitive architectures in hardware. This is a crucial step towards a new generation of low-power, compact smart devices capable of working in real-world settings in order to advance, e.g. prosthetic systems, smart environments, and assistive robots. Enabling these systems to react intelligently to changes in the environment and to plan action sequences accordingly, in real time and at a low power budget, is an important objective for society, since, as such devices grow in number, keeping their overall power consumption from scaling dramatically becomes essential.
In the project, three analog neuromorphic devices -- ROLLS, Dynap-se, and the SCAMP smart camera -- were interfaced with three robotic vehicles -- Pushbot, Omnibot, and Khepera. In particular, a software framework was developed to, on the one hand, configure the neuromorphic devices to realise different cognitive neuromorphic architectures and, on the other hand, stream sensory inputs to the silicon neurons on chip and read out the spiking activity of the neurons to drive the robots' motors. Cognitive architectures for sequence learning and spatial scene representation were realised in the project both in spiking network simulations and in hardware. For testing the architectures, a navigation scenario was selected in which robots had to navigate an unknown environment, avoiding obstacles, approaching targets, and building a representation of this environment.
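To make the structure of such a closed sensorimotor loop concrete, the following minimal Python sketch illustrates the three steps it involves: configuring on-chip connectivity, streaming sensor events to silicon neurons, and reading out spikes to set motor speeds. All names here (ChipInterface, read_dvs_events, set_wheel_speeds) and the rate-based chip model are illustrative placeholders under our own assumptions, not the project's actual software framework or device drivers.

import numpy as np

class ChipInterface:
    """Stand-in for a neuromorphic device driver (e.g. for ROLLS or Dynap-se)."""
    def __init__(self, n_neurons, weights):
        self.n = n_neurons
        self.w = weights                    # would be programmed into on-chip synapses

    def step(self, input_events):
        # Crude rate-based software stand-in for one cycle of the analog dynamics.
        drive = self.w @ input_events
        return (np.random.rand(self.n) < np.clip(drive, 0.0, 1.0)).astype(float)

def read_dvs_events(n_inputs):
    """Placeholder: would return an event histogram from the DVS retina."""
    return (np.random.rand(n_inputs) < 0.05).astype(float)

def set_wheel_speeds(left, right):
    """Placeholder: would send speed commands to the robot's wheel motors."""
    print(f"left={left:.2f}  right={right:.2f}")

n_in, n_out = 64, 16
chip = ChipInterface(n_out, weights=np.random.rand(n_out, n_in) * 0.1)
left_pop, right_pop = slice(0, 8), slice(8, 16)   # two motor populations

for _ in range(5):                                # a few cycles of the loop
    spikes = chip.step(read_dvs_events(n_in))
    set_wheel_speeds(spikes[left_pop].mean(), spikes[right_pop].mean())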

First, a neural-dynamic architecture for sequence learning was implemented, in which sequences of states or events are stored in plastic connections between neuronal populations on the ROLLS chip. The sequence to be learned was perceived through a robotic camera (a neuromorphic artificial retina, the Dynamic Vision Sensor, DVS). The interface that linked the DVS, the robot, and the neuromorphic chip was realised on the miniature computing board Parallella. We demonstrated the functionality of this system in a closed sensorimotor loop in an exemplary scenario, in which one robotic agent "teaches" a sequence of turns to a second robot. In a spiking neural network simulation, we explored an extension of the sequence-learning architecture towards complex, hierarchical sequences, taking inspiration from birdsong learning.
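As a conceptual illustration of how a sequence can be stored in plastic connections between neuronal populations and later replayed, the rate-based Python sketch below strengthens a directed connection whenever one state follows another during the demonstration. This is an abstraction for illustration only, not the on-chip plasticity rule used on ROLLS.

import numpy as np

n_states = 4                        # e.g. four turn directions shown by the "teacher" robot
W = np.zeros((n_states, n_states))  # W[j, i]: plastic connection from state i to state j

def observe_sequence(sequence, lr=1.0):
    """Hebbian-style strengthening of the transitions seen during demonstration."""
    for prev, nxt in zip(sequence[:-1], sequence[1:]):
        W[nxt, prev] += lr

def replay(start, steps):
    """Replay the stored sequence by following the strongest learned transition."""
    state, out = start, [start]
    for _ in range(steps):
        state = int(np.argmax(W[:, state]))
        out.append(state)
    return out

observe_sequence([0, 2, 1, 3])      # demonstrated sequence of turns
print(replay(0, 3))                 # -> [0, 2, 1, 3]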

In the second line of work, we developed architectures for spatial representations of an environment. We implemented different components of a neuronal architecture for simultaneous localisation and mapping (SLAM) on a neuromorphic device, while simulating the overall architecture in the spiking neural network simulator Brian2. Moreover, in simulation, we developed a new structure for elementary behaviours, which allows the robot to autonomously detect and analyse, in a "hypothesis-testing" network, a mismatch between the perceived objects and the ones previously stored. This new structure is a key component for autonomous, online learning in neuromorphic hardware.
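The "hypothesis-testing" idea can be illustrated with a small Python sketch: a mismatch signal is raised when the currently perceived activity does not overlap with the representation held in memory. The Gaussian population code and the 0.5 threshold are illustrative assumptions, not the parameters of the project's network.

import numpy as np

space = np.linspace(0.0, 100.0, 101)     # one behavioural dimension (e.g. heading, in degrees)

def population_code(center, width=5.0):
    """Gaussian bump of population activity representing a value."""
    return np.exp(-0.5 * ((space - center) / width) ** 2)

stored = population_code(30.0)           # object representation kept in working memory
perceived_same = population_code(32.0)   # object re-perceived roughly where expected
perceived_new = population_code(70.0)    # object perceived somewhere else

def mismatch(stored, perceived, threshold=0.5):
    """Normalised overlap below threshold signals a mismatch."""
    overlap = np.sum(stored * perceived) / np.sum(stored * stored)
    return overlap < threshold

print(mismatch(stored, perceived_same))  # False: consistent with the stored hypothesis
print(mismatch(stored, perceived_new))   # True: would trigger autonomous (re-)learning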

We also developed methods to tune the parameters of neuromorphic chips autonomously, using intrinsic plasticity for Dynamic Neural Fields and evolutionary optimisation techniques. During this work, a number of principles were elaborated that allow cognitive architectures to be realised in neuromorphic hardware that uses analog neurons and synapses. These principles are: (1) using a population code to represent behavioural variables; (2) using different numbers of randomly assigned connections between populations to emulate weights of different strengths in hardware that has a limited number of synaptic weight values; (3) using Dynamic Neural Field connectivity to stabilise spatial representations against sensory and neuronal noise.
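Principle (3) can be illustrated with a minimal Dynamic Neural Field simulation in Python: local excitatory and broader inhibitory connectivity lets a peak of activation, once induced by a stimulus, persist despite noise and even after the stimulus is removed. The time constant, kernel widths, and noise level below are illustrative choices, not the tuned parameters of the hardware implementations.

import numpy as np

n = 100
x = np.arange(n)
d = x[:, None] - x[None, :]
# Lateral interaction kernel: local excitation minus broader inhibition.
kernel = 2.0 * np.exp(-d**2 / (2 * 4.0**2)) - 0.9 * np.exp(-d**2 / (2 * 12.0**2))

def sigmoid(u, beta=4.0):
    return 1.0 / (1.0 + np.exp(-beta * u))

h = -2.0                                    # resting level of the field
u = h * np.ones(n)                          # field activation
stimulus = 4.0 * np.exp(-(x - 50)**2 / (2 * 3.0**2))
tau, dt = 10.0, 1.0

for t in range(300):
    noise = 0.5 * np.random.randn(n)        # sensory and neuronal noise
    inp = stimulus if t < 150 else 0.0      # stimulus removed halfway through
    u += dt * (-u + h + kernel @ sigmoid(u) + inp + noise) / tau

print(int(np.argmax(u)))                    # the peak persists near position 50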

Thus, the project achieved the following cornerstones for the further development of neuromorphic cognitive robotic systems:
1. A software framework for configuring neuromorphic hardware to realise neuronal architectures for sequence learning and space representation.
2. An interface between neuromorphic hardware and robotic sensors (e.g. the neuromorphic camera DVS, an Inertial Measurement Unit (IMU), a gyroscope, wheel encoders, and microphones) and motors (the wheels of a vehicle, for controlling speed and turning).

Results of the project were disseminated at several conferences (ISCAS 2017, RSS 2017, ICDSC 2016), workshops and symposia (e.g. the Neuroscience Center Zurich Symposium, the Swiss Society for Neuroscience Symposium, the Robotics in the 21st Century Workshop), public events (e.g. Brain Fair 2017, the Alpbach Technology Forum 2016, a microTalk at CSEM, Switzerland), as well as at the Capo Caccia Workshop for Cognitive Neuromorphic Engineering.
The architectures for sequence learning and space representation implemented in analog neuromorphic chips represent a breakthrough towards cognitive architectures for this type of hardware, which is extremely low-power and efficient, but also very noisy and unpredictable in its dynamics. Being able to reliably generate sequences of states on this parallel hardware will allow us to use it for action planning and behaviour generation. Mixed-signal analog/digital neuromorphic hardware is at least three orders of magnitude more power-efficient than digital neuromorphic systems (consuming on the order of 4 mW at average activity). This hardware also has a compact form factor (40 mm² in a 180 nm process) and real-time characteristics (matching the timescale of processes and events in physical environments). However, these computing devices are not "reliable": computation on them is not precisely reproducible and suffers from device mismatch and parameter drift. The methods developed in this project, namely the use of population dynamics and of attractor states generated by DNF dynamics, help to cope with these challenges and enable reliable computation with these devices. Using this type of hardware for computation-hungry tasks, such as recognition of objects, places, and events, or planning action sequences, would transform fields such as the Internet of Things and smart environments, surveillance, and prosthetics, since more computation would be enabled locally at a low power budget.
A "connectionist" view on the obstacle avoidance neuromorphic architecture
Object tracking with DNFs on a neuromorphic smart camera SCAMP
Learning hierarchical sensorimotor sequences in a spiking neural network
Robot controlled by neuromorphic chip ROLLS navigating in an arena
Robotic setup used in the project: ROLLS controlling the robot through the Parallella board
Workflow to realise a cognitive neural-dynamic architecture in neuromorphic hardware
A "neuronal program" -- connectivity matrix used to realise navigation on a neuromorphic chip
Filtering function of a Dynamic Neural Field