
Spatio-Temporal Representation on Neuromorphic Architecture

Periodic Reporting for period 1 - STRoNA (Spatio-Temporal Representation on Neuromorphic Architecture)

Reporting period: 2018-10-26 to 2020-10-25

The latest achievements in artificial intelligence and neural networks, especially deep neural architectures on large-scale neuromorphic hardware such as SpiNNaker, and in cognitive robotics and neurorobotics, with the widespread use of robots such as iCub and the recent Pepper platform, provide the opportunity to significantly advance our understanding of human cognition and the brain and to approach human-level artificial intelligence. One of the key success factors in deep learning is its hierarchical structure, inspired by biological processing in the primate visual cortex, which enables convolutional deep networks to learn rich representations. These networks are grounded in high-precision optimization methods and may consume large training datasets and substantial computational resources to learn complex tasks. This yields human-level performance in static image recognition but raises adaptation issues. SpiNNaker is a neuromorphic computer architecture, a massively parallel computing platform based on spiking neural networks (SNNs) in which neurons communicate by a temporal code. Spike Timing Dependent Plasticity (STDP) is believed to underlie learning and information storage in the brain. SpiNNaker builds on spiking, recurrent neural dynamics to offer very fast (even instantaneous) learning, online adaptability and extensibility, robustness against noise, and computation with low numerical accuracy. However, only preliminary results have been obtained with it so far.
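For reference, the pair-based STDP rule adjusts a synaptic weight according to the relative timing of pre- and post-synaptic spikes. The snippet below is a minimal illustrative sketch only; the time constants and learning rates are placeholder values, not the project's settings.

```python
import numpy as np

def stdp_weight_change(delta_t, a_plus=0.01, a_minus=0.012,
                       tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP: potentiate when the pre-synaptic spike precedes the
    post-synaptic spike (delta_t = t_post - t_pre > 0), depress otherwise."""
    if delta_t > 0:
        return a_plus * np.exp(-delta_t / tau_plus)
    return -a_minus * np.exp(delta_t / tau_minus)

# A pre-spike 5 ms before a post-spike strengthens the synapse,
# the reverse ordering weakens it.
print(stdp_weight_change(5.0))   # positive change (potentiation)
print(stdp_weight_change(-5.0))  # negative change (depression)
```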

Simply integrating these technologies is not enough to meet the requirements of developmental cognitive robotics and neurorobotics. We cannot rely only on the exponential increase in computing power to produce state-of-the-art performance on robotic tasks such as object/human behavior recognition and skill learning. The aim of STRoNA (Spatio-Temporal Representation on Neuromorphic Architecture) is to define the technology that maps a computational architecture onto neuromorphic computing circuits, and hence to develop a cognitive model with spatio-temporal representation and a learning algorithm for humanoid robots.

The principal research objectives of the project are: (i) to investigate which spatio-temporal representations of spikes (or neural action potentials) can be used to achieve human level performance on visual perception; (ii) to develop a novel method to process spatio-temporal representation on a neuromorphic architecture to enable learning in online and interactive contexts; and (iii) to validate and adapt the developed system in real world robotics applications.
The objective of WP1 is to investigate how spike trains can be spatially related in the neocortex and how spikes and plasticity can be processed for visual perception.
Mammalian retinas encode visual information into multiple representations using distinct features, likely following the principle of encoding as much information as possible with the fewest signals. In the first layer of the network, bipolar cells sample the input in a nearby region. Synaptic weights are computed according to a two-dimensional Gaussian distribution and stored in a convolution kernel. The incoming weights are then normalized so that they sum to one, and scaled by the required weight so that a single spike will activate the corresponding bipolar neuron; this has the effect of distributing the required activity across the entire receptive field. Each bipolar cell excites a ganglion cell and an amacrine inter-neuron, the latter enforcing competition as it inhibits ganglion cells connected to neighbouring bipolar cells.
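The kernel construction described above can be sketched as follows. This is a minimal illustrative implementation, assuming a square receptive field; the kernel size, spread and target weight (`required_weight`) are placeholder values, not the project's actual parameters.

```python
import numpy as np

def bipolar_kernel(size=5, sigma=1.0, required_weight=2.0):
    """Build a 2D Gaussian convolution kernel, normalise it to sum to one,
    then scale it so the whole receptive field delivers `required_weight`."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    kernel = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    kernel /= kernel.sum()            # weights now sum to one
    return kernel * required_weight   # distribute the required drive over the field

print(bipolar_kernel().sum())  # ~2.0: total input needed to activate one bipolar cell
```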

The objective of WP2 is to lay the foundations for representation and learning with a massively parallel and dynamic neural substrate. A widely observed principle in biological brains is the use of topographic maps, wherein two-dimensional topological relationships are preserved in projections from one brain region to another.
When each neuron in the source layer innervates neurons in the target layer, the mapping between the layers is assumed to be topographically arranged, so that neighbouring neurons in the target layer respond to the activity of neighbouring neurons in the source layer through the mechanism of homeostasis. The model has been implemented in the context of a structural plasticity framework. In line with that implementation, it is extended in depth by adding hidden layers between the source and target layers, as shown in Fig. 1 (b).
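A minimal sketch of how a distance-dependent rewiring rule can give rise to a topographic map is given below. The Gaussian form of the formation probability, the grid size and the fan-in limit are illustrative assumptions, not the exact structural plasticity rule used on SpiNNaker.

```python
import numpy as np

rng = np.random.default_rng(0)
GRID = 16          # source and target layers are GRID x GRID sheets of neurons
SIGMA_FORM = 2.0   # spatial spread of candidate connections (assumed value)

def rewire_step(connections, max_fan_in=16):
    """One structural-plasticity step: each target neuron forms a new synapse
    with a probability that decays with distance to a candidate source neuron,
    eliminating a random existing synapse first if its fan-in is full."""
    for tgt in range(GRID * GRID):
        src = rng.integers(GRID * GRID)               # candidate pre-synaptic neuron
        d2 = ((np.array(divmod(src, GRID)) -
               np.array(divmod(tgt, GRID))) ** 2).sum()
        if rng.random() < np.exp(-d2 / (2 * SIGMA_FORM**2)):
            if len(connections[tgt]) >= max_fan_in:   # eliminate to make room
                connections[tgt].pop(rng.integers(len(connections[tgt])))
            connections[tgt].append(src)
    return connections

conns = {t: [] for t in range(GRID * GRID)}
for _ in range(200):
    conns = rewire_step(conns)   # connectivity drifts towards a topographic map
```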

The objective of WP3 is to design, set up and run experimental evaluations.
To verify the proposed networks and explore the characteristics of structural plasticity in depth, the MNIST dataset is used. MNIST is a large database of 28 × 28 handwritten digit images that is commonly used for training various image processing systems. The images are converted into spike sequences using a Neuromorphic Vision Sensor emulator (pyDVS). Three cases are considered: STDP in conjunction with synaptic rewiring (Case 1), and synaptic rewiring only, without or with lateral connections (Cases 2 and 3, respectively).
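Conceptually, a DVS emulator turns an image sequence into a stream of ON/OFF events by thresholding intensity changes between successive frames. The sketch below illustrates only this idea and does not reproduce the actual pyDVS interface; the threshold value is an assumption.

```python
import numpy as np

def emulate_dvs(frames, threshold=0.1):
    """Yield (t, x, y, polarity) events whenever a pixel's intensity change
    between consecutive frames exceeds the threshold (ON = +1, OFF = -1)."""
    prev = frames[0].astype(float)
    for t, frame in enumerate(frames[1:], start=1):
        diff = frame.astype(float) - prev
        ys, xs = np.where(np.abs(diff) > threshold)
        for y, x in zip(ys, xs):
            yield t, x, y, 1 if diff[y, x] > 0 else -1
        prev = frame.astype(float)

# Example: a 28 x 28 MNIST-sized image presented after a blank frame.
blank = np.zeros((28, 28))
digit = np.random.rand(28, 28)          # stand-in for an MNIST digit
events = list(emulate_dvs([blank, digit]))
```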
This project investigated structural plasticity to replicate the visual pathways in the brain. While most learning methods for deep neural networks tune the weights of connections within a fixed structure, structural plasticity accounts for changes in the structure, or connectivity, between neurons as well as in their weights. The structural plasticity mechanism relies on the formation of neuronal topographic maps between brain regions by reducing the receptive fields of neurons. The model has been implemented within a structural plasticity framework on SpiNNaker, operating in real time and in parallel through activity-independent and/or activity-dependent processes.
To simulate structural plasticity in deep layers on SpiNNaker, several spiking neural network frameworks had to be adapted. The proposed deep structural plasticity model is described using PyNN, a simulator-independent description language. sPyNNaker is the simulator that executes this specification on the largest neuromorphic architecture, the one-million-core SpiNNaker machine. To build topographic maps for handwritten digits, the MNIST dataset is converted using a Neuromorphic Vision Sensor emulator (pyDVS).
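As an illustration of the PyNN description style, a two-population network with an STDP synapse might be specified along the following lines. The neuron model, population sizes, connector and plasticity parameters here are placeholders, and the script uses the generic PyNN API rather than the project's actual sPyNNaker configuration (running it on hardware requires a configured SpiNNaker machine; a software PyNN backend can be substituted for the import).

```python
import pyNN.spiNNaker as sim   # a software backend such as pyNN.nest also works

sim.setup(timestep=1.0)

# Input layer driven by Poisson noise, output layer of LIF neurons.
source = sim.Population(28 * 28, sim.SpikeSourcePoisson(rate=10.0))
target = sim.Population(100, sim.IF_curr_exp())

# Pair-based STDP with an additive weight dependence (placeholder parameters).
stdp = sim.STDPMechanism(
    timing_dependence=sim.SpikePairRule(tau_plus=20.0, tau_minus=20.0,
                                        A_plus=0.01, A_minus=0.012),
    weight_dependence=sim.AdditiveWeightDependence(w_min=0.0, w_max=0.5),
    weight=0.05, delay=1.0)

sim.Projection(source, target, sim.FixedProbabilityConnector(0.1),
               synapse_type=stdp)

target.record("spikes")
sim.run(1000.0)
spikes = target.get_data("spikes")
sim.end()
```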
Regarding the iCub humanoid robot, the fellow gained experience with its robotic middleware YARP, the iCub Simulator, the Robotology ecosystem, and so on.

The results of the implementation and experimental evaluations have been submitted as publications to conference proceedings, with journal publications planned to support replication, giving a clear presentation of how the system works and learns to achieve human-level performance on a neuromorphic architecture.
Additionally, with respect to publications during the fellowship, the fellow published three journal papers as first author in Robotics and Autonomous Systems (RAS) and Advanced Robotics (AR), and one book chapter as second author.
He participated in a Turing Workshop on 'Robotics and AI for Health and Social Care', which aimed to achieve a shared awareness of the strengths and complementarity of UK and international projects in this field, to identify the main research challenges, and to foster interaction and collaboration. He also took part in the 7th HBP Summit and Open Day in Athens, Greece, where the Manchester team ran a booth demonstrating SpiNNaker simulations.
[Figure: strina-picture1.png]