Content archived on 2024-05-27

Real-time Spiking Networks for Robot Control


Towards optimal information storage

Different aspects of biological neurons that can be implemented in specific hardware architectures and used for motor control were studied extensively, with the ultimate aim of constructing real-time adaptive robots.

Digital Economy

Although significant advances have recently been made in artificial intelligence and machine learning, major issues remain unresolved. These include the learning abilities and the finesse of movement that humans and animals display ubiquitously but that machines find difficult to mimic. Do the brain's neural cells provide a computational platform with characteristics and representations that could allow such abilities to be expressed in machines and applied in practice? By investigating the neural mechanisms of information processing in the brain, the SPIKEFORCE project sought to address questions of this kind about the computational methods used in current computer and integrated circuit technologies.

Neurons in the brain process analogue signals with continuous values, whereas their communication in the form of impulses, or spikes, is essentially digital and asynchronous in time. The project partners' research work, coordinated by the École Normale Supérieure, focused on how spiking neurons enable rapid decision-making and, importantly, continuous learning.

Since information is stored in the brain as changes in the efficacy of excitatory synapses, it was argued that their independent operation would have a crucial benefit: maximising information storage. Furthermore, the link between the distribution of changes in synaptic efficacy and both what has been learned and how it has been learned was explored. For this purpose, a prototypical feed-forward neural network was employed. The task assigned to the network consisted of learning the largest possible number of input/output associations at a given reliability level.

Analytical techniques widely employed in the statistical mechanics of distributed information systems provided the means to calculate the maximal information storage capacity of the network. This maximal capacity was found to depend on a number of network parameters but, crucially, the optimal distribution of synaptic modifications contained a majority of silent synapses. Moreover, the distribution resembled the distribution of synaptic modifications reported for cerebellar synapses, illustrating the insight into learning and memory processes that can be gained by studying synaptic responses.
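As an illustration of the analogue/digital distinction described above, the sketch below simulates a leaky integrate-and-fire neuron in Python. It is not the project's actual model, and the parameters (membrane time constant, resistance, threshold) are generic textbook values chosen purely for demonstration: the membrane potential evolves as a continuous analogue variable, while the output is a train of discrete, asynchronously timed spikes.

```python
import numpy as np

def lif_simulate(input_current, dt=1e-3, tau_m=20e-3, v_rest=-70e-3,
                 v_thresh=-50e-3, v_reset=-70e-3, r_m=1e7):
    """Return spike times for a leaky integrate-and-fire neuron driven by input_current."""
    v = v_rest
    spike_times = []
    for step, i_in in enumerate(input_current):
        # Analogue part: continuous leaky integration of the input current.
        v += (-(v - v_rest) + r_m * i_in) * dt / tau_m
        # Digital part: a threshold crossing produces an all-or-none, asynchronous spike.
        if v >= v_thresh:
            spike_times.append(step * dt)
            v = v_reset
    return spike_times

# A constant suprathreshold current of 2.5 nA for 1 s (1 ms resolution) gives a regular spike train.
spikes = lif_simulate(np.full(1000, 2.5e-9))
print(f"{len(spikes)} spikes, first at {spikes[0]:.3f} s")
```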
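To make the storage-capacity task more concrete, the following sketch sets up a comparable toy problem: a perceptron-like unit with purely excitatory (non-negative) synapses learns random input/output associations, and the fraction of synapses that end up exactly silent (zero weight) is reported. All numbers (network size, learning rate, threshold) are arbitrary illustrative choices, and the on-line perceptron rule stands in for the analytical statistical-mechanics calculation used by the project.

```python
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_patterns = 200, 160   # toy network size and number of associations to store
x = rng.integers(0, 2, size=(n_patterns, n_inputs)).astype(float)  # binary input patterns
y = rng.choice([-1.0, 1.0], size=n_patterns)                       # desired outputs

w = 0.1 * rng.random(n_inputs)    # excitatory synaptic weights, start small and positive
theta = 5.0                       # fixed firing threshold (arbitrary illustrative value)
lr = 0.01                         # learning rate

for epoch in range(500):
    errors = 0
    for xi, yi in zip(x, y):
        out = 1.0 if xi @ w >= theta else -1.0
        if out != yi:
            errors += 1
            w += lr * yi * xi                 # perceptron-style update on active inputs
            np.clip(w, 0.0, None, out=w)      # enforce excitatory (non-negative) synapses
    if errors == 0:                           # stop once every association is stored
        break

print(f"fraction of silent synapses: {np.mean(w == 0.0):.2f}")
```

Because the excitatory constraint can only be satisfied by pinning some weights at zero, a noticeable share of synapses typically ends up silent in such a toy run, echoing (but not reproducing) the analytical finding reported by the project.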
