CORDIS - EU research results

Bio-inspired Spin-Torque Computing Architectures

Periodic Reporting for period 4 - bioSPINspired (Bio-inspired Spin-Torque Computing Architectures)

Reporting period: 2021-03-01 to 2022-07-31

The aim of the bioSPINspired project was to demonstrate that coupled spin-torque nanodevices can revolutionize bio-inspired computing. We showed that they provide the key to implementing abstract computing concepts inspired by the non-linear dynamics of the brain, concepts that had for the most part only been modelled until now. Reaching this goal required progress on every front: materials science for spin-torque nanodevices, the physics of dynamical coupling between these devices, bio-inspired computational models grounded in the hardware implementation of non-linear dynamics, and computing devices built from complex networks of interconnected spintronic neurons and synapses.

Biological systems have impressive computing abilities. We humans are able to recognize people we barely know in just a fraction of a second, even in a crowd. And we perform this incredibly complex task with 10^4 times less power than any supercomputer! It therefore makes sense to take inspiration from biology to build data processing systems that perform low-power 'cognitive' tasks on-chip and complement our traditional microprocessors. Today, incredible advances have been made towards understanding the way the brain computes, and the machine-learning community is developing impressively efficient 'brain-like' methods to perform cognitive tasks. Deep neural networks are now the working principle of virtual assistants on smartphones and are used for a wide range of classification tasks. Indeed, we need to invent new ways to rapidly and automatically make sense of the massive amount of digital information we generate every day, and neural networks are intrinsically suited to such cognitive tasks.

However, neural networks in their present software form cannot be fully satisfying. It is the massively parallel, analog and relatively uniform architecture of biological systems that gives them their greatest assets: speed, low energy consumption and tolerance to defects. When mapped onto the sequential architecture of our computers, bio-inspired algorithms lose these precious qualities and suffer from the excessive energy dissipation that limits the performance of our processors. Building bio-inspired hardware is therefore extremely relevant today, given its broad application scope and low energy consumption.
Nanodevices for bio-inspired computing. Like biological systems, bio-inspired hardware should be composed of a huge number of computing nodes and connections in order to be efficient (there are 10^11 neurons and 10^15 synapses in the brain). While CMOS technology might appear to be the most mature route to building such systems, it suffers from the high number of transistors required to imitate synapses and neurons, and from the related power dissipation. Fabricating devices that can emulate synapses and neurons at the nanoscale therefore appears to be the key to developing dense, efficient bio-inspired chips. But this requires adapting existing abstract bio-inspired computing models to specific hardware implementations. The materials, the physics that allows nanodevices to embody interesting functions, the overall hybrid CMOS-nanodevice architecture and the bio-inspired computing models need to be designed together.

Neurons and synapses are dynamical objects. A synapse's ability to transmit information is modulated in time according to the activity of the neurons it interconnects, which allows the network to learn. Neurons can be modelled as nonlinear oscillators that adjust their rhythms depending on incoming signals. The brain itself displays a wealth of phenomena characteristic of non-linear dynamical systems: synchronization of oscillating neural assemblies, criticality and even chaotic behaviour. Computing models called recurrent neural networks take inspiration from these rich brain dynamics to perform data processing. Recurrent neural networks have remarkable computing capabilities and can implement any kind of dynamics (from fixed points to chaos). Attractors can be leveraged to store memories, while transient dynamics can be used to process input time sequences provided by sensors or to generate trajectories as outputs for motor control.
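The computing principle behind such recurrent networks can be illustrated with a minimal software sketch: an echo-state network, a standard reservoir-computing model in which a fixed random recurrent network supplies rich transient dynamics and only a linear readout is trained. All sizes, parameters and the toy task below are illustrative choices, not the project's devices.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100                                       # reservoir neurons
W = rng.normal(0, 1, (N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))     # spectral radius < 1: fading transients
W_in = rng.uniform(-1, 1, N)

def run_reservoir(u):
    """Drive the nonlinear reservoir with input sequence u, collect its states."""
    x = np.zeros(N)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in * u_t)       # each node acts as a nonlinear unit
        states.append(x.copy())
    return np.array(states)

# Toy task: reconstruct a delayed copy of the input from the transient states.
u = rng.uniform(-0.5, 0.5, 300)
X = run_reservoir(u)
target = np.roll(u, 2)                        # input delayed by two steps
X_tr, y_tr = X[50:], target[50:]              # discard the initial transient
W_out = np.linalg.lstsq(X_tr, y_tr, rcond=None)[0]  # train the linear readout only
pred = X_tr @ W_out
mse = np.mean((pred - y_tr) ** 2)
```

Only `W_out` is learned; the recurrent part stays fixed, which is what makes this family of models attractive for physical substrates whose internal dynamics cannot be reprogrammed at will.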

In bioSPINspired, we showed that spin torque nanodevices, which are multi-functional and tunable nonlinear dynamical nano-components, are ideal building blocks for the hardware implementation of models that harness the power of complex non-linear dynamical recurrent networks for computing.
We initially demonstrated that a hardware spintronic oscillator, despite its nanoscale dimensions, could mimic neuronal activity and recognize spoken digits uttered by various speakers using its transient, time-multiplexed dynamics, with an accuracy equivalent to that of software-based solutions (Nature 2017). We then demonstrated that four interconnected spintronic nano-oscillators could carry out complex pattern classification tasks by synchronizing their oscillations, emulating the behaviour of coupled biological neurons (Nature 2018).
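The idea of time-multiplexing a single oscillator so that it behaves like many neurons can be sketched in software as a delay-based reservoir, in the spirit of that experiment: one nonlinear node with delayed feedback is sampled at successive time slots, each slot playing the role of a "virtual neuron". The node model, mask and toy task below are simplified assumptions, not the magnetization dynamics of the actual device.

```python
import numpy as np

rng = np.random.default_rng(1)
N_virt = 50                                  # virtual neurons carried by one node
mask = rng.choice([-1.0, 1.0], N_virt)       # fixed mask time-multiplexes the input

def single_node_reservoir(u, a=0.5, b=0.5):
    """One nonlinear node plus delayed feedback emulates N_virt neurons."""
    delayed = np.zeros(N_virt)               # states one full delay loop earlier
    states = []
    for u_t in u:
        new = np.empty(N_virt)
        x = delayed[-1]                      # node state flows across time slots
        for k in range(N_virt):
            # the same physical node is reused for every virtual neuron: it mixes
            # its immediate past, the delayed feedback and the masked input
            x = np.tanh(a * x + b * delayed[k] + mask[k] * u_t)
            new[k] = x
        delayed = new
        states.append(new)
    return np.array(states)

# Toy task: recover the previous input sample from the transient states.
u = rng.uniform(-0.3, 0.3, 400)
X = single_node_reservoir(u)
y = np.roll(u, 1)                            # input delayed by one step
X_tr, y_tr = X[50:], y[50:]                  # drop the initial transient
W_out = np.linalg.lstsq(X_tr, y_tr, rcond=None)[0]
mse = np.mean((X_tr @ W_out - y_tr) ** 2)
```

The appeal of this scheme for hardware is that a single physical device replaces an entire layer of neurons, at the cost of serializing the computation in time.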

Our spintronic nanoscale neurons broadcast their computational output as radio-frequency signals, which offer superior communication abilities compared to traditional wire-based connectivity. We designed synapses that are compatible with these neurons and crafted from the same materials. Our experiments showed that these synapses perform a weighted sum of the radio-frequency signals emanating from the neurons, precisely mirroring the operation of contemporary neural network algorithms (Neuromorphic Computing and Engineering 2021). These results substantiate the potential of harnessing the dynamics of spintronic devices and modules to miniaturize, accelerate, and reduce the energy consumption of artificial intelligence hardware.
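How a weighted sum can ride on radio-frequency signals can be pictured with a toy signal-processing sketch: each neuron emits at its own frequency, and each synapse band-selects one component and converts it into a weighted DC-like contribution. The idealized filtering and rectification below stand in for the real device physics, and all frequencies, amplitudes and weights are invented for illustration (the sign of a weight is assumed to come from phase or biasing).

```python
import numpy as np

fs = 100e9                        # sampling rate of the simulated RF trace (Hz)
t = np.arange(100_000) / fs       # 1 microsecond window

neuron_freqs = np.array([1.0e9, 1.5e9, 2.1e9])  # emission frequencies (Hz)
amplitudes = np.array([0.3, 0.8, 0.5])          # neuron outputs, encoded in RF amplitude
weights = np.array([0.9, -0.4, 0.6])            # synaptic weights (sign: phase/bias)

# All neurons share one transmission line: their RF signals simply add up.
rf = sum(a * np.sin(2 * np.pi * f * t) for a, f in zip(amplitudes, neuron_freqs))

def synapse_dc(rf, f_res, weight, bw=50e6):
    """Band-select the component near f_res, rectify it, scale by the weight."""
    spectrum = np.fft.rfft(rf)
    freqs = np.fft.rfftfreq(len(rf), 1 / fs)
    band = np.abs(freqs - f_res) < bw
    component = np.fft.irfft(np.where(band, spectrum, 0), n=len(rf))
    power = np.mean(component ** 2)       # rectification responds to RF power
    return weight * np.sqrt(2 * power)    # recover the amplitude, apply the weight

dc_sum = sum(synapse_dc(rf, f, w) for f, w in zip(neuron_freqs, weights))
expected = np.dot(weights, amplitudes)    # the weighted sum the chain implements
```

Because each synapse is frequency-selective, one shared line carries all neuron outputs simultaneously, which is what gives RF connectivity its advantage over one wire per connection.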

Moreover, we developed innovative learning algorithms that leverage the dynamics of physical systems for learning, as opposed to complex mathematical procedures that do not align with the distinct characteristics of such hardware. We demonstrated that the equilibrium propagation algorithm enables a natural form of error backpropagation within a physical system (NeurIPS 2019) and confirmed its scalability to complex tasks (Frontiers in Neuroscience 2021).
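In essence, equilibrium propagation trains an energy-based network with two relaxation phases: a free phase, and a phase weakly nudged toward the target; the weight update contrasts local correlations between the two phases. Below is a minimal software sketch under stated assumptions: a tiny tanh energy network (the original algorithm uses a hard sigmoid), a single toy training pattern, and arbitrarily chosen sizes and hyper-parameters.

```python
import numpy as np

rho = np.tanh  # smooth activation standing in for the original hard sigmoid

def relax(x, W1, W2, h, y, target=None, beta=0.0, steps=60, dt=0.1):
    """Settle toward a minimum of the energy, optionally nudged by beta * cost."""
    h, y = h.copy(), y.copy()
    for _ in range(steps):
        # E = 0.5(h.h + y.y) - rho(x) W1 rho(h) - rho(h) W2 rho(y)
        dh = -h + (1 - rho(h) ** 2) * (rho(x) @ W1 + W2 @ rho(y))
        dy = -y + (1 - rho(y) ** 2) * (rho(h) @ W2)
        if target is not None:
            dy += beta * (target - y)          # weak clamping toward the target
        h += dt * dh
        y += dt * dy
    return h, y

rng = np.random.default_rng(2)
W1 = 0.5 * rng.standard_normal((2, 4))         # input -> hidden couplings
W2 = 0.5 * rng.standard_normal((4, 2))         # hidden -> output couplings
x, target = np.array([1.0, -0.6]), np.array([0.4, -0.2])

beta, lr = 0.5, 0.3
costs = []
for _ in range(300):
    h0, y0 = relax(x, W1, W2, np.zeros(4), np.zeros(2))        # free phase
    h1, y1 = relax(x, W1, W2, h0, y0, target, beta)            # nudged phase
    # contrastive, purely local update: correlations nudged minus free, / beta
    W1 += lr / beta * (np.outer(rho(x), rho(h1)) - np.outer(rho(x), rho(h0)))
    W2 += lr / beta * (np.outer(rho(h1), rho(y1)) - np.outer(rho(h0), rho(y0)))
    costs.append(0.5 * np.sum((y0 - target) ** 2))
```

The point of the algorithm for hardware is that each update only needs quantities available at the two ends of a connection in the two phases, so the physics of relaxation itself does the work that backpropagation does in software.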
Our pioneering findings have sparked an upsurge in neuromorphic activities within the spintronics community, which is now actively exploring the multifunctionality and reliability of spin-based devices for developing low power, high-accuracy, and embedded artificial intelligence hardware.
Four spintronic oscillators learn to recognize vowels