CORDIS - EU research results

Intelligent Memories that Perform Inference with the Physics of Nanodevices

Periodic Reporting for period 4 - NANOINFER (Intelligent Memories that Perform Inference with the Physics of Nanodevices)

Reporting period: 2021-09-01 to 2023-08-31

Cognitive tasks are increasingly necessary in modern electronics. The energy efficiency of the associated algorithms, which rely on abundant stored parameters, is severely limited by the separation of computation and memory elements in conventional computers. NANOINFER directly addressed this challenge by developing intelligent memory chips that natively perform both memory and computing functions, using CMOS and emerging nanodevices. These chips execute Bayesian inference or neural network algorithms. The project included theoretical investigations and intelligent memory chip designs, supported by proof-of-concept experimental demonstrations. The proposed architectures, based on spintronic and memristive memories, maximize energy efficiency by leveraging the complex physics of these emerging devices for inference operations and the storage of model parameters, and by minimizing exchanges between computation units and memory. Inference is performed using sampling algorithms and compute-in-memory functions that make it possible to tackle difficult problems and are robust to nanodevice imperfections. NANOINFER also developed learning-capable chips, able to adapt the stored Bayesian or neural network model to new data, again leveraging the complex physics of nanodevices.
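The sampling-based inference mentioned above can be illustrated with stochastic computing, in which a probability is encoded as the time-average of a random bitstream and probabilities are multiplied simply by AND-ing independent streams. The sketch below is illustrative only (function names and parameters are ours, and a software RNG stands in for the stochastic nanodevices):

```python
import random

def stochastic_stream(p, n, rng):
    """Random bitstream whose time-average encodes the probability p."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

def bayes_fuse(likelihoods, n=100_000, seed=0):
    """Multiply likelihoods by AND-ing independent stochastic bitstreams.
    In hardware, each stream would come from a stochastic nanodevice and
    the AND is a single logic gate, avoiding full-precision multipliers."""
    rng = random.Random(seed)
    streams = [stochastic_stream(p, n, rng) for p in likelihoods]
    joint = (all(bits) for bits in zip(*streams))
    return sum(joint) / n

# Fusing two likelihoods, 0.8 and 0.5, should estimate their product, ~0.40.
est = bayes_fuse([0.8, 0.5])
```

The robustness to device imperfections follows from the encoding: a flipped bit only perturbs the estimate by 1/n rather than corrupting a binary word.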

As the intelligent memories developed within NANOINFER are very low power, they can be used directly within smart devices, reducing their reliance on data centers. This can bring massive energy savings by avoiding the energy costs of communication between smart devices and data centers. Major applications include systems that process sensory-motor information, systems that fuse information from diverse sensors and/or prior knowledge (e.g. smart sensors and biomedical chips), and driver assistance or vehicle automation. The possibility of learning within an intelligent memory can also give smart devices extreme adaptability.
We first explored nanodevices that can be used within natively intelligent memories. Hafnium oxide-based memories emerged as excellent candidates for storing information, as they are compatible with current microelectronics technology, scalable, and reliable. We realized that, when used in natively intelligent memories, hafnium oxide-based memories can be operated in regimes different from those of their traditional applications, with large benefits in energy efficiency and reliability. In particular, they can be used in a stochastic regime, naturally implementing the random variables of Bayesian models. Hafnium oxide-based memories also allow the stored value to be adapted, providing an appropriate substrate for learning. We also investigated spin electronics-based nanodevices called superparamagnetic tunnel junctions. These intrinsically stochastic devices provide excellent building blocks for the sampling operation, a central part of most of our natively intelligent memory designs.
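The intrinsic stochasticity of a superparamagnetic tunnel junction is often described phenomenologically by a Bernoulli read-out whose probability follows a sigmoid of the applied bias. The toy model below assumes exactly that (the sigmoid form is a common phenomenological model; the bias scale is illustrative, not measured):

```python
import math
import random

def smtj_sample(bias, n, seed=0):
    """Model n reads of a superparamagnetic tunnel junction: each read is a
    Bernoulli draw whose probability of '1' is a sigmoid of the bias.
    Returns the observed fraction of '1' states."""
    rng = random.Random(seed)
    p = 1.0 / (1.0 + math.exp(-bias))  # illustrative bias-to-probability law
    return sum(1 for _ in range(n) if rng.random() < p) / n

rate = smtj_sample(0.0, 50_000)  # zero bias: the two states are equiprobable
```

Tuning the bias thus turns one device into a tunable random-number generator, which is exactly what a sampling-based inference engine needs.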

Second, we designed complete natively-intelligent memories associating nanodevices and conventional transistors. This design work involved the codesign of adapted machine learning architectures, circuits and systems, while simultaneously testing the appropriateness of the ideas on real arrays of nanodevices. We focused on three different concepts:
- Bayesian machines that perform Bayesian inference
- binarized neural networks, which are specially adapted to natively intelligent memories
- and the Bayesian version of neural networks, which provides uncertainty quantification.
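Of the three concepts, binarized neural networks are the simplest to sketch: with weights and activations restricted to ±1, a neuron reduces to XNOR operations, a popcount, and a threshold. The minimal sketch below (our own illustrative formulation, using ±1 integers so that XNOR becomes multiplication) shows why such networks suit in-memory implementation:

```python
def bnn_neuron(inputs, weights, threshold):
    """Binarized neuron over {-1, +1} values: XNOR of input and weight
    (which on +/-1 is just multiplication), accumulate (popcount), then
    threshold. No multipliers or floating point are needed in hardware."""
    acc = sum(x * w for x, w in zip(inputs, weights))
    return 1 if acc >= threshold else -1

out = bnn_neuron([1, -1, 1, 1], [1, 1, 1, -1], 0)
```

Because each weight is a single bit, it maps directly onto one (or one pair of) memristive cells, and the accumulate-and-threshold step can happen next to the array.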

These designs achieve outstanding energy efficiency through three mechanisms. They collocate logic and memory, minimizing data movement within the system. They rely on simplified arithmetic. Finally, they exploit nanodevices in adapted regimes, where they are not deterministic. Using a hybrid CMOS/memristor technology, we fabricated multiple demonstrators of natively intelligent memories, incorporating between 1,024 and 16,384 memristors, which demonstrated the viability of the natively intelligent memory concepts and gave the project high visibility. The demonstrators can be programmed with proof-of-concept applications, e.g. gesture recognition, sleep cycle classification, or arrhythmia classification.
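The collocation of logic and memory in a memristor array rests on a simple physical fact: applying voltages to the rows of a crossbar makes each device contribute a current I = G·V (Ohm's law), and the currents sum on each column (Kirchhoff's law), so the array computes a vector-matrix product in place. A schematic simulation (all values illustrative):

```python
def crossbar_mac(voltages, conductances):
    """In-memory multiply-accumulate on a memristor crossbar: each device
    contributes I = G * V (Ohm's law) and column currents sum
    (Kirchhoff's current law), yielding one dot product per column."""
    return [sum(v * g for v, g in zip(voltages, column))
            for column in zip(*conductances)]

# 2 input rows x 3 output columns; conductances in siemens (illustrative).
currents = crossbar_mac([1.0, 0.2],
                        [[1e-6, 2e-6, 0.0],
                         [2e-6, 0.0, 4e-6]])
```

The same stored conductances serve simultaneously as the model parameters and as the multipliers, which is why no parameter ever has to travel to a separate processor.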

We have also proposed several approaches to provide these memories with learning features, each exploiting the device physics in a different yet highly efficient manner:
- hybrid analog/digital training of binarized networks, which exploits the analog physics of memory devices but requires only digital CMOS circuitry
- Bayesian learning, which exploits the intrinsic stochastic effects of memory devices to implement a sampling operation
- Equilibrium Propagation, which incorporates the physics of devices and circuits into the learning process.
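The Bayesian learning approach in this list treats learning as sampling from a posterior distribution, with the stochasticity of the memory devices supplying the randomness. A minimal software stand-in is Metropolis sampling of a coin-bias posterior, where a Gaussian perturbation plays the role that programming noise would play in hardware (the whole example is our own illustration, not the project's algorithm):

```python
import math
import random

def metropolis_posterior(data, steps=20_000, seed=0):
    """Metropolis sampling of the posterior over a coin's bias p, given
    binary observations and a uniform prior. The Gaussian proposal stands
    in for the intrinsic randomness a stochastic memory device provides."""
    rng = random.Random(seed)
    k, n = sum(data), len(data)

    def log_post(p):
        if not 0.0 < p < 1.0:
            return float("-inf")
        return k * math.log(p) + (n - k) * math.log(1.0 - p)

    p, samples = 0.5, []
    for _ in range(steps):
        q = p + rng.gauss(0.0, 0.05)  # noisy proposal ("programming noise")
        if math.log(rng.random()) < log_post(q) - log_post(p):
            p = q
        samples.append(p)
    kept = samples[steps // 2:]       # discard burn-in
    return sum(kept) / len(kept)

# 7 heads out of 10, uniform prior: posterior mean is 8/12 ~ 0.667.
mean_p = metropolis_posterior([1] * 7 + [0] * 3)
```

The key point is that the accept/reject loop only needs cheap local randomness, which is exactly what imperfect nanodevices offer for free.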

The project results were disseminated in 19 peer-reviewed journal publications (including 8 in Nature-family journals) and 18 international conference proceedings (including A+ conferences in several fields: IEDM, ESSCIRC, NeurIPS, and DATE). They were also presented in 49 invited talks and keynotes at international conferences, schools, and workshops. The results led to several more applied follow-up projects. Our Bayesian machine demonstrator attracted significant interest from the press and was covered by, among others, Nature, IEEE Spectrum, and the French edition of Scientific American. The concept of natively intelligent memory, closer to brain memory than to conventional computer memory, was presented in several outreach events.
We first presented novel approaches to implementing machine learning schemes using hafnium oxide memristors and superparamagnetic tunnel junctions, optimized for maximal energy efficiency. Our experimental findings reveal two key advances: first, the intrinsic capability of superparamagnetic tunnel junctions for Bayesian sensing; and second, an innovative method to enhance the reliability of memristors through complementary programming. This latter technique circumvents the need for error correction methods and is versatile enough for use in Bayesian inference and binary or ternary neural networks.
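Complementary programming can be pictured as differential storage: each bit lives in a pair of devices programmed to opposite conductance states and is read by comparing the pair rather than against an absolute threshold, so common-mode drift cancels. The sketch below is schematic (the conductance values are arbitrary and the encoding is our simplified illustration of the differential idea, not the published scheme):

```python
def encode(bit):
    """Store a bit as a (device, complement) conductance pair in siemens:
    one device high, the other low (values are illustrative)."""
    return (100e-6, 1e-6) if bit else (1e-6, 100e-6)

def decode(pair):
    """Differential read: compare the two devices instead of using an
    absolute threshold, so drift affecting both devices equally cancels."""
    g, g_bar = pair
    return 1 if g > g_bar else 0

# Even if both conductances drift by the same factor, the bit survives.
drifted = tuple(g * 0.3 for g in encode(1))
bit = decode(drifted)
```

An absolute-threshold read would misclassify the drifted pair; the differential read does not, which is what removes the need for error correction.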

Second, we designed, fabricated, and tested several pioneering hybrid CMOS/nano natively intelligent memory systems. Among these, our two Bayesian machines stand out as the first memristor-based systems fabricated specifically for Bayesian inference. These machines have demonstrated proficiency in tasks such as gesture and sleep cycle recognition. Another major achievement is our memristor-based Bayesian neural network that leverages memristor imperfections to model probability distributions, showcasing its potential in applications like arrhythmia recognition with integrated uncertainty assessment.
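The idea of leveraging memristor imperfections to model probability distributions can be sketched as Monte Carlo prediction: each weight is a distribution (here a Gaussian standing in for programming variability), and repeated forward passes with resampled weights give both a prediction and an uncertainty estimate. All names and values below are our illustrative assumptions:

```python
import random

def bayesian_predict(x, weight_means, weight_stds, n_samples=2000, seed=0):
    """Monte Carlo prediction of a linear unit whose weights are drawn from
    per-device distributions (Gaussians modelling memristor programming
    variability). Returns (mean output, spread); the spread quantifies
    the model's uncertainty about its own prediction."""
    rng = random.Random(seed)
    outs = []
    for _ in range(n_samples):
        w = [rng.gauss(m, s) for m, s in zip(weight_means, weight_stds)]
        outs.append(sum(xi * wi for xi, wi in zip(x, w)))
    mean = sum(outs) / n_samples
    var = sum((o - mean) ** 2 for o in outs) / n_samples
    return mean, var ** 0.5

mean, spread = bayesian_predict([1.0, 2.0], [0.5, -0.25], [0.1, 0.1])
```

In the hardware version, each read of the array performs one such weight draw for free, so the ensemble comes at the cost of repeated reads rather than stored copies of the network.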

Finally, we explored various learning methodologies and their applicability to our designs. A remarkable achievement in this domain is the successful experimental training of 16,384 memristors for cancerous mammogram recognition using a Bayesian learning approach that capitalizes on memristor imperfections. This accomplishment marks the largest "nanoBayesian" experiment conducted to date. Additionally, we demonstrated that equilibrium propagation, a learning algorithm inspired by physical principles and particularly suited to natively intelligent memories, can be effectively scaled to tackle complex problems.
Photograph of a Bayesian natively intelligent memory using a hybrid CMOS/nanodevice technology