CORDIS - EU research results

An in-memory dataflow accelerator for deep learning

Project description

Deep neural networks improved by an in-memory computing accelerator

With the Internet of Things (IoT), computing and electronics play an ever-increasing role in everyday life, and many thus seek to advance these fields. Deep neural networks, a technology inspired by biological neural networks and composed of parallel processing units called neurons connected by plastic synapses, have recently seen wide use in cloud data centres as well as several other IoT services. Unfortunately, their execution remains inefficient because millions of synaptic weight values must be shuttled between memory and processing units. The EU-funded MEMFLUX project will develop an in-memory computing accelerator prototype tailored for ultra-low-latency, ultra-low-power deep neural network inference.

Objective

Deep neural networks (DNNs), loosely inspired by biological neural networks, consist of parallel processing units called neurons interconnected by plastic synapses. By tuning the weights of these interconnections, these networks are able to perform certain cognitive tasks remarkably well. DNNs are being deployed all the way from cloud data centres to edge servers and even end devices, and are projected to represent a market worth tens of billions of euros for semiconductor companies alone within the next few years. There is a significant effort towards the design of custom ASICs based on reduced-precision arithmetic and highly optimised dataflow. However, one of the primary reasons for the inefficiency of conventional hardware, namely the need to shuttle millions of synaptic weight values between the memory and processing units, remains unaddressed. In-memory computing is an emerging computing paradigm that addresses this challenge of the processor-memory dichotomy. For example, a computational memory unit with resistive memory (memristive) devices organised in a crossbar configuration is capable of performing matrix-vector multiply operations in place by exploiting Kirchhoff's circuit laws. Moreover, the computational time complexity reduces to O(1). The goal of this project is to prototype such an in-memory computing accelerator for ultra-low-latency, ultra-low-power DNN inference.
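To make the crossbar idea concrete, the following sketch simulates the physics the paragraph above describes. This is purely illustrative and not the MEMFLUX hardware or IBM's implementation: weights are mapped to device conductances, an input vector is applied as column voltages, Ohm's law gives the per-device currents, and Kirchhoff's current law sums them along each row, yielding the matrix-vector product in a single parallel step.

```python
def crossbar_mvm(G, V):
    """Simulate an in-memory matrix-vector multiply on a memristive crossbar.

    G[i][j] is the conductance of the device at row i, column j (the stored
    synaptic weight). V[j] is the voltage applied to column j. Ohm's law
    gives a per-device current G[i][j] * V[j]; Kirchhoff's current law sums
    these currents along each row wire, so the read-out current on row i is
    I[i] = sum_j G[i][j] * V[j] -- the matrix-vector product, computed where
    the weights are stored, with no weight movement.
    """
    return [sum(g * v for g, v in zip(row, V)) for row in G]


# Hypothetical example: a 2x3 weight matrix mapped to conductances.
G = [[0.1, 0.2, 0.3],
     [0.4, 0.5, 0.6]]
V = [1.0, 2.0, 3.0]
print(crossbar_mvm(G, V))
```

In hardware all rows integrate their currents simultaneously, which is why the operation is O(1) in time rather than O(n^2) as in a sequential multiply; the software loop here only models the result, not the parallelism.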

Host institution

IBM RESEARCH GMBH
Net EU contribution
€ 150 000,00
Address
SAEUMERSTRASSE 4
8803 Rueschlikon
Switzerland
Region
Schweiz/Suisse/Svizzera Zürich Zürich
Activity type
Private for-profit entities (excluding Higher or Secondary Education Establishments)
Total cost
No data

Beneficiaries (1)