Periodic Reporting for period 3 - ANDANTE (Ai for New Devices And Technologies at the Edge)
Reporting period: 2022-06-01 to 2024-02-29
- Use of large amounts of natural resources: electricity and water (for cooling) for the operation of data centers.
- Use of huge communication resources (bandwidth), which increases energy consumption, latency (preventing the development of real-time applications) and the attack surface for cybersecurity threats, and consequently the risk of not ensuring a good level of privacy.
- Use of large amounts of memory.
Cloud computing solutions are unsuitable or inefficient for many applications that could operate much more efficiently at the edge.
ANDANTE aims to develop technological solutions that enable efficient data analysis at the edge rather than in the cloud, helping to eliminate the drawbacks of cloud computing and thereby contributing to the sustainability of new edge applications through embedded AI/ML/DL techniques. For example, to:
- Reduce the use of natural resources such as electricity and water
- Reduce the use of communication bandwidth and its impact on energy consumption (see the illustrative example after this list)
- Enable the development of real-time applications through low-latency solutions
- Reduce the memory footprint and, with it, the energy consumption of the overall system
- Increase the level of security and privacy
- Enable more and smarter IoT solutions in areas such as Digital Industry, Digital Farming, Transport and Smart Mobility, Healthcare and Digital Life.
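As a purely illustrative example of the bandwidth argument (the figures below are generic assumptions, not ANDANTE results): a sensor node that streams raw 16-bit audio sampled at 16 kHz to the cloud for keyword spotting must transmit continuously, whereas on-device inference only needs to report detection events of a few bytes:

\[
16\,000\ \tfrac{\text{samples}}{\text{s}} \times 16\ \text{bit} = 256\ \tfrac{\text{kbit}}{\text{s}} \approx 2.8\ \tfrac{\text{GB}}{\text{day}},
\]

so moving the inference to the edge removes essentially all of this traffic, together with the energy and latency it implies.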
To help achieve all these benefits, ANDANTE's objectives are:
- Build the AI/ML/DL foundations for future edge IoT products
- Leverage innovative integrated circuit (IC) accelerators based on artificial and spiking neural networks to create robust HW and SW platforms for application development
- Promote innovative deep-learning HW and SW solutions for future edge IoT products that combine extreme energy efficiency with robust and powerful cognitive computing capabilities.
- Achieve effective cross-fertilization between leading European foundries, chip designers, system houses, application companies, RTOs and academic research partners
- Build and expand the European ecosystem around the definition, development, production and application of neuromorphic ICs.
First Period:
WP1: Use case requirements and system specifications defined
WP2: Two deliverables were produced: a scorecard of completed target specifications for ANN and SNN, and cell layouts for PCM, SOT-MRAM, FeFET and TFT
WP3: The specifications for Tools and methodologies, building blocks, and Foundation IPs defined
WP4: The ASIC and platform requirements defined
WP5: Use case specifications defined
WP6 Management: the project was set up: rules and procedures for consortium and WP collaboration and risk management, a collaboration infrastructure based on SharePoint, the website, and the first newsletter published
Second Period:
WP1: Use cases and system requirements completed
WP2: Morphology validation for PCM and OxRAM completed. Bitcell-level device and selector integration completed
WP3: The development of 17 different tools was completed and 6 FPGAs were designed
WP4: The ASIC requirements, specifications and architectural design completed. The original SNN ASIC 1.2 was redefined for a design in ST P18 technology
WP5: Use case specifications of the 5 application domains completed and developments ongoing
WP6: Project management, reporting and dissemination done
Third Period:
The major activities and achievements were in line with the objectives and activities initially planned:
WP1: Tracking of the 14 use cases and their 19 associated demonstrators to ensure alignment between the use case system specifications and their implementation
WP2: Full-flow lots for the 40 nm SOT-MRAM, 28 nm FDSOI and 22FDX technologies are available, and three other technologies (PCM, 2T1C back-end DRAM, and IGZO-based FeTFT) were implemented with the aim of creating multi-level cells whose behavior is closer to analog. Devices were successfully characterized
WP3: The design of ASICs, SoCs and FPGAs followed different implementation strategies. 17 tools were upgraded for training, HW generation, neural network mapping onto HW, and simulation. Moreover, 6 FPGA accelerators and the associated algorithms and models were achieved. Analog in-memory computing IPs and neural-network macro-blocks were designed and fabricated to exploit the capabilities of embedded NVM technologies
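The WP3 tools themselves are not detailed here. As a minimal, generic sketch of one step that neural-network-to-hardware mapping flows typically include (this is an illustration under assumptions, not an ANDANTE tool, and all names below are hypothetical), the following Python snippet quantizes a layer's weights to int8, the kind of preparation a floating-point model usually undergoes before being mapped onto a fixed-point edge accelerator and simulated:

import numpy as np

# Illustrative sketch only (not an ANDANTE tool): symmetric per-tensor int8
# quantization of a weight matrix, a typical step when mapping a trained
# network onto a fixed-point edge accelerator.
def quantize_int8(weights: np.ndarray):
    """Return int8 weights plus the scale factor needed to dequantize them."""
    scale = np.max(np.abs(weights)) / 127.0   # map the largest weight to +/-127
    q = np.clip(np.round(weights / scale), -128, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(64, 64)).astype(np.float32)  # toy layer weights
q, scale = quantize_int8(w)

# The quantization error gives a first estimate of the accuracy impact
# before running the mapped network on the accelerator or its simulator.
err = np.mean(np.abs(w - q.astype(np.float32) * scale))
print(f"scale = {scale:.6f}, mean abs error = {err:.6f}")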
WP4: 6 ASICs (3 SNN, 1 front-end, 2 ANN) and 2 SoCs were designed, fabricated, validated, and characterized. 4 platforms and one board are functional. The resulting neuromorphic processors are very efficient, with very low energy consumption: on the order of µW for the SNN ASICs, a few mW for the ANN ASICs and several hundred mW for the AI SoCs
WP5: 19 demonstrators across 14 use cases in 5 application domains were developed, including gathering additional data, developing AI models and implementing non-AI software components. Moreover, reference implementations and simulations were realized, and all these setups were used to evaluate the ANDANTE results
WP6: Dissemination and promotion of the ANDANTE results: 4 Theses, 45 Patents, 4 Chapters/Books, 18 Scientific publications in journals, 32 Scientific publications in conferences, 50 Presentations at conferences and workshops, 5 Workshops organized by ANDANTE, 5 Technical fairs and conferences, 2 Videos
ASIC 1.2 SNN processor with on-chip learning: a) Energy-efficient self-supervised on-chip learning; b) Dynamic sparse training; c) Self-timed interface. For keyword spotting (KWS), the energy per sample (inference/learning) is 2.2 µJ / 17.8 nJ @ 0.5 V, 13 MHz
Impact: Digital SNN for ultra-low-power inference and learning on low-dimensional signals
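To put the keyword-spotting figure in perspective (the inference rate below is an assumption for illustration, not a project number): at 10 inferences per second, 2.2 µJ per inference corresponds to an average power of

\[
P_{\text{avg}} \approx 2.2\ \mu\text{J} \times 10\ \text{s}^{-1} = 22\ \mu\text{W},
\]

which is consistent with the µW-class consumption quoted for the SNN ASICs in WP4.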
ASIC 1.3 Audio processing front-end for SNNs: power consumption 24× lower than an ARM Cortex-M0+ in 40LP technology (M0+: 13 mW)
Impact: The SNN core will be commercialized
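The stated 24× reduction implies, for the quoted 13 mW ARM M0+ reference, a front-end power on the order of

\[
\frac{13\ \text{mW}}{24} \approx 0.54\ \text{mW}.
\]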
SoC 1.1 Cortex-M55 with AI: Arm Cortex®-M55 core: • 1280 DMIPS / 3360 CoreMark, • 75× the performance of the STM32H7 @ 480 MHz, • 25× the performance of the STM32MP1 dual Cortex-A7 @ 800 MHz
Impact: For low-cost or resource-constrained devices
ASIC 2.1 NeuroCorgi: Digital Feature Extractor: Ultra-low power ImageNet inference with only 23.2 mW and 6.9 ms on HD images at 30 FPS
Impact: A family of circuits will follow
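If the 23.2 mW is read as the average power while processing HD images at 30 FPS (one possible reading of the figure, not stated explicitly above), the energy per frame is roughly

\[
\frac{23.2\ \text{mW}}{30\ \text{s}^{-1}} \approx 0.77\ \text{mJ per frame}.
\]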
SoC 2.1 Visage2: CNN inference accelerator engine with RISC-V: • End-to-end ML inference at the edge with hierarchical computing
Impact: CSEM IP library for Edge AI
ASIC 3.1 ADELIA Gen2: Multi-core CNN inference accelerator with in-memory computing (SRAM): • Energy per inference for the voice-activity use case: 200 nJ with > 80% accuracy
Impact: Baseline for further accelerator development
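As a rough illustration of what 200 nJ per inference means (the 1 Wh energy budget is an arbitrary assumption and counts the accelerator alone):

\[
\frac{1\ \text{Wh}}{200\ \text{nJ}} = \frac{3600\ \text{J}}{2\times 10^{-7}\ \text{J}} = 1.8\times 10^{10}\ \text{inferences}.
\]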
ASIC 3.1b: CNN accelerator with in-memory computing (FeFETs) + RISC-V: • Looking into efficient integration of the concepts into sensor-node circuits
Impact: Researching FeFET memory in the context of analog NN and tinyML applications
ASIC 3.2: Inference analog NN with RRAM: • Researching RRAM memory in the context of analog NN tinyML applications at the extreme edge
Impact: Basis for developing smaller, smarter and more efficient sensors and microcontrollers