CORDIS - Results of EU-supported research

Faster and More Energy Efficient Machine Learning for Embedded Systems

Periodic Reporting for period 1 - EmbeDL (Faster and More Energy Efficient Machine Learning for Embedded Systems)

Reporting period: 2023-04-01 to 2024-03-31

Embedl's EIC Accelerator project aims to scale its advanced AI technology business. This is done by streamlining development and sales for scalability, by continuously developing and improving the world-leading model optimization SDK product, and by broadening the product portfolio with additional products. The project also includes significant efforts to reach a global market and to position Embedl as a leader in edge AI.

Project Context
The current state of the art in deep learning optimization on embedded systems relies on deep learning experts. However, such experts are scarce and expensive, and manual optimization is time-consuming and does not scale. In addition, hardware updates and deep learning model updates (device functionality updates) are very frequent, so manual optimization must be repeated from scratch each time. This makes the approach highly unsustainable.


Objectives
Embedl automatically compresses and optimises DL models for specific hardware. The benefits for users are clear:
→ Significantly reduced optimisation time (thus reducing cost and the resources needed)
→ Increased device performance (a larger DL model runs on the same device), with up to a 1000% increase in execution speed
→ Decreased device energy consumption (up to 90%)
→ Potential for significant cost savings by running the same DL model on cheaper hardware (up to 44% reduction in hardware price)


Pathway, how, plan
The technical work packages in the project focus on four major areas:
Improve software development infrastructure to scale with more customers, more developers and more products. (ongoing and partly complete)
Technical advances in the product portfolio.

How the results are expected to contribute
Strengthening our product offering and our ability to support many more, and more diverse, customers will contribute to our goal of becoming a world-leading supplier of embedded AI development tools.
We have improved the scalability of our software development infrastructure and implemented several security policies.


Our flagship product, the Model Optimization SDK, has been significantly improved and extended with Neural Architecture Search and Quantization support. The supported hardware platforms have been extended with Texas Instruments (TIDL), STMicro (STM32Cube.AI) and Qualcomm (QNN).
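To make the quantization support mentioned above concrete, the sketch below shows the general idea behind post-training int8 quantization: mapping float32 weights to 8-bit integers with a per-tensor scale, which shrinks model storage by 4x at the cost of a small rounding error. This is a minimal, generic illustration in NumPy, not Embedl's SDK; all function names here are hypothetical.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor quantization: float32 -> int8 plus one scale factor."""
    scale = float(np.abs(weights).max()) / 127.0
    if scale == 0.0:
        scale = 1.0  # all-zero tensor; any scale works
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original float32 weights."""
    return q.astype(np.float32) * scale

# A toy 256x256 weight matrix standing in for one layer of a DL model.
w = np.random.randn(256, 256).astype(np.float32)
q, scale = quantize_int8(w)

# int8 storage is 4x smaller than float32: 65536 vs 262144 bytes here.
print(q.nbytes, w.nbytes)

# Worst-case rounding error is bounded by half the quantization step.
err = float(np.abs(w - dequantize(q, scale)).max())
```

Real toolchains (e.g. TIDL, STM32Cube.AI, QNN targets) additionally calibrate activation ranges and use per-channel scales, but the storage and error trade-off is the same in principle.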


We have developed an MVP of a new product, and successfully used it in customer projects.
Increasing access to markets and finance is essential, and we have invested significant effort in both during the first year of the project. We have started a dialogue with multiple investors ahead of our upcoming funding round.


To increase market awareness we have been active on-site at a number of business events in Europe and the US, where we have presented on stage, hosted booths, or simply visited and networked. We have also significantly increased our online visibility with blog posts, webinars, LinkedIn posts, and online advertising.