CORDIS - EU research results

Trustworthy Efficient AI for Cloud-Edge Computing

Periodic Reporting for period 1 - MANOLO (Trustworthy Efficient AI for Cloud-Edge Computing)

Reporting period: 2024-01-01 to 2025-06-30

The EU is undergoing rapid digitalisation across all sectors. Policy targets aim for an ambitious digital landscape by 2030, with 75% of EU companies using Cloud, AI and/or Big Data and 10,000 climate-neutral, highly secure edge nodes deployed. Achieving these targets, and with them fostering resilience and industrial leadership in Europe, requires significant steps and attention to sustainability, including energy and resource optimisation, without environmental harm and always with the human factor at the core. The growing use of Artificial Intelligence (AI) has already transformed many industries, enabling new capabilities and driving economic growth. However, the energy consumption and data requirements for training AI systems are becoming unsustainable and their environmental impact cannot be ignored, while end-users’ trust appears to be in decline. In fact, in a growing number of cases cloud or high-performance computing is required to train high-performance AI models, even as more and more use cases demand AI deployed in a distributed manner at the edge, most often across devices with very different capabilities and limitations.
But what if it were possible to leverage the benefits of both cloud and edge computing at the same time by optimising the use of resources and AI models? What if this could be done in a trustworthy way, taking hardware, software and network specifications, capabilities and limitations into consideration? Finally, what if we could develop novel algorithms to train lighter AI models and incorporate highly specialised edge hardware accelerators, in particular neuromorphic chips, within a cloud-edge continuum? We could drastically improve the overall performance of AI applications and reduce their carbon footprint, while preserving human values, bringing Europe’s Digital Decade targets one step closer.

The overall objective of MANOLO is to deliver a complete and trustworthy stack of algorithms and tools that help AI practitioners and their systems achieve better efficiency and seamless optimisation of the operations, resources and data required to train, deploy and run high-quality, lighter AI models in both centralised and cloud-edge distributed environments. To achieve this and create impact, MANOLO will:
1- Design next-generation hardware-aware AI algorithms using energy-performance model architecture optimisation via novel approaches in compression, meta-adaptive learning, neural network search and growth
2- Implement a trustworthy framework for i) data management to guarantee traceability, security, and reproducibility of data, models and metadata, and ii) generation of high-quality compressed (meta)data to support the development of novel data-efficient AI algorithms
3- Introduce future-proof trustworthy AI algorithms to evaluate explainability and robustness of models and their efficiency through a holistic end-to-end benchmarking framework
4- Optimise and automate the allocation of efficient AI models, functions (training and inference) and data in the Cloud-Edge continuum according to requirements and constraints of resources and infrastructures
5- Ensure AI trustworthiness and legal compliance development and operation by-design
6- Demonstrate, evaluate, and validate MANOLO across diverse AI-paradigms and multidimensional use cases under lab stress testing and realistic conditions in relevant environments
7- Establish synergies & collaboration activities while also exchanging knowledge and driving the sustainable exploitation of results in line with the objectives of the AI, Data and Robotics Partnership
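Objective 4 above, allocating AI models and functions across the cloud-edge continuum under resource constraints, can be illustrated with a toy greedy allocator. Everything below (the device names, the capacity fields, the energy-based scoring rule) is an illustrative assumption for exposition, not MANOLO's actual scheduler:

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    mem_mb: int              # free memory on the node
    energy_per_gflop: float  # relative energy cost of compute on this node

@dataclass
class Task:
    name: str
    mem_mb: int    # memory the model needs
    gflops: float  # compute demand of the workload

def allocate(tasks, nodes):
    """Greedy sketch: place each task on the feasible node with the
    lowest estimated energy cost, reserving memory as we go."""
    placement = {}
    for task in sorted(tasks, key=lambda t: -t.gflops):  # big tasks first
        feasible = [n for n in nodes if n.mem_mb >= task.mem_mb]
        if not feasible:
            placement[task.name] = None  # no node can host this task
            continue
        best = min(feasible, key=lambda n: n.energy_per_gflop * task.gflops)
        best.mem_mb -= task.mem_mb       # reserve the memory on that node
        placement[task.name] = best.name
    return placement

nodes = [Node("edge-cam", 512, 0.5), Node("cloud-gpu", 16384, 1.0)]
tasks = [Task("detector", 4096, 50.0), Task("keyword-spotter", 128, 0.5)]
print(allocate(tasks, nodes))
# -> {'detector': 'cloud-gpu', 'keyword-spotter': 'edge-cam'}
```

A real continuum allocator would also weigh network transfer, latency and policy constraints; the greedy energy rule here only conveys the shape of the problem.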
One of the main achievements so far is the design of the MANOLO architecture as a novel framework for benchmarking AI workloads in an efficient and trustworthy manner in cloud-edge environments. This architecture integrates the set of components and functionality that make up the MANOLO library and suite; it operates in a distributed fashion across the network, running and benchmarking AI workloads on heterogeneous devices and settings, while aggregating results and offering recommendations for efficient and trustworthy optimisation.

For this purpose, MANOLO is pushing the state of the art by developing a collection of complementary algorithms for training, understanding, compressing and optimising machine learning models, advancing research in data quality evaluation and generation, data compression, model compression, meta-learning (few-shot learning) and domain adaptation, frugal neural network search and growth, and neuromorphic models. Complementarily, novel dynamic algorithms for data- and energy-efficient, policy-compliant allocation of AI tasks to assets and resources in the cloud-edge continuum are being designed, without sacrificing performance and allowing for trustworthy, widespread deployment.
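Of the compression techniques in that research area, magnitude pruning is among the simplest to sketch: drop the weights whose absolute value contributes least. The function below is a minimal, framework-free illustration under that assumption, not the project's compression algorithm:

```python
def magnitude_prune(weights, sparsity):
    """Zero out the `sparsity` fraction of weights with the smallest
    magnitude (a basic post-training pruning step)."""
    if not 0.0 <= sparsity < 1.0:
        raise ValueError("sparsity must be in [0, 1)")
    k = int(len(weights) * sparsity)          # how many weights to remove
    # indices of the k smallest-magnitude weights
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    drop = set(order[:k])
    return [0.0 if i in drop else w for i, w in enumerate(weights)]

w = [0.9, -0.05, 0.4, 0.01, -0.7, 0.03]
print(magnitude_prune(w, 0.5))
# -> [0.9, 0.0, 0.4, 0.0, -0.7, 0.0]
```

In practice the zeroed weights are stored in a sparse format, which is where the memory and energy savings on constrained edge devices come from.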

To support these activities, a data management framework for distributed tracking of assets and their provenance (data, models, algorithms) has been developed; it is also the foundation of the MANOLO benchmarking framework, which monitors, evaluates and compares new AI algorithms, workloads and deployments. Explainability, robustness and security mechanisms are being developed to evaluate and augment the trustworthiness of the models and the system. In addition, the project and the system adhere by design to the Trustworthy AI principles, via an adaptation of the Z-Inspection methodology using socio-technical scenario workshops, and will help AI systems conform to the new AI Act regulation.
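The idea behind tracking assets and their provenance can be sketched as a hash-chained registry: each dataset or model entry records its content hash, its upstream assets, and the hash of the previous entry, so any tampering with the log is detectable. The class and field names below are hypothetical illustrations, not MANOLO's data management framework:

```python
import hashlib
import json

class AssetRegistry:
    """Minimal provenance log: entries are chained by hash so that
    modifying any earlier record invalidates the chain."""

    def __init__(self):
        self.entries = []

    def register(self, name, kind, content: bytes, parents=()):
        prev = self.entries[-1]["entry_hash"] if self.entries else "0" * 64
        record = {
            "name": name,
            "kind": kind,                  # e.g. "dataset" or "model"
            "content_hash": hashlib.sha256(content).hexdigest(),
            "parents": list(parents),      # names of upstream assets
            "prev": prev,                  # hash link to the previous entry
        }
        record["entry_hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        self.entries.append(record)
        return record["entry_hash"]

    def verify(self):
        """Recompute every entry hash and chain link; False if tampered."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "entry_hash"}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or recomputed != e["entry_hash"]:
                return False
            prev = e["entry_hash"]
        return True

reg = AssetRegistry()
reg.register("raw-images", "dataset", b"raw image bytes")
reg.register("classifier-v1", "model", b"model weights", parents=["raw-images"])
print(reg.verify())  # -> True
```

The `parents` field is what makes lineage queries (which data a model was trained on) possible; the hash chain is what makes the record trustworthy and reproducible.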

The MANOLO framework will be deployed as a toolset and tested in lab environments via Use Cases with different distributed AI paradigms within cloud-edge continuum settings; it will be validated in verticals such as healthcare, manufacturing and telecommunications, aligned with ADRA-identified market opportunities, and on a granular set of embedded devices covering robotics, smartphones and IoT, as well as neuromorphic chips.
Results include the design of the MANOLO architecture and framework, created with Trustworthy AI (TAI) principles in mind. It offers tools to achieve trustworthy and efficient AI workloads in cloud-edge settings, built on a novel distributed architecture for AI workload execution and benchmarking with security and traceability of assets at its core, together with embedded monitoring techniques and explainability algorithms. The design followed the Z-Inspection methodology for TAI, which has been tested and further enhanced within the MANOLO project.

Regarding research in the key components and areas that make up MANOLO, a range of algorithms have been developed that push the state of the art and will soon be aggregated in the MANOLO library/suite. Notable results include novel techniques for evaluating how noisy a data sample is, algorithms for distilling data into enhanced (synthetic) datasets, aggregation of multiple compression techniques into a single compression task, a state-of-the-art network architecture search methodology, a pioneering neural network growth algorithm, and techniques for optimising spiking neural networks for neuromorphic chips.
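One of the research threads above, evaluating how noisy a data sample is, is often approached in the literature with a small-loss criterion: samples a trained model persistently fits poorly are more likely to carry label noise. The toy scorer below assumes a list of per-sample losses and an estimated noise rate; it is a generic sketch of that idea, not MANOLO's technique:

```python
def flag_noisy(losses, noise_rate):
    """Small-loss criterion: flag the `noise_rate` fraction of samples
    with the highest training loss as likely label noise."""
    if not 0.0 <= noise_rate <= 1.0:
        raise ValueError("noise_rate must be in [0, 1]")
    k = int(len(losses) * noise_rate)  # number of samples to flag
    ranked = sorted(range(len(losses)), key=lambda i: losses[i], reverse=True)
    return set(ranked[:k])             # indices of suspected noisy samples

losses = [0.1, 2.3, 0.2, 0.15, 1.9]   # per-sample training losses
print(flag_noisy(losses, 0.4))
# -> {1, 4}
```

Flagged samples can then be relabelled, down-weighted or dropped, which is one route to the lighter, data-efficient training the project targets.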