Project description
Steps towards trustworthy machine learning
Machine learning (ML) is set to contribute to advances in the deep decarbonization of the energy sector. Its ability to learn and deliver solutions in complex environments positions it to dramatically transform power systems. However, emerging EU verification standards will require that all ML and Reinforcement Learning (RL) technologies used in safety-critical applications be demonstrably trustworthy. The EU-funded TRUST-ML project will develop a unified framework for assessing the quantitative trustworthiness of the neural network models commonly used in power systems. TRUST-ML will adopt a novel convex optimization approach to assess the trustworthiness of ML solutions in terms of performance, robustness, and interpretability. The project also aims to meet the emerging needs of actual power systems.
Objective
Deep decarbonization of the energy sector will require massive penetration of stochastic renewable energy resources and an enormous amount of grid asset coordination; this represents a challenging paradigm for power system operators. With its ability to learn in complex environments and provide predictive solutions on fast timescales, machine learning (ML) is poised to help overcome these challenges and dramatically transform power systems in the coming decades. Emerging EU verification standards, however, will require that all ML and Reinforcement Learning (RL) used in safety-critical applications be demonstrably trustworthy. In this project, we develop a unified framework, known as Trust-ML, for assessing the quantitative trustworthiness of the neural network models commonly used in power systems. Trust-ML uses a novel convex optimization approach to assess ML trustworthiness across three key dimensions: performance, robustness, and interpretability. The approach is engineered to be scalable and, by design, it generates exact verification guarantees. Furthermore, Trust-ML is designed to meet the emerging needs of actual power systems. In particular, it can rigorously verify the performance of multi-agent RL systems, and its relaxed counterpart can offer tractable, worst-case performance guarantees in the context of online learning. The resulting verification tools will be published as open-source software packages and shared widely with researchers and industry. This project will advance state-of-the-art methods across several interdisciplinary fields, help remove the barriers associated with machine learning deployment in power systems, and help keep European power grids competitive. Coming from MIT with advanced training in power systems, the project PI, Samuel Chevalier, is exceptionally well-suited to build Trust-ML, and his team of advisors represents a mixture of experts across power, optimization, and learning.
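To make the idea of neural network verification concrete, the sketch below shows interval bound propagation, one of the simplest members of the bound-propagation/convex-relaxation family of verifiers that Trust-ML builds on. It computes sound output bounds for a ReLU network over a box of inputs; the toy weights are invented for illustration, and this is not the project's actual convex optimization formulation.

```python
import numpy as np

def interval_bound_propagation(weights, biases, x_lo, x_hi):
    """Propagate the input box [x_lo, x_hi] through a ReLU network,
    returning sound elementwise bounds on the network output.
    Positive weight entries pick up the same-side bound; negative
    entries pick up the opposite-side bound."""
    lo, hi = x_lo, x_hi
    for i, (W, b) in enumerate(zip(weights, biases)):
        W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
        new_lo = W_pos @ lo + W_neg @ hi + b
        new_hi = W_pos @ hi + W_neg @ lo + b
        if i < len(weights) - 1:  # ReLU on hidden layers only
            new_lo, new_hi = np.maximum(new_lo, 0.0), np.maximum(new_hi, 0.0)
        lo, hi = new_lo, new_hi
    return lo, hi

# Hypothetical toy network: 2 inputs -> 3 hidden ReLU units -> 1 output.
W1 = np.array([[1.0, -1.0], [0.5, 0.5], [-1.0, 2.0]])
b1 = np.array([0.1, -0.2, 0.0])
W2 = np.array([[1.0, 1.0, -0.5]])
b2 = np.array([0.0])

# Verify behaviour over all inputs within +/- 0.1 of the origin.
x_lo = np.array([-0.1, -0.1])
x_hi = np.array([0.1, 0.1])
out_lo, out_hi = interval_bound_propagation([W1, W2], [b1, b2], x_lo, x_hi)
print(out_lo, out_hi)  # every input in the box maps inside [out_lo, out_hi]
```

The bounds are guaranteed (no input in the box can violate them) but generally loose; tighter convex relaxations of the ReLU, solved as linear or semidefinite programs, trade computation for precision, which is the trade-off scalable verification frameworks navigate.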
Field of science
- natural sciences > computer and information sciences > software
- engineering and technology > environmental engineering > energy and fuels > renewable energy
- natural sciences > computer and information sciences > artificial intelligence > machine learning > reinforcement learning
- engineering and technology > electrical engineering, electronic engineering, information engineering > electrical engineering > power engineering > electric power transmission
- natural sciences > computer and information sciences > artificial intelligence > computational intelligence
Keywords
Programme(s)
- HORIZON.1.2 - Marie Skłodowska-Curie Actions (MSCA) Main Programme
Funding scheme
MSCA-PF - MSCA-PF
Coordinator
2800 Kongens Lyngby
Denmark