Periodic Reporting for period 2 - CLariNet (A novel control paradigm for large-scale hybrid networks)
Reporting period: 2023-04-01 to 2024-09-30
Control of large-scale NHDs is a highly complex problem due to the large size of the networks, the presence of disturbances, and the hybrid dynamics, all under limited computation time. State-of-the-art control methods are not suited for large-scale NHDs, as they either suffer from computational tractability issues or impose additional restrictions that significantly reduce performance. To address this problem, I will create a new on-line control paradigm for large-scale NHDs based on an innovative integration of multi-agent optimization-based and learning-based control, uniting the optimality of optimization-based control with the on-line tractability of learning-based control. I will bridge the gap between optimization-based and learning-based control for NHDs through multi-scale, multi-resolution piecewise affine models, explicit consideration of the graph structure of the network, my experience in both optimization-based control and learning-based decision making, and an interdisciplinary integration of approaches from systems and control, computer science, and optimization.
This will result in systematic, very reliable, highly scalable, high-performance on-line control methods for large-scale NHDs. I will demonstrate their feasibility, benefits, and impact for green multi-modal transportation networks and smart multi-energy networks.
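To make the notion of a piecewise affine (PWA) model concrete, the following is a minimal, hedged sketch: a toy one-dimensional, two-mode PWA system of the form x(k+1) = a_i x(k) + b_i u(k) + f_i, where the active mode i depends on the region of the state space containing x(k). All coefficients and region boundaries are illustrative assumptions, not taken from the project.

```python
def pwa_step(x, u):
    """One step of a toy 1-D, two-mode piecewise affine system.

    The mode (and hence the affine dynamics) switches depending on the
    sign of the state; all numerical values are illustrative only.
    """
    if x >= 0.0:            # mode 1: region x >= 0
        a, b, f = 0.8, 0.5, 0.0
    else:                   # mode 2: region x < 0
        a, b, f = 1.1, 0.5, -0.1
    return a * x + b * u + f

# Simulate a few steps under a simple state-feedback law u = -0.4 x.
x = 1.0
traj = [x]
for _ in range(5):
    x = pwa_step(x, -0.4 * x)
    traj.append(x)
```

Even this toy example shows why hybrid dynamics complicate control: the mode sequence along a trajectory is itself a discrete decision variable, which is what makes optimization over PWA models combinatorial in general.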
The first group of 3 PhD students and the postdoc started in Fall 2021; a second group of 3 PhD students started in Fall 2022. Together with the PI, they have worked on RL1-6 and RL8.
In the meantime, the first achievements have been realized, including:
- new methods for learning-based control of PWA systems with constraints (RL1, published in Automatica);
- new network metrics and generalized partitioning algorithms for large-scale networked systems (RL2);
- an efficient control method for systems with real-valued and discrete dynamics, based on reinforcement/supervised learning for the discrete actions and on-line optimization using real-valued linear programming for the continuous ones (RL3);
- a multi-agent reinforcement learning approach for large-scale systems using distributed MPC as a function approximator (RL4);
- a scenario reduction algorithm endowed with performance and feasibility guarantees for uncertain linear systems (RL5);
- a theoretical performance bound for uncertain linear systems, comparing an MPC controller that uses an estimated model against the ideal infinite-horizon optimal controller with knowledge of the true system (RL6, published in IEEE Control Systems Letters).
As regards the applications (RL8): in the field of intelligent transportation systems, several methods for integrated learning-based and optimization-based control have been developed and demonstrated in simulation (published in Control Engineering Practice). In the field of smart energy systems, we have created a publicly available benchmark (including software code and a dataset) for testing distributed control techniques on the electricity network of the European Economic Area, called the European Economic Area Electricity Network Benchmark (EEA-ENB).
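The split exploited in the RL3-type methods, where learning handles the discrete decisions and on-line optimization handles the continuous ones, can be illustrated with a hedged toy sketch (this is not the project's implementation): a stand-in "learned" selector picks the discrete mode, after which the continuous input minimizes a simple convex cost min_u (a*x + b*u)^2 + r*u^2 in closed form. The selector rule, the mode-dependent gains, and all numbers are illustrative assumptions.

```python
def select_mode(x):
    """Stand-in for a learned discrete-action policy (e.g. a classifier)."""
    return 0 if x >= 0 else 1

def continuous_input(x, mode, a=0.9, r=0.1):
    """Continuous subproblem for the chosen mode: minimize
    (a*x + b*u)**2 + r*u**2 over u, solved in closed form by setting
    the derivative 2*b*(a*x + b*u) + 2*r*u to zero."""
    b = (0.5, -0.5)[mode]            # mode-dependent input gain (assumed)
    return -b * a * x / (b * b + r)

# Example: state x = 2.0 -> discrete mode via the learned stub,
# continuous input via the convex subproblem.
x = 2.0
m = select_mode(x)
u = continuous_input(x, m)
```

The point of the decomposition is that once the discrete choice is fixed by the learned policy, the remaining continuous problem is convex (here solved analytically; in the project, via real-valued linear programming), avoiding the combinatorial explosion of optimizing over discrete and continuous variables jointly.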
My team and I will tackle three major research questions that must be addressed to obtain systematic, efficient, reliable, safe, and scalable multi-agent integrated optimization-based and learning-based (IOL) control methods for large-scale NHDs:
C1: How to deal with the complexity of the NHD control problem, and how to obtain a balanced trade-off between tractability and performance?
C2: How to effectively integrate optimization-based and learning-based control methods for NHDs in such a way that the advantages of both methods are preserved?
C3: How to obtain coordination among the IOL control agents in such a way that all the control agents together contribute to the efficient, cost-effective, and reliable operation of the entire system?