CORDIS - EU research results

Provably Efficient Algorithms for Large-Scale Reinforcement Learning

Project description

Scaling up theoretically sound reinforcement learning

Reinforcement learning (RL) is a subfield of machine learning concerned with how intelligent agents interact with unknown environments to maximise their rewards. The potential application of RL techniques to challenging real-world problems, such as autonomous vehicle control or smart energy grids, has brought significant attention to the field. However, state-of-the-art RL algorithms are not yet applicable in the most promising domains, largely due to a lack of formal performance guarantees. The EU-funded SCALER project aims to address this challenge by taking a principled approach to developing a new generation of provably efficient and scalable reinforcement learning algorithms. The methodology is based on identifying novel structural properties of large-scale Markov decision processes that enable computationally and statistically efficient learning.

Objective

Reinforcement learning (RL) is an intensely studied subfield of machine learning concerned with sequential decision-making problems in which a learning agent interacts with an unknown, reactive environment while attempting to maximise its rewards. In recent years, RL methods have gained significant popularity as the key technique behind some spectacular breakthroughs in artificial intelligence (AI) research, which has renewed interest in applying them to challenging real-world problems such as the control of autonomous vehicles or smart energy grids. While the RL framework is clearly suitable for addressing such problems, the applicability of the current generation of RL algorithms is limited by a lack of formal performance guarantees and by very low sample efficiency.

This project proposes to address this problem and advance the state of the art in RL by developing a new generation of provably efficient and scalable algorithms. Our approach is based on identifying structural assumptions on Markov decision processes (MDPs, the main modelling tool used in RL) that enable computationally and statistically efficient learning. Specifically, we will focus on MDP structures induced by various approximation schemes, including value-function approximation and relaxations of the linear-program formulation of optimal control in MDPs. Building on this view, we aim to develop a variety of new tools for designing and analysing RL algorithms and to achieve a deep understanding of fundamental performance limits in structured MDPs.

While our main focus will be on rigorous theoretical analysis of algorithms, most of our objectives are inspired by practical concerns, particularly the question of scalability. As a result, we expect the proposed research to have significant impact on both the theory and practice of reinforcement learning, bringing RL methods substantially closer to practical applicability.
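The linear-program formulation of optimal control referred to above is a classical object; as a rough sketch (not the project's own notation), for a discounted MDP with state space S, action space A, reward function r, transition kernel P, discount factor γ and initial-state distribution μ, the optimal value function V* solves

```latex
\min_{V \in \mathbb{R}^{S}} \; \sum_{s \in S} \mu(s)\, V(s)
\quad \text{s.t.} \quad
V(s) \;\ge\; r(s,a) + \gamma \sum_{s' \in S} P(s' \mid s, a)\, V(s')
\qquad \forall (s,a) \in S \times A .
```

The approximation schemes the project names can be read against this template: value-function approximation restricts V to a parametric family (for instance V = Φw for a feature matrix Φ), while relaxations of the linear program keep only a tractable subset of the constraints; both reduce the effective dimension of the problem in large-scale MDPs.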

Host institution

UNIVERSIDAD POMPEU FABRA
Net EU contribution
€ 1 493 990,00
Address
PLACA DE LA MERCE, 10-12
08002 Barcelona
Spain

Region
Este Cataluña Barcelona
Activity type
Higher or Secondary Education Establishments
Total cost
€ 1 493 990,00

Beneficiaries (1)