Workshop on 'Bayesian optimisation, experimental design and bandits', Sierra Nevada, Spain
The past few years have seen important advances in learning approaches for sequential decision-making. These have occurred in different communities, which use different terminology: Bayesian optimisation, experimental design, bandits (X-armed bandits, contextual bandits, Gaussian process bandits), active sensing, personalised recommender systems, automatic algorithm configuration, reinforcement learning and so on.
The same communities also tend to use different methodologies. Some, for example, focus more on practical performance, while others are more concerned with the theoretical aspects of a problem. As a result, there is a diverse range of methods for trading off exploration and exploitation in learning.
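To make the exploration/exploitation trade-off concrete, the sketch below implements the classic UCB1 bandit rule (not a method prescribed by the workshop, just an illustration): each arm's score is its empirical mean reward plus a confidence bonus that shrinks as the arm is pulled more, so under-explored arms keep getting tried while good arms are exploited. The arm means, noise level and horizon are arbitrary choices for the example.

```python
import math
import random

def ucb1_index(total_reward, n_pulls, t, c=2.0):
    # Empirical mean plus an exploration bonus; the bonus shrinks as an
    # arm is pulled more, which is the exploration/exploitation trade-off.
    return total_reward / n_pulls + c * math.sqrt(math.log(t) / n_pulls)

def run_bandit(arm_means, horizon, seed=0):
    rng = random.Random(seed)
    k = len(arm_means)
    pulls = [0] * k        # how often each arm was chosen
    rewards = [0.0] * k    # cumulative reward per arm
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1    # pull each arm once to initialise
        else:
            arm = max(range(k),
                      key=lambda a: ucb1_index(rewards[a], pulls[a], t))
        r = arm_means[arm] + rng.gauss(0.0, 0.1)  # noisy reward
        pulls[arm] += 1
        rewards[arm] += r
    return pulls

# After 500 rounds, the best arm (mean 0.8) dominates the pull counts.
pulls = run_bandit([0.2, 0.5, 0.8], horizon=500)
```

Other communities reach the same trade-off through different routes, e.g. acquisition functions in Bayesian optimisation or optimal design criteria in experimental design.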
The event will seek to bring together stakeholders in order to identify differences and commonalities, propose common benchmarks, review the many practical applications and narrow the gap between theory and practice. For further information, please visit the workshop website.