Project description
Robust deep learning that ensures safe and fair autonomous agents
Machine learning is a field of computer science focused on 'teaching' computers to perform tasks without being explicitly programmed to do so. Although its origins date back to the 1950s, it has advanced tremendously in the last couple of decades. Deep learning takes it one step further, applying machine learning techniques to artificial neural networks whose design is loosely inspired by the structure of the human brain. These algorithms will be fundamental to the increasingly autonomous behaviour of machines and devices, and to their human-like interactions with people, under the umbrella of the Internet of Things and Industry 4.0. The EU-funded FUN2MODEL project will develop a novel framework to ensure that complex decisions are made with fairness and safety in mind.
Objective
Machine learning is revolutionising computer science and AI. Much of its success is due to deep neural networks, which have demonstrated outstanding performance in perception tasks such as image classification. Solutions based on deep learning are now being deployed in real-world systems, from virtual personal assistants to self-driving cars. Unfortunately, the black-box nature and instability of deep neural networks are raising concerns about the readiness of this technology. Efforts to address the robustness of deep learning are emerging, but they are limited to simple properties and to function-based perception tasks that learn data associations. While perception is an essential feature of an artificial agent, achieving beneficial collaboration between human and artificial agents requires models of autonomy, inference, decision making, control and coordination that go significantly beyond perception. To address this challenge, this project will capitalise on recent breakthroughs by the PI and develop a model-based, probabilistic reasoning framework for autonomous agents with cognitive aspects, which supports reasoning about their decisions, agent interactions and inferences that capture cognitive information, in the presence of uncertainty and partial observability. The objectives are to develop novel probabilistic verification and synthesis techniques that guarantee safety, robustness and fairness for complex decisions based on machine learning; to formulate a comprehensive, compositional game-based modelling framework for reasoning about systems of autonomous agents and their interactions; and to evaluate the techniques on a variety of case studies.
Addressing these challenges will require a fundamental shift towards Bayesian methods and the development of new, scalable techniques that differ from conventional probabilistic verification. If successful, the project will result in major advances in the quest towards provably robust and beneficial AI.
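To make the notion of probabilistic verification concrete, the following is a minimal illustrative sketch, not code from the project: it checks a toy safety property of the form 'the probability of eventually reaching an unsafe state does not exceed a threshold' on a small hand-written discrete-time Markov chain, using fixed-point (value) iteration. The model, its transition probabilities and the threshold are all invented for illustration; mature probabilistic model checkers such as PRISM implement this kind of analysis at scale.

```python
# Illustrative only: a toy probabilistic verification query on a
# hand-written discrete-time Markov chain (all numbers invented).
# Property checked: P(eventually reach an unsafe state) <= threshold.

# States: 0 = operating, 1 = degraded, 2 = unsafe (absorbing), 3 = recovered (absorbing).
P = {
    0: {0: 0.90, 1: 0.10},
    1: {0: 0.20, 1: 0.50, 2: 0.10, 3: 0.20},
    2: {2: 1.00},
    3: {3: 1.00},
}
UNSAFE = {2}

def reach_probability(chain, targets, tol=1e-10, max_iter=1_000_000):
    """Least fixed point of x[s] = sum_t chain[s][t] * x[t], with x = 1 on targets."""
    x = {s: (1.0 if s in targets else 0.0) for s in chain}
    for _ in range(max_iter):
        x_new = {
            s: 1.0 if s in targets else sum(p * x[t] for t, p in succ.items())
            for s, succ in chain.items()
        }
        if max(abs(x_new[s] - x[s]) for s in chain) < tol:
            return x_new
        x = x_new
    return x

probs = reach_probability(P, UNSAFE)
threshold = 0.35  # invented safety requirement
print(f"P(reach unsafe | start in state 0) = {probs[0]:.4f}")
print("property holds" if probs[0] <= threshold else "property violated")
```

Synthesis, in the same spirit, inverts the query: rather than checking a fixed model, it searches for strategies or parameters that make such a property hold by construction.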
Fields of science
- engineering and technology > mechanical engineering > vehicle engineering > automotive engineering > autonomous vehicles
- natural sciences > mathematics > applied mathematics > statistics and probability > Bayesian statistics
- natural sciences > computer and information sciences > artificial intelligence > computer vision > image recognition
- natural sciences > computer and information sciences > artificial intelligence > machine learning > deep learning
- natural sciences > computer and information sciences > artificial intelligence > computational intelligence
Funding Scheme
ERC-ADG - Advanced Grant
Host institution
University of Oxford
OX1 2JD Oxford
United Kingdom