From FUnction-based TO MOdel-based automated probabilistic reasoning for DEep Learning

Periodic Reporting for period 2 - FUN2MODEL (From FUnction-based TO MOdel-based automated probabilistic reasoning for DEep Learning)

Reporting period: 2021-04-01 to 2022-09-30

Machine learning – the science of building systems from data – is revolutionising computer science and artificial intelligence (AI). Much of its success is due to deep neural networks, which have demonstrated outstanding performance in perception tasks such as image classification. Solutions based on deep learning are now being deployed in a multitude of real-world systems, from virtual personal assistants, through automated decision making in business, to self-driving cars.

While deep learning systems have been shown to match human perception in ‘narrow’ artificial intelligence tasks, they fall short of ‘strong AI’, which aims to match the level of human intelligence. Machine learning is an essential technology for enabling artificial agents, but it typically learns only associations expressed as non-linear functions and lacks the ability to reason causally about interventions, counterfactuals and ‘what if’ scenarios. Progress towards true artificial intelligence requires cognitive models of decision making that incorporate inference from data but go significantly beyond it, in that they account for agents’ cognitive state, beliefs, goals and intentions, as well as agent interactions, uncertainty and partial observability.

Deep neural networks have good generalisation properties, but are unstable with respect to so-called adversarial examples, where a small, intentional input perturbation causes a false prediction, often with high confidence. Adversarial examples are still poorly understood, but are attracting attention because of the potential risks to safety and security in applications, as demonstrated in the attached image, where a minor adversarial modification to an image of a traffic sign results in a dangerous misclassification. Additionally, well-publicised failures of deep learning systems, such as Uber’s fatal accident, Amazon’s automated hiring system and Microsoft’s bot, have raised concerns about their safety, fairness and transparency.
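
To make the phenomenon concrete, here is a minimal sketch of how such a perturbation can be constructed, in the spirit of the fast gradient sign method; the toy linear classifier (random weight matrix W) is a hypothetical stand-in for a trained deep network and is not taken from the project:

```python
import numpy as np

# Hypothetical stand-in for a trained deep network: a linear classifier
# mapping 10 input features to 3 class logits.
rng = np.random.default_rng(0)
W = rng.normal(size=(10, 3))
x = rng.normal(size=10)                      # a clean input

def predict(v):
    logits = W.T @ v
    return int(np.argmax(logits)), logits

label, logits = predict(x)
runner_up = int(np.argsort(logits)[-2])

# Gradient of the margin (true-class logit minus runner-up logit) with
# respect to the input; for a linear model this gradient is exact.
grad = W[:, label] - W[:, runner_up]

# Signed step, as in FGSM, just large enough to cross the boundary.
margin = logits[label] - logits[runner_up]
eps = 1.01 * margin / np.abs(grad).sum()
x_adv = x - eps * np.sign(grad)

print("clean prediction:      ", label)
print("adversarial prediction:", predict(x_adv)[0])
print("L-infinity perturbation:", eps)
```

For a real deep network the gradient is only a local approximation, but the same signed-step idea often suffices to flip the prediction with a perturbation imperceptible to humans.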

To enable ‘strong AI’ we need modelling frameworks for decision making for autonomous agents that encompass both statistical machine learning and logic-based reasoning. For such ‘strong AI’ to be safe, robust and fair, rigorous modelling and verification techniques are necessary to provide provable guarantees on the correct behaviour of the agents.

There are two major challenges that currently stand in the way. The first is conceptual: how should the interaction between human and artificial agents be designed so that it results in enhanced, mutually beneficial collaboration, and how can perception and cognitive aspects such as preferences and goals, which inevitably influence human decisions, be accounted for in these interactions? The second is primarily technical: how can we devise a comprehensive, unified modelling framework that supports scalable, compositional reasoning, in which both machine learning and logic-based reasoning are first-class citizens, and how can the key correctness properties be expressed and verified?

This project will develop a model-based, probabilistic reasoning framework for autonomous agents with cognitive aspects, which supports reasoning about their decisions, interactions and inferences that capture cognitive information, in the presence of uncertainty and partial observability. The FUN2MODEL project objectives are:

1. Develop automated probabilistic verification and synthesis techniques to guarantee safety, robustness and fairness for complex decisions based on machine learning.

2. Formulate a game-based modelling framework for studying systems of autonomous agents with cognitive aspects and their coordination.

3. Develop a probabilistic compositional framework for quantitative reasoning about the behaviour of systems of autonomous agents with cognitive aspects.

4. Implement and evaluate the techniques on a variety of case studies with respect to safety, trust, accountability and fairness.

The outcome will be a comprehensive set of theories, algorithms and software tools for modelling, verification and synthesis of collaborating human and artificial autonomous agents. The software developed as part of the project will be open source, built as an extension of PRISM (www.prismmodelchecker.org) where practically feasible, and the modelling language, online tutorial, case studies, demonstrations, publications and lectures will be made available for download.

If successful, the project will result in major advances in the quest towards provably robust and beneficial AI.

The FUN2MODEL project has made steady progress despite the Covid-19 pandemic striking just five months after the project started. The main advances to date are grouped into the following research themes (see the project website www.fun2model.org/researchthemes.php for more information):

• Safety, Robustness and Fairness Guarantees. Algorithms to compute adversarial robustness guarantees for neural networks and Gaussian process models, drawing on abstraction, branch-and-bound optimisation and convex relaxation techniques, have been developed and applied to safety assurance, fairness and Natural Language Processing (NLP). Safe execution of deep reinforcement learning systems has been enabled through formal (probabilistic) verification techniques. (A minimal sketch of interval-style bound propagation appears after this list.)
• Robustness Guarantees for Bayesian Neural Networks. This strand of the project developed methods for adversarial robustness guarantees for Bayesian neural networks, a probabilistic variant of neural networks that quantifies uncertainty over model parameters. The properties covered include safety, reach-avoid and certifiable robustness (lower bounds on the probability that the property holds).
• Efficient Robust Learning. We studied the feasibility and efficiency of robust learning against evasion attacks from the standpoint of computational learning theory, addressing the goal of provably correct synthesis for machine learning components. The work has focused on results that guarantee efficient robust learning by imposing appropriate distributional assumptions and bounding the adversary’s power.
• Tractable Causal Inference and Reasoning. Novel methods that exploit probabilistic circuits, such as sum-product networks, have been developed to compute provable guarantees on the robustness of Bayesian networks to causal interventions, also in the presence of data uncertainty, and to support structure learning for causality queries. (A toy probabilistic circuit is sketched after this list.)
• Multi-agent Coordination and Collaboration. A comprehensive set of techniques for verification and strategy synthesis for concurrent stochastic games has been developed and implemented in the PRISM-games model checker, covering finite- and infinite-horizon properties and a large class of probabilistic and reward objectives. It enables synthesis of Nash and correlated equilibria with respect to two optimality criteria, social welfare and social fairness. (The matrix-game building block underlying such analyses is sketched after this list.)
• Human-like Decision Making. A cognitive stochastic game model has been formulated and implemented using probabilistic programming, supporting model construction, inference and cognitive reasoning. A preliminary experiment has been designed and conducted to gain insight into social trust through the classical trust game, played by human participants against a custom bot whose decision making was driven by the cognitive game framework. (A toy simulation of the trust game closes this section.)
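
To illustrate the flavour of bound-based robustness guarantees from the first theme, the following sketch propagates an L-infinity input box through a small ReLU network using interval arithmetic; if the true class’s worst-case logit beats every other class’s best-case logit, local robustness is certified. The two-layer network, its random weights and the radii are hypothetical, and the project’s actual algorithms combine much tighter convex relaxations with abstraction and branch-and-bound:

```python
import numpy as np

def interval_affine(lo, hi, W, b):
    """Propagate the box [lo, hi] through the affine map x -> W @ x + b."""
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def certify(x, eps, layers, label):
    """True if every input within L-infinity radius eps of x keeps `label`."""
    lo, hi = x - eps, x + eps
    for i, (W, b) in enumerate(layers):
        lo, hi = interval_affine(lo, hi, W, b)
        if i < len(layers) - 1:               # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)
    # Certified iff the worst-case logit of `label` beats the best-case
    # logit of every other class (sound but possibly conservative).
    others = [j for j in range(len(lo)) if j != label]
    return bool(lo[label] > max(hi[j] for j in others))

# Hypothetical two-layer ReLU network: 4 inputs -> 8 hidden -> 3 classes.
rng = np.random.default_rng(1)
layers = [(rng.normal(size=(8, 4)), rng.normal(size=8)),
          (rng.normal(size=(3, 8)), rng.normal(size=3))]
x = rng.normal(size=4)
hidden = np.maximum(layers[0][0] @ x + layers[0][1], 0)
label = int(np.argmax(layers[1][0] @ hidden + layers[1][1]))

print("certified robust at eps=0.01:", certify(x, 0.01, layers, label))
print("certified robust at eps=0.5: ", certify(x, 0.5, layers, label))
```

A certificate at a given radius is a proof of robustness; failure to certify is inconclusive, which is where branch-and-bound refinement comes in.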
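
The appeal of probabilistic circuits such as sum-product networks (fourth theme) is that joint, marginal and normalisation queries reduce to a single bottom-up pass over the circuit. A minimal hand-built circuit over two binary variables, with illustrative (not project) structure and parameters, is sketched below:

```python
import math

def leaf(var, p_true):
    """Bernoulli leaf; omit `var` from the evidence to marginalise it out."""
    def f(e):
        if var not in e:
            return 1.0
        return p_true if e[var] else 1.0 - p_true
    return f

def product(*children):
    """Product node: children over disjoint variables (decomposability)."""
    return lambda e: math.prod(c(e) for c in children)

def weighted_sum(pairs):
    """Sum node: mixture of children over the same variables (smoothness)."""
    return lambda e: sum(w * c(e) for w, c in pairs)

# Mixture of two "contexts", each a product of independent Bernoullis.
spn = weighted_sum([
    (0.4, product(leaf("x1", 0.9), leaf("x2", 0.2))),
    (0.6, product(leaf("x1", 0.3), leaf("x2", 0.7))),
])

# Joint and marginal queries are single bottom-up evaluations.
print("P(x1=1, x2=1) =", spn({"x1": 1, "x2": 1}))
print("P(x1=1)       =", spn({"x1": 1}))         # x2 marginalised out
print("normalisation =", spn({}))                # should be 1.0
```

Because marginalisation is just setting a leaf to 1, queries that are intractable for general graphical models run in time linear in the circuit size.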
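
Verification and strategy synthesis for concurrent stochastic games (fifth theme) typically reduces, state by state, to solving small matrix games. As an illustration of that building block, the sketch below computes the value and a maximin randomised strategy of a zero-sum matrix game by linear programming; the payoff matrix (matching pennies) is illustrative, and PRISM-games itself handles full game graphs, equilibria and reward objectives:

```python
import numpy as np
from scipy.optimize import linprog

def solve_matrix_game(M):
    """Value and maximin strategy of a zero-sum matrix game.

    The row player maximises and the column player minimises M[i, j].
    Variables: row strategy p (n values) plus the game value v.
    """
    n, m = M.shape
    c = np.zeros(n + 1)
    c[-1] = -1.0                              # maximise v == minimise -v
    # For every column j:  v - sum_i p_i * M[i, j] <= 0.
    A_ub = np.hstack([-M.T, np.ones((m, 1))])
    b_ub = np.zeros(m)
    # Probabilities sum to one.
    A_eq = np.hstack([np.ones((1, n)), np.zeros((1, 1))])
    b_eq = np.array([1.0])
    bounds = [(0, 1)] * n + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[-1], res.x[:-1]

# Hypothetical payoff matrix: matching pennies.
value, strategy = solve_matrix_game(np.array([[1.0, -1.0], [-1.0, 1.0]]))
print("game value:          ", round(value, 6))
print("maximin row strategy:", np.round(strategy, 6))
```

For matching pennies the LP recovers value 0 and the uniform strategy; value iteration for concurrent stochastic games solves a game of this kind at each state of the model.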
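
In the classical trust game from the last theme, an investor sends a fraction of an endowment, the amount is tripled in transit, and the trustee chooses how much to return. The toy simulation below pits an investor whose trust adapts to past returns against a noisy bot trustee; both policies are invented for illustration here and are not the project’s cognitive game model:

```python
import random

def trust_game_round(endowment, send_frac, return_frac, multiplier=3):
    """One round of the classical trust game: the amount sent is
    multiplied in transit and the trustee returns a share of it."""
    sent = send_frac * endowment
    received = multiplier * sent
    returned = return_frac * received
    return endowment - sent + returned, received - returned

random.seed(0)
trust = 0.5                                   # investor's initial trust level

for round_no in range(1, 6):
    # Hypothetical bot trustee: a noisy return policy around 40%.
    bot_return = min(1.0, max(0.0, random.gauss(0.4, 0.1)))
    investor, trustee = trust_game_round(10.0, trust, bot_return)
    print(f"round {round_no}: sent {trust:.2f} of endowment, "
          f"payoffs investor={investor:.2f}, trustee={trustee:.2f}")
    # Investor updates trust: returns above 1/3 (the break-even share
    # under a multiplier of 3) raise it, lower returns reduce it.
    trust = min(1.0, max(0.0, trust + 0.5 * (bot_return - 1/3)))
```
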
The FUN2MODEL project has developed strong foundations on which to further advance the science towards provably beneficial collaborations between human and artificial agents. In future, in addition to progress on safety and robustness, emphasis will be given to causality and fairness, neuro-symbolic models, modelling with perception, verification and strategy synthesis for partial observability, integration of cognitive aspects within game-theoretic models, and modelling human-robot collaborations.
Adversarial examples for traffic signs taken from http://www.fun2model.org/papers/hkww17.pdf