
Devising certifiable and explainable algorithms for verification and planning in cyber-physical systems

Periodic Reporting for period 1 - ALGOCERT (Devising certifiable and explainable algorithms for verification and planning in cyber-physical systems)

Reporting period: 2019-07-15 to 2021-07-14

One of the main challenges in the development of complex computerized systems lies in verification – the process of ensuring the systems' correctness.
Model checking is an approach for system verification in which one uses mathematical reasoning to conduct an algorithmic analysis of the possible computations of the system, in order to formally prove that a system satisfies a given specification.
Traditionally, model checking proceeds as follows: the user inputs a system and a specification to a model checker and gets a yes/no answer as to whether the system satisfies the specification. Typically, when the answer is "no", a counterexample is also output, usually in the form of a computation of the system that violates the specification. This gives the user an informative output that can be used to fix the system or, possibly, the specification.
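To make this workflow concrete, the following Python sketch (our own illustrative toy, not a tool produced by this project) checks a simple safety specification on an explicitly given transition system and, when the answer is "no", reconstructs a violating computation as a counterexample.

```python
from collections import deque

def model_check_safety(initial, successors, is_safe):
    """Check the safety specification "is_safe holds in every reachable
    state". Returns (True, None) if the system satisfies it, or
    (False, counterexample), where the counterexample is a computation
    (list of states) from `initial` to a violating state."""
    parent = {initial: None}
    queue = deque([initial])
    while queue:
        state = queue.popleft()
        if not is_safe(state):
            # Reconstruct the violating computation for the user.
            path = []
            while state is not None:
                path.append(state)
                state = parent[state]
            return False, path[::-1]
        for succ in successors(state):
            if succ not in parent:
                parent[succ] = state
                queue.append(succ)
    return True, None

# Toy system: a counter modulo 8; the specification demands it stays below 4.
holds, cex = model_check_safety(
    initial=0,
    successors=lambda s: [(s + 1) % 8],
    is_safe=lambda s: s < 4,
)
print(holds, cex)  # prints: False [0, 1, 2, 3, 4]
```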
A drawback of model checking is that, in contrast with the counterexamples provided for "no" answers, a "yes" answer does not include any proof, explanation, or certificate of correctness. The advantage of having such certificates is twofold: first, they help convince the designer of the system's correctness, and second, they can be used to gain insight into the workings of complex systems.
In this project, we address the challenge of providing certificates for the correctness of systems. The first fundamental observation is that certificates, or “explanations”, are context specific, and cannot be devised for the general model checking problem. Thus, we focus on finding contexts where meaningful notions of explainability can be defined, and study those notions from an algorithmic perspective.
Specifically, we focus on three contexts:
1. Explainability of multi-agent pathfinding, where we provide a visual explanation as to why several robots do not collide while taking their respective paths (see the sketch after this list).
2. Invariant synthesis for dynamical systems, where we study a notion of proof for the non-termination of dynamical systems.
3. Explanations for the correctness of logical control structures, where we focus on certifying why finite-state systems exhibit some correctness properties.
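To illustrate context 1, here is a minimal Python sketch of one natural explanation scheme: split the time horizon into segments so that, within each segment, the agents traverse pairwise-disjoint sets of vertices; each segment can then be drawn as a single picture of non-crossing path fragments that a human can verify at a glance. The greedy segmentation below is our own simplified sketch (assuming the input plan is already collision-free), not the project's implementation.

```python
def segment_explanation(paths):
    """Greedily split time 0..T-1 into maximal segments in which the
    agents visit pairwise-disjoint sets of vertices. `paths` gives each
    agent a list of vertices, one per time step (equal lengths, and
    assumed collision-free: no two agents share a vertex at the same time).
    Fewer segments means a shorter, easier-to-check visual explanation."""
    horizon = len(paths[0])
    segments, start = [], 0
    used = [set() for _ in paths]  # vertices visited in the current segment
    for t in range(horizon):
        step = [p[t] for p in paths]
        crossing = any(
            step[i] in used[j]
            for i in range(len(paths))
            for j in range(len(paths))
            if i != j
        )
        if crossing:  # close the current segment and start a fresh one
            segments.append((start, t - 1))
            start = t
            used = [set() for _ in paths]
        for i, v in enumerate(step):
            used[i].add(v)
    segments.append((start, horizon - 1))
    return segments

# Two agents swap sides; their paths cross, but never at the same time.
print(segment_explanation([list("ABCD"), list("CDAB")]))  # [(0, 1), (2, 3)]
```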
During this project, we developed explainability notions for the three contexts mentioned above, characterized the complexity of the algorithmic problems pertaining to them, and provided practical implementations where applicable.
The types of explanations vary widely depending on the context, suggesting that, as expected, uniform notions of explainability either do not exist or are less intuitive. Indeed, roughly speaking, a notion of explainability is defined by what is being explained (e.g. non-colliding paths for robots, non-termination of a dynamical system, or correctness of some control structure) and by to whom it is being explained (a human in the multi-agent pathfinding setting, an arbitrarily powerful computer verifier for dynamical systems, and an efficient verifier for control structures).
The work in this project comprised mostly theoretical research, accompanied by implementations where relevant.
Initially, the research consisted of finding appropriate contexts for explainability. A literature review, accompanied by liaising with industry practitioners, resulted in three identified contexts: invariant synthesis for dynamical systems, explainability of multi-agent pathfinding, and a study of symmetry in control structures.
The main results are as follows: (1) We developed a notion of invariants for continuous linear dynamical systems and established algorithmic bounds for the problem of deciding their existence. (2) We developed a framework of explainable multi-agent pathfinding (MAPF), along with complexity bounds for the general problem of finding such explanations. We then implemented algorithms for explainable MAPF, both by directly implementing our ideas and by incorporating them into existing MAPF algorithms. The evaluation of these techniques is ongoing. (3) We proposed a generic meta-notion of symmetry for control structures and studied concretizations of it in the context of probabilistic transducers, deterministic transducers, and games.
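For result (1), the following LaTeX snippet states the flavor of definition we have in mind; it is a hedged paraphrase rather than the exact definition from our publications. An invariant is a set that contains the initial point, is closed under the dynamics, and avoids the halting set, and therefore constitutes a checkable proof of non-termination.

```latex
% Continuous linear dynamical system: \dot{x}(t) = A x(t), x(0) = x_0,
% with a halting set S \subseteq \mathbb{R}^d.
A set $I \subseteq \mathbb{R}^d$ is an \emph{invariant} if
\[
  x_0 \in I, \qquad
  e^{At} I \subseteq I \ \text{for all } t \ge 0, \qquad
  I \cap S = \emptyset .
\]
% Since the orbit x(t) = e^{At} x_0 then remains in I forever,
% it never reaches S, so I certifies non-termination.
```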
The results appeared in the proceedings of several conferences, as well as in workshops.
All our results advanced the state of the art in their respective domains, if incrementally. Specifically, a major contribution of this work is the definition of notions of explainability, and the demonstration of how much these notions vary between specific contexts.
The potential impact of this work is on two fronts: first, explainable MAPF, if properly adopted in industry, could make many operations considerably faster and safer, for example in air traffic control and warehouse robotics, both of which play an increasingly prominent role in modern life.
Second, studying and exploiting symmetry in control structures may enable more scalable verification procedures; since verification is a crucial phase of software and hardware development, this would in turn improve security and safety.
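To give a sense of the kind of symmetry involved (again a hedged formulation of ours, not a quotation from the papers): a transducer T, mapping input sequences to output sequences, is symmetric with respect to a group G of permutations of the process identities if permuting the inputs permutes the outputs accordingly.

```latex
\[
  T(\pi \cdot w) = \pi \cdot T(w)
  \qquad \text{for every input sequence } w \text{ and every } \pi \in G,
\]
% where \pi acts letter-by-letter by renaming process identities.
% A verifier may then check a single representative per symmetry class
% instead of all permutations, which is where scalability gains can arise.
```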
Figure: An invariant for a linear dynamical system