One of the main challenges in the development of complex computerized systems lies in verification -- the process of ensuring the systems' correctness.
Model checking is an approach for system verification in which one uses mathematical reasoning to conduct an algorithmic analysis of the possible computations of the system, in order to formally prove that a system satisfies a given specification.
Traditionally, model checking is done as follows. The user inputs a system and a specification to a model checker, and gets a yes/no answer as to whether the system satisfies the specification. Typically, when the answer is ``no'', a counterexample is also output, usually in the form of a computation of the system that violates the specification. This gives the user an informative output that can be used to fix the system, or, possibly, the specification.
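To make this workflow concrete, the following is a minimal sketch of explicit-state model checking for a safety property (``no bad state is reachable''), which returns a violating computation as a counterexample on a ``no'' answer. The transition system, the predicate, and all names below are illustrative assumptions, not part of the proposal.

```python
from collections import deque

def model_check(initial, transitions, is_bad):
    """Breadth-first search over the reachable states.

    Returns (True, None) if no bad state is reachable, and otherwise
    (False, trace), where trace is a counterexample: a computation
    from the initial state to a state violating the specification.
    """
    parent = {initial: None}  # predecessor map, for trace reconstruction
    queue = deque([initial])
    while queue:
        state = queue.popleft()
        if is_bad(state):
            # Reconstruct the violating computation for the user.
            trace = []
            while state is not None:
                trace.append(state)
                state = parent[state]
            return False, list(reversed(trace))
        for succ in transitions.get(state, []):
            if succ not in parent:
                parent[succ] = state
                queue.append(succ)
    return True, None

# Toy system: a counter that can step past its specified bound of 2.
system = {0: [1], 1: [2], 2: [3], 3: []}
ok, trace = model_check(0, system, lambda s: s > 2)
# ok is False; trace is the computation 0 -> 1 -> 2 -> 3.
```

The counterexample trace is exactly the kind of informative ``no'' output discussed above; the asymmetry is that a ``yes'' answer from this procedure carries no comparable artifact.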
A drawback of model checking is that, in contrast with providing counterexamples for ``no'' answers, a ``yes'' answer does not include any proof, explanation, or certificate of correctness. The advantage of having such certificates is twofold: first, they help convince the designer of the system's correctness, and second, they can be used to gain insight into the workings of complex systems.
A similar drawback occurs in the application of model checking to robotic planning. There, a plan issued by the model checker may seem complicated or counterintuitive to a human user. Thus, one would want some explanation of the plan that would convince the user of its correctness and, possibly, its optimality.
The aim of this proposal is to address the challenge of providing certificates for the correctness of systems, and, analogously, providing explanations for plans. This involves several challenges: finding contexts in which explanations and certificates have reasonable definitions, and then devising a suitable theoretical algorithmic framework and a practical, scalable implementation.