## Final Report Summary - SPHINX (A Co-Evolution Framework for Model Refactoring and Proof Adaptation in Cyber-Physical Systems)

Computers that control physical processes, thus forming so-called cyber-physical systems (CPS), are today pervasively embedded in our lives. From an engineering viewpoint, a CPS can be described in a hybrid manner: in terms of discrete control decisions (the cyber part, e.g. setting the acceleration of a car to keep a safe distance) and in terms of differential equations modeling the resulting continuous dynamics of the physical world under control (the physical part, e.g. motion). The key challenge is to ensure correctness of a CPS, i.e. to avoid control decisions that violate safety requirements (e.g. a car with adaptive cruise control must never collide with a car driving ahead). This challenge is intensified by the fact that establishing correctness is not a one-shot effort, since incremental development of CPS is common practice. Evolving from proof to proof is highly challenging, even if we 'only' want to reuse already proven parts. The Sphinx project set out to address this challenge by providing a co-evolution framework and software tool for verification-driven engineering, supporting model refactoring and corresponding proof adaptation for CPS. Sphinx supports modeling of CPS and evolving these models with recurring refactoring operations in a way that is amenable to proof adaptation. It provides a catalog of refactoring operations tailored to CPS models and identifies the impact of these refactoring operations on the proofs that accompany the models. Sphinx then adapts the proofs accordingly, so that a proof of an original model becomes a proof of the refactored model.

Modeling: The project introduced the verification-driven engineering software toolset Sphinx [11], which combines hybrid theorem proving and arithmetic verification with tools for (i) graphical (UML) and textual modeling of hybrid systems, (ii) exchanging and comparing models and proofs, and (iii) managing verification tasks. Sphinx is backed by a survey on the state-of-the-art in modeling CPS, capturing correctness specifications, and verification [13], and a survey on logics for dynamic spatial systems [2]. The modeling features of Sphinx were tested with case studies on robot collision avoidance safety and liveness [3,14]. Additional case studies [4,5] were conducted to gain further insight into modeling and the nature of extending and adapting proofs. Concerning verification, the project contributed to the hybrid systems theorem prover KeYmaera X, focusing on support for scriptable proving with tactics to ultimately support proof-aware refactoring operations.
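To illustrate the kind of models involved, a hybrid system is typically stated in differential dynamic logic as a safety conjecture about a loop of discrete control followed by continuous motion. The following sketch of a braking car is illustrative only (the variable names and the concrete stopping-distance condition are assumptions, not taken from the project's case studies [3,14]):

```latex
% x position, v velocity, a acceleration, m obstacle position, B > 0 maximal braking force.
% If the car starts with enough stopping distance, then braking in every loop iteration
% keeps it short of the obstacle:
v \ge 0 \;\land\; x + \frac{v^2}{2B} < m
\;\rightarrow\;
\Big[\big(\; a := -B \;;\; \{x' = v,\ v' = a \;\&\; v \ge 0\} \;\big)^{*}\Big]\; x < m
```

The box modality `[...]` asserts that the safety condition holds after every run of the hybrid program, i.e. after any number of control-plant loop iterations.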

Refactoring Operations: Sphinx developed proof-aware refactoring operations for CPS [12]. These refactoring operations perform model transformations on CPS: for example, an “introduce control path” refactoring can be used to add moderate braking as an alternative control decision to an existing automated emergency braking system in a car. The refactoring operations fall into two main categories: structural refactoring operations improve the design quality of a model (e.g. reduce duplication) while preserving its behavior; behavioral refactoring operations improve the features of a model by changing its behavior (e.g. add moderate braking). Such refactoring operations are also useful for component-based modeling and theorem proving, since decomposition in the large is a major prerequisite for making hybrid systems theorem proving applicable in industrial practice. We introduced refactoring operations to re-order statements, introduce conditionals, introduce or remove discrete/continuous behavior, and strengthen conditionals [6-8].
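In hybrid-program notation, the moderate-braking example above can be sketched as follows. This is a schematic rendering, not the project's concrete operation: the guard `safe` and the braking forces `b`, `B` are hypothetical names introduced for illustration.

```latex
% Before: the controller only performs emergency braking with maximal force B.
\mathit{ctrl} \;\equiv\; a := -B
% After "introduce control path": moderate braking with force b (0 < b < B) is added
% as a nondeterministic alternative, guarded by a test ?safe. The condition "safe"
% gives rise to a proof obligation when the refactoring is applied to a model.
\mathit{ctrl}' \;\equiv\; \big(?\,\mathit{safe};\ a := -b\big) \;\cup\; a := -B
```

Because the new control path changes the model's behavior, this is a behavioral refactoring: the original safety proof does not carry over unchanged, but only the obligation for the new branch needs to be discharged.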

Proof Adaptation: The project studied how refactoring operations correspond to relations on correctness proofs [12]. The main result in proof adaptation is that the impact of refactoring operations on correctness can be characterized by notions of refinement in differential dynamic logic. These refinement notions are used to prove the correctness of refactoring operations at a metalevel (i.e. once for all possible models). Whenever such a proof requires additional assumptions, which is typically the case for behavioral refactoring operations, the correctness proofs construct proof obligations that must be discharged when applying the refactoring operation to a specific model. The effort of discharging these proof obligations is usually smaller than a complete re-verification from scratch, provided appropriate proof support is available: to this end, a lemma mechanism and tactic framework [9] were developed for the theorem prover KeYmaera X [10]. Lemmas allow users to reuse already proved propositions as facts in other proofs, while tactics encode how such proofs are obtained in the first place. Both represent a form of reuse: lemmas are convenient for structuring proofs and reduce effort when the exact same question arises multiple times (no re-computation is needed), whereas tactics encode proof steps and proof search procedures and thus remain applicable even when the models change slightly. Lemmas work well for structural refactorings, where a small number of simple transformations result in the same question as proved initially. Tactics work well for behavioral refactorings, where the added behavior resembles previously proved behavior and the proof needs only slight adaptation.
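The underlying refinement principle can be sketched as follows (a schematic rendering, assuming a refinement relation `≤` on hybrid programs as in refinement notions for differential dynamic logic; the symbols are illustrative):

```latex
% If every behavior of the refactored program alpha' is also a behavior of the
% original program alpha (alpha' refines alpha), then any proved safety property
% [alpha]phi transfers to the refactored model without reproving it:
\alpha' \le \alpha \;\land\; [\alpha]\varphi \;\rightarrow\; [\alpha']\varphi
```

Structural refactorings establish such a refinement unconditionally, so the original proof transfers directly; behavioral refactorings establish it only under side conditions, which surface as the proof obligations described above.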

The project findings on robot collision avoidance and road traffic control contribute to public welfare insofar as identifying safe and unsafe control choices will become crucial in environments with increasingly autonomous systems (e.g. self-driving cars). The case studies focus on technologies and systems where increasingly autonomous operation imposes the highest safety demands on the developed systems. The findings on formulating correctness criteria indicate that we, as a society, need to establish a better understanding of how such autonomous systems interact with their environment and of what we consider safe and correct behavior. The developed methods and tools bridge the informal modeling notations used in industry with formal verification, while at the same time supporting widely adopted incremental engineering practices. Moreover, the results on proof adaptation and component-based hybrid systems theorem proving indicate that incremental engineering practices can indeed be transferred to formal verification techniques. The final project results -- systematic model refactoring and proof adaptation techniques for hybrid systems, together with tutorials and modeling case studies -- could thus further promote formal verification techniques in safety-critical industries.

Contact:

http://www.cs.cmu.edu/~smitsch/projects.html#sphinx

Dr. Werner Retschitzegger

Johannes Kepler University

Department of Cooperative Information Systems

Altenberger Str. 69

4040 Linz

Austria

Dr. Stefan Mitsch

Carnegie Mellon University

Computer Science Department

5000 Forbes Ave, GHC 7127

Pittsburgh, PA 15213

USA
