CORDIS - EU research results

Just-in-time Self-Verification of Autonomous Systems

Periodic Reporting for period 3 - justITSELF (Just-in-time Self-Verification of Autonomous Systems)

Reporting period: 2022-07-01 to 2023-12-31

Engineers and computer scientists are currently developing autonomous systems whose entire set of behaviors in future, untested situations is unknown: how can a designer foresee all situations that an autonomous road vehicle, a robot in a human environment, an agricultural robot, or an unmanned aerial vehicle will face? Since all of these examples are safety-critical, it would be irresponsible to deploy such systems without testing all possible situations -- yet this seems impossible, because even the most important possible situations are unmanageably many.

We are developing a paradigm shift that makes it possible to guarantee safety in unforeseeable situations: instead of verifying the correctness of a system before deployment, we develop just-in-time verification, a new verification paradigm in which a system continuously checks the correctness of its next action by itself in its current environment (and only in it). Since future autonomous systems will tightly interconnect discrete computing and continuous physical elements, making them cyber-physical systems, we develop just-in-time verification for this system class. To prove correct behavior of cyber-physical systems, we develop new formal verification techniques that efficiently compute possible future behaviors -- subject to uncertain initial states, inputs, and parameters -- within a small time horizon.

Just-in-time verification will substantially cut development costs, increase the autonomy of systems (e.g. the range of deployment of automated driving systems), and reduce or even eliminate certain liability claims. We have implemented our results in an open-source software framework and primarily demonstrated them for automated driving. Successfully developing just-in-time verification techniques is even more challenging than offline verification of autonomous systems, but it brings even greater rewards.
In order to realize our vision of just-in-time verification of autonomous systems, we have rethought standard offline verification techniques in many ways: (1) we have developed anytime verification to adjust to changing environments and timing constraints, (2) we have considered other surrounding intelligent agents that appear and disappear on the fly, and (3) we have combined planning and control with verification techniques to repair failed verification attempts online.

Since formal methods consider all eventualities, plans are potentially refused due to a small possibility of failure. However, many long-term plans that are initially unsafe for the entire considered time horizon might become safe after a short time as the current situation develops. We therefore developed a method that performs trajectory planning for two time horizons in parallel: long-term trajectories are generated using non-formal techniques, such as established planning methods and/or machine learning, and each executed segment is accompanied by a fail-safe trajectory that brings the system into a safe state. Since the uncertainty of the possible behaviors of surrounding intelligent agents grows over time, we apply our verification concept only to the first part of the long-term reference trajectory. The time horizon of this combined trajectory (first part of the intended trajectory plus the fail-safe trajectory) is short, so that our set-based techniques do not block overly large regions for trajectory planning. If the maneuver is verified as safe, the next part of the long-term plan is executed; otherwise, the fail-safe trajectory is initiated.
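
The execute-or-fall-back logic of this two-horizon scheme can be sketched as follows; this is a minimal illustration with hypothetical function names, not the project's actual implementation:

```python
def plan_and_verify(long_plan, verify, plan_fail_safe):
    """Return the trajectory pieces actually executed.

    long_plan      -- intended trajectory pieces from a non-formal planner
    verify(p, fs)  -- formal short-horizon check of piece p plus fail-safe fs
    plan_fail_safe -- returns a fail-safe trajectory ending in a safe state
    """
    executed = []
    for piece in long_plan:
        fail_safe = plan_fail_safe(piece)
        if verify(piece, fail_safe):
            executed.append(piece)       # maneuver proven safe: continue plan
        else:
            executed.append(fail_safe)   # otherwise initiate the fail-safe
            break                        # system ends in a safe state
    return executed

# Toy usage: the third piece fails verification, so its fail-safe runs instead.
result = plan_and_verify(["p1", "p2", "p3"],
                         verify=lambda p, fs: p != "p3",
                         plan_fail_safe=lambda p: "stop-before-" + p)
```

In a real system the loop would also replan from the safe state; the sketch only records which pieces run.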

Our new approach quickly obtains verification results for changing environments by reusing reachable sets of the previous snapshot of surrounding agents and the host system. This can be done since the reachable sets of the previous sensor update have been computed in an over-approximative way, such that they continue to contain all possible behaviors up to the time horizon of the previous verification.
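
A minimal sketch of this reuse, assuming discrete time steps and interval reachable sets; the names (`cached`, `compute_step`, etc.) are illustrative stand-ins, not the framework's API:

```python
def update_reachable_sets(cached, t_now, horizon, init_set, compute_step):
    """Reuse sound over-approximations from the previous cycle.

    cached       -- {time step: reachable set} from the previous sensor update
    init_set     -- measured set of current states at t_now
    compute_step -- propagates a reachable set by one time step
    """
    # Entries computed over-approximatively last cycle remain sound,
    # so they can be kept up to the previous verification horizon.
    reach = {t: s for t, s in cached.items() if t >= t_now}
    reach.setdefault(t_now, init_set)
    for t in range(t_now + 1, t_now + horizon + 1):
        if t not in reach:                       # compute only the missing tail
            reach[t] = compute_step(reach[t - 1])
    return reach
```

Only the newly uncovered tail of the horizon is computed; everything already cached is carried over unchanged.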

The reused reachable sets of other agents and the host system are refined in an anytime fashion to tighten the over-approximation as long as time remains. We developed methods to compute several abstractions of detected agents on the fly with increasing complexity and individual properties. Each time the reachable set of an abstraction has been obtained on time, the result is aggregated, reducing the over-approximation as long as time permits.
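
As an illustration, with 1-D intervals standing in for reachable sets, the anytime aggregation can be sketched like this; the abstraction models and timing budget are hypothetical placeholders:

```python
def refine_anytime(abstractions, time_left):
    """Intersect over-approximative interval reachable sets, cheapest first.

    abstractions -- (compute_set, cost) pairs ordered by increasing complexity;
                    each compute_set() returns a sound interval (lo, hi)
    time_left    -- remaining computation budget in the same units as cost
    """
    lo, hi = float("-inf"), float("inf")        # trivially sound initial set
    for compute_set, cost in abstractions:
        if cost > time_left:
            break                               # deadline reached: best so far
        time_left -= cost
        a_lo, a_hi = compute_set()              # over-approximation of this model
        lo, hi = max(lo, a_lo), min(hi, a_hi)   # aggregate by intersection
    return lo, hi
```

Because every abstraction yields an over-approximation, intersecting the results obtained so far is always sound, and each additional abstraction that finishes on time can only tighten the set.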

Besides refining previously computed reachable sets, our concept also considers an on-the-fly integration of newly detected agents. Integrating new agents into a single model of all agents is computationally infeasible and also impractical since the interaction between agents, which is required for a common model, is typically unknown (unless they communicate their plans). For this reason, we consider a set of possible interaction mechanisms that are only constrained by impossible joint behaviors, e.g. behaviors resulting in occupying the same space are removed.
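
The pruning of impossible joint behaviors can be illustrated with axis-aligned occupancy boxes; this simplified sketch is not the project's code, and the box representation is an assumption for illustration:

```python
from itertools import product

def boxes_overlap(a, b):
    """Boxes given as ((xmin, xmax), (ymin, ymax))."""
    return all(lo1 < hi2 and lo2 < hi1
               for (lo1, hi1), (lo2, hi2) in zip(a, b))

def feasible_joint_behaviors(occ_sets):
    """occ_sets: per-agent lists of possible occupancies.

    Return joint combinations in which no two agents occupy the same space;
    combinations with overlapping occupancies are impossible and removed.
    """
    joint = []
    for combo in product(*occ_sets):
        if not any(boxes_overlap(combo[i], combo[j])
                   for i in range(len(combo))
                   for j in range(i + 1, len(combo))):
            joint.append(combo)
    return joint
```

No joint dynamic model of all agents is built; the constraint is purely geometric, which keeps the integration of newly detected agents cheap.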

During the refinement of reachable sets, it is also checked which planned trajectories violate the formal specifications. To improve robustness of the approach by repairing almost safe plans, we interleave trajectory planning and verification techniques.

One aspect that is often overlooked in formal verification of autonomous systems is whether all possible behaviors of the real system can be generated by uncertain models. In contrast to standard techniques for system identification, we determine a range of system parameters rather than a single optimal value. We developed new techniques using set-based observers and optimization techniques that determine those required sets of possible system parameters.
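
For intuition, here is a toy version of set-based parameter identification for a scalar model x[k+1] = a*x[k] + w[k] with bounded noise |w[k]| <= w_max; it assumes positive states and is purely illustrative, not the developed observer:

```python
def identify_parameter_set(xs, w_max):
    """Interval of all parameters a consistent with every measurement.

    xs    -- sequence of measured states x[0], x[1], ... (assumed positive)
    w_max -- bound on the unknown disturbance per step
    """
    a_lo, a_hi = float("-inf"), float("inf")
    for xk, xk1 in zip(xs, xs[1:]):
        # Each measurement pair constrains a to [(xk1 - w_max)/xk, (xk1 + w_max)/xk];
        # intersecting over all pairs yields the required parameter set.
        a_lo = max(a_lo, (xk1 - w_max) / xk)
        a_hi = min(a_hi, (xk1 + w_max) / xk)
    return a_lo, a_hi
```

In contrast to a single optimal estimate, the returned interval contains every parameter value that could have generated the data, which is what a sound reachability analysis needs.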
Although a number of offline formal verification techniques had previously been developed for hybrid systems, no existing approach could formally verify autonomous systems with nonlinear and/or hybrid dynamics in a just-in-time manner. As a prerequisite for just-in-time verification, first prototypes of online verification had only been developed for discrete stochastic systems. While previous online verification analyzes systems during operation, it does not use techniques that ensure results are obtained quickly and only refined later if time permits, as proposed for just-in-time verification. However, application to real autonomous systems operating in a (partially) unknown environment requires results on time -- even though they might be conservative and lead to a conservative action.

Our novel just-in-time verification approach goes beyond the state of the art by reducing the complexity of classical formal verification problems in three ways: (1) since verification is performed only with respect to the current situation, initial states are uncertain only within sensor measurement uncertainties; (2) only a few promising future plans have to be checked; (3) the time horizon of the verification is bounded instead of having to compute until a fixed point is reached. This allowed us to combine well-developed non-formal synthesis approaches with our polynomial-time verification methods to check whether the most promising heuristic design is formally correct.

We will further demonstrate these benefits on a real autonomous vehicle and a real robot in the second half of the project. In addition, we will further generalize the developed methods so that they can be easily applied to almost all autonomous systems.
Test drive for our on-the-fly verification concept
Just-in-time verification of human-robot coexistence