## Results

The basic cycle of TA verification tools consists of taking a zone (a conjunction of difference constraints on clocks), applying operations to it to produce its successors, and iterating the same process on those successors.
The number of these zones and the size of their representations are a major bottleneck for TA verification. Zones are typically represented as difference bounds matrices (DBMs) of size quadratic in the number of clocks, and it is well known that their dimensionality can be reduced in each state to the number of clocks active in that state.
More recent work in AMETIST shows that a finer analysis of the structure of the TA may yield, for some states, DBM representations that are as compact as linear in the number of clocks.
Among the other important contributions to improving the performance of TA tools we mention ideas inspired by partial-order methods, symmetry reduction, and more clever memory management during exploration.
By implementing these new ideas, the project succeeded in improving the performance of UPPAAL by several orders of magnitude.
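To illustrate the quadratic-size representation mentioned above, a zone over n clocks can be stored as an (n+1)×(n+1) matrix of upper bounds on clock differences and brought to canonical form by an all-pairs shortest-path pass. Below is a minimal sketch in Python; it ignores the strict/non-strict distinction on bounds, which real DBM implementations track separately:

```python
INF = float("inf")

def canonicalize(dbm):
    """Tighten a DBM in place with a Floyd-Warshall pass.

    dbm[i][j] is an upper bound on x_i - x_j, where index 0 stands
    for the constant 0 (so dbm[i][0] bounds x_i from above and
    dbm[0][i] bounds x_i from below).
    """
    n = len(dbm)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dbm[i][k] + dbm[k][j] < dbm[i][j]:
                    dbm[i][j] = dbm[i][k] + dbm[k][j]
    return dbm

def is_empty(dbm):
    # After canonicalization, a negative diagonal entry signals an
    # inconsistent (empty) zone.
    return any(dbm[i][i] < 0 for i in range(len(dbm)))

# Zone over clocks x1, x2:  x1 <= 5  and  x2 - x1 <= 2.
z = canonicalize([[0,   0, 0],
                  [5,   0, INF],
                  [INF, 2, 0]])
# The pass derives the implied bound x2 <= 7 (entry z[2][0]).
```

The cubic canonicalization pass is exactly where the quadratic matrix size hurts, which is why reducing the number of represented clocks per state pays off so directly.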

The zone-based technology has its roots in verification, where temporal uncertainty is viewed as coming from the external environment and the system should be correct with respect to all environment choices.
Around the beginning of the AMETIST project it was observed that when the uncertainty is associated with scheduler decisions, for example in problems of scheduling under certainty, there is sometimes a unique successor among the uncountably many that gives the optimum (non-lazy schedules).
Consequently the problem can be solved without using zones at all, using vectors of clock values instead. In this way certain problems can be formulated as shortest-path problems in discrete weighted graphs and solved much more efficiently. They can also benefit from existing search algorithms on game graphs in order to find sub-optimal schedules for scheduling with discrete uncertainties.
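As a sketch of this shortest-path view, a deterministic job-shop instance can be explored as a discrete weighted graph whose successors are the non-lazy choices (start an enabled operation as soon as its machine and its job are both free); uniform-cost search then finds the optimal makespan. The names below are illustrative and not taken from any AMETIST tool:

```python
import heapq
from itertools import count

def jobshop_makespan(jobs, n_machines):
    """Optimal makespan of a job-shop instance, found by uniform-cost
    search over the discrete graph of non-lazy schedules.

    jobs: list of jobs, each a list of (machine, duration) pairs
    to be executed in order.
    """
    tie = count()  # tie-breaker so the heap never compares states
    start = (tuple(0 for _ in jobs),               # next op index per job
             tuple(0 for _ in jobs),               # ready time per job
             tuple(0 for _ in range(n_machines)))  # ready time per machine
    best = {}
    heap = [(0, next(tie), start)]
    while heap:
        makespan, _, state = heapq.heappop(heap)
        prog, jready, mready = state
        if all(p == len(jobs[j]) for j, p in enumerate(prog)):
            return makespan                        # all operations scheduled
        if best.get(state, float("inf")) < makespan:
            continue                               # stale heap entry
        for j, p in enumerate(prog):
            if p == len(jobs[j]):
                continue
            m, d = jobs[j][p]
            t = max(jready[j], mready[m])          # non-lazy start time
            succ = (prog[:j] + (p + 1,) + prog[j + 1:],
                    jready[:j] + (t + d,) + jready[j + 1:],
                    mready[:m] + (t + d,) + mready[m + 1:])
            cost = max(makespan, t + d)
            if cost < best.get(succ, float("inf")):
                best[succ] = cost
                heapq.heappush(heap, (cost, next(tie), succ))

# Two jobs on two machines; machine 0 carries 3 + 4 = 7 time units of
# work, so 7 is a lower bound, and the search attains it.
opt = jobshop_makespan([[(0, 3), (1, 2)], [(1, 2), (0, 4)]], n_machines=2)
```

The point of the restriction to non-lazy starts is that each state has finitely many successors, so standard graph search applies where zone-based exploration would otherwise be needed.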
Even in the case of dense uncertainty coming from the environment/adversary side, there may be clever ways to avoid zones. When the adversary has a choice in some interval I = [a, b], we may relax the problem by assuming that it chooses only from a finite subset I0 of the interval.
Solving the problem of synthesizing an optimal scheduling strategy with respect to I0 may have the following consequences:
- The actual value of the chosen strategy with respect to I may be worse than the value computed based on I0. This is more problematic for qualitative criteria, where the system may be correct with respect to I0 but not with respect to I. For quantitative criteria it is less critical, because we already accept sub-optimal solutions when the problem is large.
- During execution the adversary can make a choice in I \ I0, and the system will find itself in a state for which the optimal action has not been computed. However, in many problems default rules can be used to determine the action at such a state based on the optimal actions computed in its neighbourhood.
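The default rule in the second item can be as simple as a nearest-neighbour lookup. The sketch below, with hypothetical names, extends a strategy computed only on the finite subset I0 to a total strategy over I:

```python
def make_total_strategy(points, action_for):
    """Extend a strategy computed for a finite subset I0 of the
    adversary's interval I to all of I, falling back to the nearest
    computed point for choices in I \\ I0.

    points: the finite subset I0.
    action_for: the (precomputed) optimal action at each point of I0.
    """
    table = {p: action_for(p) for p in points}
    def strategy(x):
        nearest = min(table, key=lambda p: abs(p - x))
        return table[nearest]
    return strategy

# Adversary delay in I = [1, 3], solved only for I0 = {1, 2, 3};
# the (made-up) optimal action is "wait" below 2 and "go" otherwise.
strategy = make_total_strategy([1, 2, 3],
                               lambda p: "wait" if p < 2 else "go")
```

For a choice such as 2.4, which lies in I but not in I0, the strategy returns the action computed for the nearest grid point 2; how much value such a rule loses depends on how finely I0 samples I.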

Profiting from the new tools, the AMETIST consortium has tackled more than 20 industrial-sized case studies involving the application of timed automata technology, provided by its industrial partners (Bosch, Cybernetix, Axxom, Terma) or obtained from other sources:
- Cybernetix case study (Task 3.1)
- Terma case study (Task 3.2)
- Bosch case study (Task 3.3)
- Verification and improvement of the sliding window protocol
- Modeling and verifying a Lego car using hybrid I/O automata
- Correctness of an intrusion-tolerant group communication protocol
- Testing conformance of real-time applications by automatic generation of observers
- A model of the Welch/Lynch Clock Synchronization Protocol
- Modelling and Analysis of a Leader Election Algorithm for Mobile Ad Hoc Networks
- Synthesis of Safe, QoS Extendible, Application Specific Schedulers for Heterogeneous Real-Time Systems
- Description and Schedulability Analysis of the Software Architecture of an Automated Vehicle Control System
- Verification of Asynchronous Circuits using Timed Automata
- Analysis of a Protocol for Dynamic Configuration of IPv4 Link Local Addresses using Uppaal
- Model checking dependability attributes of wireless group communication
- Cost-Optimisation of the IPv4 Zeroconf Protocol
- Model Checker Aided Design of a Controller for a Wafer Scanner
- Analysis of a Biphase Mark Protocol with Uppaal and PVS
- Automatic verification of the IEEE 1394 root contention protocol with KRONOS and PRISM
- Model Checking the Time to Reach Agreement
- Timed Automata Based Analysis of Embedded System Architectures
- From StoCharts to MoDeST: a comparative reliability analysis of train radio communications
The general conclusion is that using our new methods we can handle bigger problems faster and in a much more routine manner than at the start of the project. Applications of timed automata technology to analysis of scheduling problems, embedded system architectures and communication protocols have been very successful.
Our case studies provide insight in the current state-of-the-art of timed automata technology, provide modelling patterns that can be used by future users, and benchmarks for verification and analysis tools.
NB: The results of the AXXOM case studies are listed as a separate result.

In ordinary timed automata (TA), essentially only time can be optimized, since elapsed time can be measured by an additional clock. There are many situations in which the relevant cost functions are richer, involving cost variables that grow at different rates in different states or are associated with certain transitions.
Some motivating examples are memory and power consumption in computers, setup and machine-occupation costs in manufacturing, as well as non-uniform penalties for missing deadlines in systems with soft constraints. Priced timed automata constitute a natural extension of timed automata with such cost variables.
Although the dynamics of these variables render the automata hybrid rather than timed, the variables do not really participate in the rest of the system dynamics (they do not appear in transition guards), and many problems remain decidable for this model.
Several significant new decidability results and algorithms for priced timed automata were obtained in the second and third years of the project and have been integrated into the UPPAAL CORA tool.
We provided templates for modelling classical scheduling problems extended with costs, such as job-shop scheduling, task-graph scheduling, aircraft landing, and vehicle routing with time windows. We also demonstrated the new modelling framework on industrial scheduling problems, namely steel and lacquer production.
The development of the priced timed automata framework is one of the major scientific results of the AMETIST project.
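As a toy illustration of cost-optimal reachability in a priced model, the sketch below discretizes time onto a coarse grid and runs uniform-cost search, paying the location's cost rate while delaying and the edge's price on discrete transitions. This is only a rough approximation of what UPPAAL CORA computes symbolically on priced zones; all identifiers are illustrative:

```python
import heapq

def min_cost_reach(rates, edges, src, goal, horizon, step=1):
    """Minimum accumulated cost to reach `goal` from `src` on a coarse
    time grid.

    rates[l]: cost rate while delaying in location l.
    edges[l]: list of (successor, price, earliest_enabled_time).
    """
    heap = [(0, src, 0)]            # (cost so far, location, time)
    seen = {}
    while heap:
        cost, loc, t = heapq.heappop(heap)
        if loc == goal:
            return cost
        if seen.get((loc, t), float("inf")) <= cost:
            continue
        seen[(loc, t)] = cost
        if t + step <= horizon:     # delay: pay rate * step
            heapq.heappush(heap,
                           (cost + rates.get(loc, 0) * step, loc, t + step))
        for nxt, price, guard in edges.get(loc, ()):
            if t >= guard:          # discrete move: pay the edge price
                heapq.heappush(heap, (cost + price, nxt, t))
    return None

# Waiting is cheaper in B (rate 1) than in A (rate 2), and the direct
# edge to the goal G costs 10, so the optimal policy moves to B, waits
# until its edge is enabled at time 3, and pays 3 in total.
rates = {"A": 2, "B": 1}
edges = {"A": [("B", 0, 0), ("G", 10, 0)], "B": [("G", 0, 3)]}
best = min_cost_reach(rates, edges, "A", "G", horizon=5)
```

The example shows the essential trade-off priced TA capture: where to spend waiting time, and whether a costly shortcut transition is worth its price.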

Probabilities can be added to timed automata in two ways.
The first is to add probabilities to discrete transitions, an extension that transforms reachable sets from unions of zones into probability distributions over zones.
The second and more challenging extension is to replace delay bounds by probability distributions over delays, similar to the way this is done in continuous-time Markov chains.
Numerous topics related to both extensions were investigated within the project, leading to results that clarify some of the relations between timing and probability, in particular for continuous-time Markov chains and branching time. A stochastic variant of the Axxom case study has also been treated.

Real life problems, like the scheduling and resource allocation problems faced by the customers of our industrial partner Axxom, do not fall exactly within a stylized class of problems like the job shop.
Such problems have additional constraints concerning the relative distance between tasks, the occupation of certain resources during the execution of several steps (for example by mixing tasks), different penalties for missing deadlines, and so on. Many of these details are hidden inside the various Excel tables used by Axxom's Orion PI tool.
Much effort was put during the AMETIST lifetime into trying to understand these features, giving them a rigorous semantics, and devising a language to express them in a way that translates smoothly into timed automata. This work is described in detail in Deliverable 3.4.4; here we mention only the work done using timed automata, stochastic models, and optimization-based methods.
Although intellectually this work is less challenging than inventing new mathematical models or algorithms, it is of great importance for the future acceptance of timed automata based methods.
Another extension related to the Axxom case study is concerned with using probabilities to model machine failures, hopefully in a more refined way than the current macro level treatment of failures in Axxom tools.

At the beginning of the project, one class of problems of scheduling under uncertainty was explored, where the uncertainty is associated with task durations. For this problem, interesting results were obtained that propose an optimality criterion more refined than simple worst-case performance.
A scheduler synthesis algorithm was implemented for this class of problems, and on simple examples the schedules it generated turned out to be much more adaptive to the real durations of tasks than other types of solutions.
In the second and third year the complementary problem of scheduling under discrete uncertainty has been tackled. It covers the situation where the choice of tasks that need to be executed may depend on the results of other tasks, results that become known only after the termination of these tasks.
Such situations are very common in scheduling of real time programs, where the results correspond to testing conditions inside if statements, but it can also be found in manufacturing, for example when certain production steps may terminate successfully or fail.
A modelling framework for this problem, based on conditional dependency graphs that are transformed into timed automata with discrete adversaries, has been developed. Several exact and heuristic algorithms for synthesizing optimal and sub-optimal scheduling policies have been implemented.
The most recent progress with heuristic depth-first search allowed us to synthesize adaptive schedulers for problems with 400 tasks and up to 20 conditions.
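The scheduler-versus-outcome interaction can be pictured as a min-max recursion: the scheduler picks the next task, and whenever a conditional task finishes, the adversary picks which branch of tasks must still be executed. A single-machine sketch with hypothetical names, exponential-time and for illustration only (the project's actual algorithms work on timed automata with discrete adversaries):

```python
def optimal_worst_case(remaining, branches, duration, time=0):
    """Worst-case-optimal completion time for tasks on one machine.

    remaining: frozenset of tasks still to run.
    branches:  for each conditional task, the list of possible task
               sets the adversary may add once that task finishes.
    duration:  duration of each task.
    """
    if not remaining:
        return time
    best = float("inf")
    for t in remaining:                    # scheduler: choose next task
        done = time + duration[t]
        rest = remaining - {t}
        if t in branches:                  # adversary: choose an outcome
            val = max(optimal_worst_case(rest | frozenset(b),
                                         branches, duration, done)
                      for b in branches[t])
        else:
            val = optimal_worst_case(rest, branches, duration, done)
        best = min(best, val)
    return best

# A test task of duration 1; depending on its outcome the adversary
# forces either a short task (2) or a long one (5): worst case 1 + 5.
wc = optimal_worst_case(frozenset({"test"}),
                        {"test": [["short"], ["long"]]},
                        {"test": 1, "short": 2, "long": 5})
```

With several machines the order of decisions matters, and the adaptivity of a strategy, reacting to outcomes as they become known, is exactly what separates it from a fixed schedule.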

During the second and third years, the applicability of mixed integer linear programming (MILP) as a tool for TA analysis was explored.
An interesting observation is that relaxed models (with integers interpreted as reals) can sometimes give useful lower bounds on the cost of completing a partial solution, and can therefore be combined with reachability-based algorithms.
All these results were integrated into the prototype tool TAOPT. For bounded-horizon problems, questions of timed automata reachability are transformed into satisfiability problems for difference logic.
Scheduling problems translate naturally into this logic without going through automata. During the project we developed a series of solvers for this logic, culminating in the current versions of the solvers DL-SAT and jat.
The tool ELSE can serve as a front end for such solvers by generating efficient difference logic formulae (exploiting partial-order ideas) that correspond to bounded model-checking formulations for timed automata.
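At the core of such difference-logic solvers, a conjunction of constraints x − y ≤ c is satisfiable exactly when the corresponding constraint graph has no negative cycle, which a Bellman–Ford-style relaxation can check; the SAT layer on top handles the disjunctions. A minimal sketch of the theory check (illustrative, not the actual DL-SAT code):

```python
def diff_sat(constraints, variables):
    """Decide satisfiability of a conjunction of difference
    constraints x - y <= c by negative-cycle detection.

    Each (x, y, c) induces an edge y -> x of weight c; a virtual
    source with 0-weight edges to all variables is simulated by
    initializing every distance to 0 (Bellman-Ford).
    """
    dist = {v: 0 for v in variables}
    for _ in range(len(variables) + 1):
        changed = False
        for x, y, c in constraints:
            if dist[y] + c < dist[x]:
                dist[x] = dist[y] + c
                changed = True
        if not changed:
            return True      # stabilized: dist is a satisfying assignment
    return False             # still relaxing: negative cycle, unsatisfiable

# x - y <= 2, y - z <= 3, z - x <= -6 sums to -1 around the cycle,
# so this conjunction is unsatisfiable; dropping the cycle makes it
# satisfiable.
unsat = diff_sat([("x", "y", 2), ("y", "z", 3), ("z", "x", -6)],
                 {"x", "y", "z"})
sat = diff_sat([("x", "y", 2), ("y", "x", -1)], {"x", "y"})
```

Precedence constraints of a bounded-horizon scheduling problem (start times as variables, durations as the constants c) translate directly into such triples, which is why no automaton construction is needed.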

Working with discrete and continuous systems, and being exposed to control, verification, scheduling and other domains, one cannot but observe that many problems treated under different names in different disciplines resemble each other closely once viewed through an appropriate abstraction that filters out their domain-specific details.
Among these problems and techniques we mention the algorithmic approach to discrete systems verification by forward or backward fixpoint computation, the derived reachability algorithms for continuous and hybrid systems, bounded model checking (using satisfiability solvers to verify correctness for a bounded horizon), computational techniques for optimal control such as dynamic programming and model-predictive control, simulation, search methods in AI and Markov decision processes.
Much of our effort during the project was devoted to building a general unifying game-theoretic scheme of which various system design and validation problems, most notably scheduling under uncertainty, are concrete instances.