Periodic Reporting for period 2 - CAESAR (Integrating Safety and Cybersecurity through Stochastic Model Checking)
Reporting period: 2021-12-01 to 2023-05-31
The goal of the CAESAR project is to develop an effective framework for the joint analysis of safety and security risks. In particular, the project will work on solutions for three important challenges that the successful integration of safety and security faces:
- The complex interaction between safety and security, mapping how vulnerabilities and failures propagate through a system and lead to disruptions
- Efficient algorithms to compute system-level risk metrics, such as the likelihood and expected damage of disruptions. Such metrics are pivotal to prioritize risks and mitigate them via appropriate countermeasures
- Proper risk quantification methods. Numbers are crucial to devise cost-effective countermeasures. Yet, objective numbers on safety and (especially) security risks are notoriously hard to obtain.
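To make the second challenge concrete, a system-level risk metric such as expected damage can be illustrated with a small sketch. The scenario names, likelihoods, and damage figures below are invented for illustration and are not taken from the CAESAR project:

```python
# Hypothetical disruption scenarios: likelihood per year and damage (EUR)
# if the disruption occurs. All numbers are illustrative assumptions.
scenarios = {
    "sensor_failure":  (0.10, 20_000),
    "ransomware":      (0.02, 500_000),
    "combined_attack": (0.005, 2_000_000),
}

def expected_annual_damage(scenarios):
    """Sum of likelihood * damage over all disruption scenarios."""
    return sum(p * d for p, d in scenarios.values())

print(expected_annual_damage(scenarios))
# ≈ 22000.0  (0.10*20000 + 0.02*500000 + 0.005*2000000)
```

Such a metric lets an engineer rank scenarios by their contribution to the total and target countermeasures accordingly.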
The CAESAR project will address these challenges by novel combinations of mathematical game theory, stochastic model checking and the Bayesian, fuzzy, and Dempster-Shafer frameworks for uncertainty reasoning. Key outcomes are:
- An effective framework for joint safety-security analysis
- Scalable algorithms and diagnosis methods to compute safety-security risk metrics
- Stochastic model checking in the presence of uncertainty

CAESAR aims to yield breakthroughs not only in safety-security analysis, but also in quantitative analyses in other domains. It will make decision making on safety-security easier, more systematic and more transparent.
We performed a COVID-19-related case study to demonstrate the applicability of Boolean Fault Tree Logic, see WP4 (WP1.4). Furthermore, in our survey of joint safety-security interactions in WP2 we give an overview of earlier case studies (WP1.1). We are currently investigating the interest of Holland Datacenters in collaboration on a case study.
WP2 (Modelling safety-security integration): We performed two surveys on the state-of-the-art in safety and security modelling (WP2.1).
Two prominent modelling techniques are fault trees (FTs, for safety) and attack trees (ATs, for security). First, we compared the formalisms underlying FTs and ATs, allowing us to compare extensions and point out research gaps. Second, we surveyed the literature on formalisms for joint safety-security analysis, showing that most approaches are based on combining ATs and FTs, but the exact nature of safety-security interaction is still ill-understood and large case studies are still missing. These findings are an excellent starting point for further research in CAESAR.
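The bottom-up analysis that FTs support can be sketched in a few lines. The example below computes the failure probability of a top event, assuming independent basic events (a standard FT assumption); the tree structure and probabilities are hypothetical, not taken from the CAESAR surveys:

```python
from dataclasses import dataclass
from typing import List, Union

@dataclass
class Basic:
    prob: float  # failure probability of a basic event

@dataclass
class Gate:
    kind: str                # "AND" or "OR"
    children: List["Node"]

Node = Union[Basic, Gate]

def failure_prob(node: Node) -> float:
    """AND: all children fail; OR: at least one child fails.
    Assumes statistically independent basic events."""
    if isinstance(node, Basic):
        return node.prob
    probs = [failure_prob(c) for c in node.children]
    if node.kind == "AND":
        result = 1.0
        for p in probs:
            result *= p
        return result
    # OR gate: 1 minus the probability that every child survives
    survive = 1.0
    for p in probs:
        survive *= (1.0 - p)
    return 1.0 - survive

# Hypothetical top event: (pump A fails AND pump B fails) OR valve fails
tree = Gate("OR", [Gate("AND", [Basic(0.1), Basic(0.2)]), Basic(0.05)])
print(failure_prob(tree))  # ≈ 0.069
```

ATs admit an analogous bottom-up traversal, but with attacker-oriented metrics (cost, probability of success) instead of failure probabilities, which is one reason combined AT/FT formalisms dominate the surveyed literature.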
WP3 (Markov models with fixed delays):
Research on Markov models has not yet started. Instead, research has focused mostly on AT and FT models, as these are the main tools for modelling safety-security integration in the existing literature.
WP4 (Risk quantification):
We studied risk quantification through three major approaches. First, we developed Boolean Fault Tree Logic (BFL) to reason about FTs, and presented model-checking algorithms capable of processing BFL queries. We applied this to a COVID-19-related case study. Second, we developed new methods to efficiently calculate AT metrics. We applied these to create a generic framework to reason about uncertainty, and metric tradeoffs, in security settings. Third, in a series of papers we extended the state-of-the-art in reinforcement learning to model complex scenarios, which can be applied in security settings.
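A classic example of an AT metric of the kind addressed here is the minimal cost for an attacker to reach the root, computed bottom-up: minimum over the children of an OR gate, sum over the children of an AND gate. This is a generic sketch of that standard computation, not the project's own algorithm, and the tree and costs are illustrative assumptions:

```python
def min_attack_cost(node):
    """Minimal attacker cost to achieve `node`.
    A node is ('leaf', cost) or ('AND'|'OR', [children])."""
    kind = node[0]
    if kind == "leaf":
        return node[1]
    costs = [min_attack_cost(child) for child in node[1]]
    # AND: the attacker must achieve every child; OR: the cheapest one.
    return sum(costs) if kind == "AND" else min(costs)

# Hypothetical root goal: (phish admin AND install backdoor) OR bribe insider
at = ("OR", [("AND", [("leaf", 100), ("leaf", 300)]), ("leaf", 500)])
print(min_attack_cost(at))  # min(100 + 300, 500) = 400
```

This bottom-up scheme is efficient on tree-shaped models; part of the algorithmic challenge in the literature lies in metrics and model shapes (e.g. shared subtrees) where such a single pass no longer suffices.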
Our plan is to investigate additional case studies, e.g. with Holland Datacenters and/or Electricite de France. We are also modeling safety-security interactions via Bayesian networks.
WP2
We will extend our Boolean Fault Tree Logic with probability. Further, we will develop a pattern-based language that enables risk engineers to state fault tree properties by filling in templates.
WP3
An important goal will be to develop model checking algorithms for the class of MADD models.
WP4
We will further extend the reinforcement learning techniques to handle uncertainty in attack-fault trees and other safety-security formalisms. In particular, we will investigate the scenario optimization approach to quantify uncertainty.