Community Research and Development Information Service - CORDIS

Final Report Summary - STREST (Harmonized approach to stress tests for critical infrastructures against natural hazards)

Executive Summary:
This report summarises the research results of the STREST FP7 project and its impact on the application of stress tests for non-nuclear critical infrastructures (CIs). It responds to the European Programme for Critical Infrastructure Protection and the Internal Security Strategy, which call for guidelines for multi-hazard disaster management, aiming to improve the protection of European and national CIs and the resilience of society to natural and man-made disasters. Furthermore, STREST takes into account the requirements prescribed by the Directive for the reduction of the consequences of accidents involving dangerous substances. Improved effectiveness of systems for preventing, preparing for and responding to natural and man-made disasters is also the aim of the European Union Civil Protection Mechanism. At the global level, the substantial reduction of disaster damage to CIs and disruption of basic services is one of the seven targets of the Sendai Framework for disaster risk reduction. This makes the STREST project a key component in preparing for potential European Union policy changes in the areas of infrastructure, disaster risk reduction and societal resilience. Recent events have confirmed the potential for the catastrophic impact of natural hazards on CIs, with consequences ranging from health impacts and environmental degradation to major economic losses, and with cascading effects playing a major role in the overall risk.
STREST developed innovative hazard models to include in stress tests of CIs to tackle the problem of extreme events, with a focus on earthquakes, floods (tsunamis, dam failures) and domino effects (Natech, system failures). Earthquake models considered epistemic uncertainties, earthquake rupture directivity, cascading and clustering, spatial correlations, site/geotechnical effects, and permanent ground displacement. Inter-hazard interactions included flooding from dam failure, tsunamis due to earthquakes, and industrial accidents due to both earthquakes and tsunamis. Probabilistic risk models and tools are essential for analysing and quantifying the consequences of civil infrastructure damage due to extreme natural events. Yet, they are not widely used in risk management of non-nuclear CIs. The project filled this gap by producing fragility functions for components of petrochemical plants, dams, harbours, gas/oil distribution networks (e.g. storage tanks, cranes, pipelines) and common industrial buildings with respect to earthquakes, floods and tsunamis, and demonstrating how these component fragilities can be integrated at the system level. The interdependencies within a CI and possible cascading failures may have an important impact on society (public safety and higher-level societal functions) beyond the civil infrastructure itself, as observed in past events and demonstrated in the STREST exploratory applications. Definitions, models, probabilistic assessments and acceptance criteria for societal resilience still remain in the research domain. The engineering risk-based multi-level stress test methodology developed by STREST enhances the evaluation of the risk exposure of CIs to natural hazards.
In order to account for the diversity of CIs, the wide range of potential consequences of failure, the types of hazards and the available human and financial resources, each stress test level is characterised by a different scope (component or system) and by a different complexity of the risk analysis. The outcome of a stress test is a grade conveying where the posed risk lies in relation to pre-determined risk acceptance criteria (from AA/A – pass to C – fail). The grading system is based on different hazard and risk metrics and is independent of the class of the infrastructure and/or of the underlying hazard and risk drivers, enabling decision-making on cost-effective mitigation measures across diverse CIs. A petrochemical plant (Italy), a hydropower dam (Switzerland), hydrocarbon pipelines (Turkey), a gas storage and distribution network (Netherlands), a harbour (Greece) and an industrial district (Italy) were selected for exploratory applications of the stress test methodology, which illustrated how the developed tools were able to identify extremes and disaggregate risks to specific scenarios of hazard and component failures.
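The grading logic described above can be sketched as a simple threshold comparison; the risk metric, threshold values and grade labels below are illustrative placeholders, not the project's actual acceptance criteria.

```python
# Hedged sketch of the stress-test grading idea: map a computed risk metric
# onto the AA/A-to-C scale. Thresholds here are hypothetical; the real
# criteria are CI- and hazard-specific and set by the methodology.

def stress_test_grade(annual_failure_prob, acceptable, tolerable):
    """Grade a CI's computed risk against pre-determined acceptance criteria.

    acceptable: risk level below which the CI clearly passes (grade AA/A)
    tolerable:  risk level above which the CI fails (grade C)
    Values in between receive an intermediate grade (here labelled B).
    """
    if annual_failure_prob <= acceptable:
        return "AA/A (pass)"
    elif annual_failure_prob <= tolerable:
        return "B (conditional - mitigation required)"
    return "C (fail)"

# Purely illustrative numbers:
print(stress_test_grade(5e-5, acceptable=1e-4, tolerable=1e-3))  # AA/A (pass)
print(stress_test_grade(5e-3, acceptable=1e-4, tolerable=1e-3))  # C (fail)
```

Because the grade is expressed on a common scale regardless of the underlying risk metric, results remain comparable across very different CI classes, which is the point of the grading system.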

Project Context and Objectives:
Critical infrastructures (CIs) are the backbone of modern society and provide many essential goods and services, e.g. electrical power, telecommunications, water, etc. As such, they have become highly integrated and intertwined. These growing interdependencies make our complex, evolving society more vulnerable to natural hazards. Recent events, such as the 2011 Fukushima disaster, have shown that cascading failures of CIs have the potential for multi-infrastructure collapse and widespread socioeconomic consequences. Moving towards a safer and more resilient society requires i) improved and standardised tools for hazard and risk assessment, in particular for low-probability high-consequence events (so-called extreme events), and ii) a systematic application of these new procedures to whole classes of CIs. Among the most important tools to accomplish this are stress tests, designed to evaluate the vulnerability and resilience of CIs under extreme conditions. Following the stress tests recently performed for the European nuclear power plants, it is urgent to carry out appropriate stress tests for all other CI classes.

The European Programme for Critical Infrastructure Protection adopts an all-hazards approach with the general objective of improving the protection of CIs in the European Union. The planned actions include the collection of best practices, risk assessment tools and methodologies, studies concerning interdependencies, identification and reduction of vulnerabilities. Besides, increasing Europe’s resilience to natural and man-made disasters is among the strategic objectives of the Internal Security Strategy, which asks for the development of guidelines for all-hazards disaster management and the establishment of a risk management policy. In this perspective, the Directive on the identification and designation of European critical infrastructures aims to improve these infrastructures to better protect the safety, and fulfil the needs, of citizens. For each European CI, an operator security plan must be put in place and reviewed regularly. Member States are to report every two years on the risks, threats and vulnerabilities the different European CI sectors are facing.
The ‘Seveso’ Directive, on the other hand, lays down rules for the prevention of accidents involving dangerous substances and the limitation of their consequences for human health and the environment. Operators are requested to produce and regularly update safety reports, which include, inter alia, the identification and analysis of risks, as well as measures to limit the consequences of a major accident. Aiming at the reduction of the adverse consequences for human health, the environment, cultural heritage and economic activity associated with floods, the Floods Directive requires the development of flood hazard and risk maps and of risk management plans. Lastly, the Union Civil Protection Mechanism aims to improve the effectiveness of systems for preventing, preparing for and responding to natural and man-made disasters. The specific common objectives are to: i) achieve a high level of protection against all kinds of natural and man-made disasters; ii) enhance preparedness to respond to disasters; iii) facilitate rapid and efficient response; and iv) increase public awareness and preparedness for disasters. At the global level, the substantial reduction of disaster damage to CIs and disruption of basic services is one of the seven targets of the Sendai Framework for disaster risk reduction. Besides, the STREST project contributes to the development of sustainable and resilient infrastructures, both regional and transnational, which is a specific target of the UN Sustainable Development Goal 9 ‘Build resilient infrastructure, promote sustainable industrialization and foster innovation’.

CIs are vulnerable to natural hazards. Retrospective analyses of selected major industrial accident databases showed that 2 to 5 % of reported accidents with hazardous materials releases, fires or explosions were caused by natural hazards. More specifically, these analyses identified 79 records of accidents triggered by earthquakes in the 1930-2007 period and 272 records of accidents triggered by flooding in the 1960-2007 period. Modern strategies to reduce vulnerabilities and increase the resilience, adaptive capacity and efficiency of CIs – as well as the provision of related analytical instruments – have to follow an integrative approach. However, CIs are usually engineered and operated in an isolated manner and insufficient attention has been devoted to the interdependencies between them, as well as to the interplay with their social and economic environment. Therefore, little is known about how to model and eventually improve their resilience. This requires a profound systemic understanding of the intertwined infrastructures and their collective performance. Previous research projects and studies advanced the knowledge in seismic, tsunami, permanent ground displacement, induced seismicity and flood hazard assessment, considering concatenated events and geographically extended areas. The STREST project targeted specific knowledge gaps identified in recent disciplinary hazard studies with the goal of harmonizing hazard assessment conducted at different scales (local and regional) and for different natural hazard initiators, including potential extreme events.
The vulnerability and risk assessment within the framework of performance-based earthquake engineering has received a great deal of research attention in recent years, especially for buildings. STREST addressed the need to develop vulnerability and loss models for CIs considering multiple hazards and cascading effects. The engineering risk-based multi-level stress test methodology for non-nuclear CIs was developed by STREST. In order to account for the diversity of CIs, the wide range of potential consequences of failure, the types of hazards and the available human and financial resources, each stress test level is characterized by a different scope (component or system) and by a different complexity of the risk analysis. The outcome of the stress test is a grade conveying where the risk posed to the CI lies with respect to pre-determined risk acceptance criteria. Fundamental research is needed to include resilience aspects in stress tests for CIs. STREST developed a conceptual framework to address the resilience of infrastructures, defined quantitative resilience metrics and proposed a method to assess them.
At the European level, the state of the art for stress tests is defined by the post-Fukushima stress tests for nuclear power plants and by the Seveso Directive for major-accident hazards involving dangerous substances. The STREST project advanced the state of the art by proposing a multi-level stress test methodology and framework, built on a harmonized approach to hazard and vulnerability assessment and quantification.
The consistent design of stress tests and their application to specific infrastructures, to classes of infrastructures as well as to whole systems of interconnected infrastructures, is a first step required to verify the safety and resilience of individual components as well as of whole systems. Obtaining such knowledge by carrying out appropriate stress tests for all classes of CIs is a clear goal and an urgent need for Europe. STREST followed five overarching objectives, aiming to generally improve the state of knowledge and to provide the basis for future European Union policies for the systematic implementation of stress tests for non-nuclear CIs. The STREST objectives can be summarized as follows:
1. Establish a common and consistent taxonomy of CIs, their risk profiles and their interdependencies, with respect to the resilience to natural hazard events;
2. Develop a rigorous common methodology and a consistent modelling approach to hazard, vulnerability, risk and resilience assessment of low-probability high-consequence (i.e., extreme) events used to define stress tests;
3. Design a stress test methodology and framework, including a grading system (A – pass to C – fail), and apply it to assess the vulnerability and resilience of individual CIs as well as to address the first level of interdependencies among CIs from local and regional perspectives;
4. Work with key European CIs to apply and test the developed stress test framework and models to specific real infrastructures chosen to typify general classes of CIs;
5. Develop standardised protocols and operational guidelines for stress tests, disseminate the findings of STREST, and facilitate their implementation in practice.
To do so, CIs were first categorized into three different classes: (A) individual, single-site infrastructures with high risk and potential for high local impact and regional or global consequences; (B) distributed and/or geographically-extended infrastructures with potentially high economic and environmental impact; and (C) distributed, multiple-site infrastructures with low individual impact but large collective impact or dependencies. The STREST project aimed to produce fundamental knowledge beyond the state-of-the-art in hazard, vulnerability, risk and resilience assessment of non-nuclear CIs and systems of infrastructures for extreme events, in particular by developing innovative probabilistic models, often simulation-based, to take into account the complexity of hazard and risk processes, including cascading events, and by offering engineering tools that are directly applicable by the industry.

Project Results:
Stress test results of nuclear facilities clearly indicate that particular attention needs to be paid to periodic safety reviews, including the re-assessment of hazards. The European and international authorities (Western European Nuclear Regulators Association – WENRA and International Atomic Energy Agency – IAEA, respectively) promote the use of probabilistic methods for seismic and flood hazard assessment, in order to define the design ground motion and flood parameters. However, the design and operation of each plant must also be able to deal with unforeseen hazards (e.g. earthquake, flooding, extreme weather and accidents) that were not considered in the original design. The project's review of stress tests on nuclear facilities indicates that further efforts are required towards harmonising, across European countries, the methods for identifying natural hazards for critical infrastructures (CIs) and for safety assessment in case of beyond-design (cliff-edge) events, considering common-cause failures for multiple-unit sites and multiple sites. Measures that are implemented in nuclear facilities and may be used in non-nuclear CIs include:
1. Bunkered system or hardened safety core, designed to resist anticipated external events and equipped with all components necessary to provide power and cooling capacities in case of failure of the primary safety systems;
2. The ‘dry site’ concept for the plant layout, as a defence against flooding;
3. Active tsunami warning system at coastal sites, coupled with the provision for immediate operator action;
4. Seismic monitoring systems for warning.
The topics of risk and hazard are introduced in national provisions in European countries, principally in relation to the use, storage and transport of dangerous substances under the Seveso Directive and, in a number of countries, also with respect to the protection of CIs. Overall, the shift from ‘absolute safety’, which is unattainable, to the more realistic concept of ‘risk awareness’ has been made. State-of-the-art guidelines are available, which have to be considered by governments and operators, and provide quantitative, semi-quantitative and qualitative concepts for risk assessment. Where methods are prescribed, they comprise the following steps:
1. Identify the hazards;
2. Identify the threat or cause that might release the hazards, e.g. industrial accident, natural or man-made disaster, and assess its likelihood;
3. Assess the extent of damages in terms of casualties, disruption of service, economic losses, etc.;
4. Evaluate the scenarios, considering the likelihood of occurrence and the severity of the consequences, and the need to implement any measures;
5. Decide which actions should be taken to cope with the risk.
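As a rough illustration, the five prescribed steps can be mocked up as a scenario risk screen; the hazards, triggers, likelihoods, consequence scores and decision threshold below are invented for the example, not values from any guideline.

```python
# Hedged sketch of the five-step risk assessment as a scenario screen.
# All scenario data are illustrative placeholders.

scenarios = [
    # (hazard, trigger/cause, annual likelihood, consequence score 1-5)
    ("toxic release", "earthquake", 1e-4, 5),   # steps 1-3: hazard, cause, damage
    ("service outage", "flood", 1e-2, 2),
    ("tank fire", "lightning", 1e-3, 4),
]

def evaluate(scenarios, risk_threshold=1e-3):
    """Rank scenarios by likelihood x severity and flag those needing measures."""
    results = []
    for hazard, trigger, likelihood, severity in scenarios:
        risk = likelihood * severity            # step 4: evaluate the scenario
        needs_action = risk >= risk_threshold   # step 5: decide on measures
        results.append((hazard, trigger, risk, needs_action))
    return sorted(results, key=lambda r: r[2], reverse=True)

for hazard, trigger, risk, needs_action in evaluate(scenarios):
    print(f"{hazard} ({trigger}): risk={risk:.1e}, measures needed: {needs_action}")
```

Real prescriptions use far richer consequence dimensions (casualties, disruption of service, economic losses), but the ranking-then-deciding structure is the same.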

Natural hazards are partly covered by the Eurocodes, which prescribe actions on structures and rules for structural design. Regarding earthquakes in particular, seismic hazard is defined in the National Annexes to Eurocode 8, while the SHARE project represents a trend-setting approach to seismic hazard harmonization in Europe. The harmonized approach for stress tests developed by the STREST project, comparable to the Eurocodes in the building industry, will significantly increase public awareness of risks related to CIs in the European Union and will provide tools for risk assessment.
Recent events have highlighted the potential for catastrophic natural-hazard impacts on CIs, with consequences ranging from health impacts and environmental degradation to major economic losses due to damage to assets and business interruption. For major earthquakes, floods and tsunamis, there is a high risk of multiple and simultaneous impacts at a single infrastructure or on several infrastructures over a potentially large area. The STREST review also highlighted the major risk of cascading effects, such as the release and dispersion of flammable substances and the reduction of production due to impacts at suppliers of raw materials, or because products cannot be delivered where major transport hubs are affected by the natural hazard. Although the ripple effects on the economy may reach global proportions, resulting in a shortage of raw materials or intermediate products in the manufacturing industry and causing price hikes, the vulnerability introduced into infrastructure systems by interconnectedness is not routinely assessed. Besides, emergency response in case of large-scale natural-hazard impact usually suffers from competition for scarce response resources, where the highest priority is given to preserving human life and recovering essential infrastructures. The severity of some of the analysed natural hazards was unexpected but, in most cases, not impossible to foresee. Nevertheless, they caused a significant amount of damage and infrastructure service outage. This indicates an underestimation of risk, which resulted in the settlement of natural-hazard-prone areas, insufficient design, outdated hazard analyses and a lack of preparedness. In order to avoid future disasters, the vulnerability of key CIs to natural hazards and the consequences of impact should be determined. Furthermore, it is essential to implement a scheme of regular updating (e.g. every five or ten years) of hazard and risk analyses.
Risk analysis is required to understand system weaknesses and to prioritise prevention and mitigation measures. This should be coupled with a cost-benefit analysis (for more details, see Deliverables D2.x).

Extreme hazardous events can be considered as the consequence of three different processes (Mignan et al., 2017): (i) they can emerge naturally from randomness; these are events that occur by ‘lack of chance’ and populate the tail of statistical distributions; (ii) extremes can be due to physical processes that amplify their severity, for example domino effects; (iii) finally, they can be due to site-specific conditions that again amplify severity locally. These three general processes, which can be intertwined, have been considered in STREST by investigating the following themes with respect to earthquakes, floods and tsunamis (Fig. 1):
The lack of available data on extreme events requires a full exploration of the epistemic uncertainties. EU@STREST is a coherent process to ensure an improved, standardized and robust management of this uncertainty within a project aiming to perform a stress test. The process deals with the uncertainty emerging from the hazard selection, the implementation of alternative models and the exploration of the tails of distributions. It also takes into account the different views and opinions of the involved experts and the potential budget limitations of stress tests for non-nuclear CIs (see more on this in the ST@STREST description). EU@STREST defines a general framework for the assessment of these uncertainties in order to increase the reliability of stress test results. Treatment and quantification are usually performed by means of well-known methods such as Logic Trees (LT) and the Bayesian/Ensemble Approach (BEA) (see Marzocchi et al. (2015), Selva et al. (2016) and Darcourt et al. (2016) for the seismic, tsunami and flood cases, respectively). It is however important that these results do not depend on specific subjective choices of the practitioner performing the assessment. In order to avoid a priori control of the results, a minimum level of involvement of multiple experts must be guaranteed both in setting up the methodological framework of the study and in performing the calculations. The quantification of epistemic uncertainties should not depend on a specific analyst. Due to budget limitations, the inclusion of a very large number of experts is generally not conceivable. Based on different expert judgment techniques (Classical Expert Elicitation – cEE – and Multiple Expert Integration – MEI), the process guarantees the minimum required level of involvement of multiple experts from the community while accounting for this economic limitation of stress tests.
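A minimal sketch of the logic-tree side of this treatment, assuming three invented hazard curves as epistemic branches: the weighted mean gives the central estimate, while percentiles across branches give a first, ensemble-style view of the epistemic spread.

```python
import numpy as np

# Hedged sketch: combine alternative hazard models via logic-tree weights.
# The ground-motion levels, rates and weights are invented placeholders.

pga = np.array([0.1, 0.2, 0.4, 0.8])   # ground-motion levels (g)

# Annual exceedance rates from three alternative models (epistemic branches)
models = np.array([
    [1e-2, 3e-3, 6e-4, 8e-5],
    [2e-2, 5e-3, 1e-3, 2e-4],
    [8e-3, 2e-3, 3e-4, 3e-5],
])
weights = np.array([0.5, 0.3, 0.2])    # logic-tree branch weights (sum to 1)

mean_curve = weights @ models          # weighted-mean hazard curve
# Percentiles across branches approximate the epistemic spread,
# in the spirit of the ensemble (BEA) reading of the logic tree.
p16, p84 = np.percentile(models, [16, 84], axis=0)

for x, m in zip(pga, mean_curve):
    print(f"PGA {x:.1f} g: mean annual exceedance rate {m:.2e}")
```

In practice the branch set and weights come from multiple experts (the cEE/MEI techniques above), precisely so the result does not hinge on one analyst's choices.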
EU@STREST follows the state-of-the-art methodological and procedural guidance from the European Nuclear Safety Regulation Group (ENSREG) and the IAEA. The participants playing a core role in the process are: (i) the Technical Integrator team (TI), (ii) the Review Panel (RP), and (iii) the Panel of Experts (PoE). With the goals of transparency, independence between the participants and responsibility during the stress tests, the process is divided into four main stages: Phase 0: Preparation, Phase 1: Evaluation, Phase 2: Integration, and Phase 3: Finalization. Guidelines, forms and questionnaires have been made available (see Deliverable D3.1). In terms of the selection of hazards and hazardous phenomena to be included in the analysis, regulatory concerns and available resources in the nuclear sector may allow treating most of the hazards, but this may not be possible in the non-nuclear sector due to lower regulatory pressure and limited funding. It is therefore imperative to prioritize the natural hazards of interest. In this regard, a multi-hazard and multi-risk assessment method is presented in STREST, built upon results from the MATRIX project and developed further here.
History offers many examples of hazard interactions and their impact on CIs, which must be considered in risk assessment following the post-Fukushima recommendations, as shown in the STREST review above. In order to analyse potential hazard cascading, the generic multi-risk framework (GenMR), proposed in the scope of the New Multi-Hazard and Multi-Risk Assessment Methods for Europe (MATRIX) project, was further developed in STREST (Matos et al., 2015; Mignan et al., 2016; in press – for domino effects in network systems, the SYNER-G project approach was additionally employed in STREST; see further below). The aim of GenMR is to help better understand the different aspects of multi-hazard and multi-risk, to define a common terminology and to guide the integration of knowledge from various types of models into the same framework. The methodology is based on the sequential Monte Carlo method and on a variant of a Markov chain to simulate cascading event scenarios. In this project, the focus was on three types of hazard interactions: (1) “intra-event” earthquake triggering to evaluate the maximum magnitude Mmax of cascading fault ruptures (Mignan et al., 2015), (2) “intra-hazard” earthquake triggering to evaluate earthquake spatio-temporal clustering (i.e., large aftershocks) (Mignan et al., in press) and (3) various “inter-hazard” interactions at dams (impact of earthquakes, floods, internal erosion, and malfunctions on dam and foundation, spillway, bottom outlet and hydropower system) (Matos et al., 2015). The Hazard Correlation Matrix, part of the GenMR framework to estimate event correlations, can also be used as a form/questionnaire to generate multi-hazard scenarios (Mignan et al., 2016), easily transportable to the STREST stress test method ST@STREST. Overall, all of the proposed and tested multi-hazard models showed that considering such aspects leads to more extreme events.
Fault rupture cascading yields an increase of Mmax, not considered, for example, in SHARE. Earthquake clustering leads to a migration of the hazard (and thus risk) towards lower probabilities but higher consequences. Finally, the multi-hazard topology of a dam interface increases the probability of failure of the dam (see details in Deliverable D3.5; see also the risk section below).
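The cascade-sampling idea behind GenMR can be caricatured with a one-step triggering matrix inside a Monte Carlo loop; the events and triggering probabilities below are invented, and the real framework additionally handles time, severity and many more interaction types.

```python
import random

# Hedged sketch of Markov-chain-style cascade sampling, GenMR-style.
# trigger[a][b] = P(b is triggered | a occurred) -- illustrative values only.
trigger = {
    "earthquake":          {"tsunami": 0.10, "dam_failure": 0.02,
                            "industrial_accident": 0.05},
    "tsunami":             {"industrial_accident": 0.10},
    "dam_failure":         {},
    "industrial_accident": {},
}

def sample_cascade(initial, rng):
    """Follow one cascade: each occurred event may trigger further events."""
    chain, frontier = [initial], [initial]
    while frontier:
        current = frontier.pop()
        for nxt, p in trigger[current].items():
            if nxt not in chain and rng.random() < p:
                chain.append(nxt)
                frontier.append(nxt)
    return chain

rng = random.Random(42)
runs = [sample_cascade("earthquake", rng) for _ in range(100_000)]
multi = sum(len(c) > 1 for c in runs) / len(runs)
print(f"fraction of earthquakes that cascade: {multi:.3f}")
```

Even with modest one-step triggering probabilities, a non-trivial fraction of initiating events grows into multi-event scenarios, which is precisely why ignoring interactions underestimates the extremes.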
Specifically for earthquakes: near the source of an earthquake (relative to the rupture’s size), the seismic demand can be systematically different from, and larger than, that of so-called ordinary records, which accordingly affects the structural response of constructions. These phenomena are generally called near-source (NS) effects. NS seismic effects include, among others, forward-directivity. This effect is a constructive interference of waves that delivers most of the seismic energy in preferential directions in a single pulse-like ground motion at low frequency, which is very detrimental to structures. If critical structures are close to active faults, particular attention is required due to these NS effects. In NS conditions, both ground motion and seismic structural response may show systematic spatial variability, which classical Probabilistic Seismic Hazard Assessment (PSHA) is not able to explicitly capture. The STREST project presents a framework and new guidelines for taking forward-directivity into account in PSHA (i.e., NS-PSHA) and in non-linear static procedures with respect to the inelastic demand associated with forward-directivity. In this context, a method was presented for the implementation of the Displacement Coefficient Method (DCM) towards estimating NS seismic demand, making use of the results of NS-PSHA and a semi-empirical equation for the NS-FD inelastic displacement ratio. Application of the proposed approach showed that forward-directivity could have an important impact on near-source structural demand, which corroborates the need for this analysis for CIs located near active seismic faults (Baltzopoulos et al., 2015). Near-fault ground-motion databases are, however, still rather poor. Because of this lack of data, the understanding of near-fault shaking effects (e.g. hanging-wall effects, high-frequency directivity effects) needs to be improved.
New methods used to evaluate these near-fault effects will then be developed and it is recommended to carefully take into account these future new developments. Most of the new methods will likely be implemented within the OpenQuake software (for more details, see Deliverable D3.3).
When performing stress tests and seismic hazard analyses for distributed and/or geographically extended infrastructures or lifeline systems, several particular aspects of seismic ground motion behaviour need to be accounted for. For these infrastructure types, the consideration of site-to-site variation (spatial correlation) in dynamic ground motion intensity measures (GMIMs) (e.g., PGA, Sa) is important for realistic probabilistic seismic hazard and risk assessment. The interdependency between the GMIMs (cross-correlation) is also relevant for such structural systems because the vulnerability of some of their components is sensitive to the conditional occurrence of multiple GMIMs. In addition to these two phenomena, proper amplitude estimation of static (permanent fault displacement) and dynamic GMIMs is crucial for geographically distributed buildings or geographically extended lifelines located in close proximity to fault segments. Monte Carlo (MC) simulation techniques have been developed in STREST for incorporation into probabilistic hazard and risk calculations as an alternative to conventional PSHA. Such techniques provide added flexibility, transparency and robustness to the consideration of the aforementioned physical models. The proposed method uses the multi-scale random fields (MSRFs) technique to incorporate spatial correlation and near-fault directivity while generating MC simulations to assess the probabilistic seismic hazard of dynamic GMIMs. In addition, MC simulations for permanent fault displacement hazard were implemented to account for surface rupture, mapping accuracy and occurrence probabilities of on- and off-fault displacements. These steps are implemented via a suite of codes developed on the MATLAB platform (Akkar and Cheng, 2015; Cheng and Akkar, in press).
The spatial variability of ground motion assessment has been implemented in the open-source code for probabilistic seismic hazard and risk analysis OpenQuake-engine (ePSHA workflow). In parallel, closed-form solutions for multi-site probabilistic seismic hazard analysis were developed and probabilistically rigorous insights into the form of dependence among hazards at multiple sites were derived (Giorgio and Iervolino, 2016) (see Deliverable D3.2 for more details).
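A toy version of such a Monte Carlo approach, assuming an exponential spatial-correlation model with a 10 km range and invented site data: correlated lognormal ground-motion fields make joint exceedances at nearby sites markedly more frequent than independence would predict, which is why spatial correlation matters for distributed systems.

```python
import numpy as np

# Hedged sketch: MC simulation of spatially correlated ground-motion fields.
# Site coordinates, median PGAs, sigma and the 10 km correlation range are
# all assumptions for illustration, not STREST/MSRF parameters.

rng = np.random.default_rng(1)
sites = np.array([[0.0, 0.0], [5.0, 0.0], [20.0, 0.0]])   # site coords (km)
median_pga = np.array([0.30, 0.25, 0.10])                 # g, e.g. from a GMPE
sigma = 0.6                                               # ln-units std. dev.

# Exponential decay of correlation with inter-site distance
dist = np.linalg.norm(sites[:, None, :] - sites[None, :, :], axis=-1)
corr = np.exp(-dist / 10.0)                               # 10 km range
L = np.linalg.cholesky(sigma**2 * corr)

n_sims = 50_000
eps = rng.standard_normal((n_sims, len(sites))) @ L.T     # correlated residuals
pga = median_pga * np.exp(eps)                            # lognormal PGA fields

# Nearby sites exceed a threshold jointly more often than independence predicts
joint = np.mean((pga[:, 0] > 0.4) & (pga[:, 1] > 0.4))
indep = np.mean(pga[:, 0] > 0.4) * np.mean(pga[:, 1] > 0.4)
print(f"joint exceedance {joint:.3f} vs independent product {indep:.3f}")
```

For a lifeline whose performance depends on several components failing together, assuming independence between sites would visibly understate system-level risk.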
Seismic site effects are related to the modification of seismic waves (e.g., amplitudes, durations) in the superficial layers due to local geological or topographical conditions. These variations can strongly influence the nature and severity of shaking at a given site (Smerzini et al., 2016). It is therefore essential to assess these local effects for every CI, since earthquake damage may be locally aggravated. The degree of complexity (and associated necessary funding) of available site effect evaluation methods is, however, highly variable. STREST presents different approaches and guidelines for the consideration of site effects with an increasing level of detail and complexity.
Complexity levels 0 and 0.5 correspond to generic or partially site-specific methodologies where the site effect is taken into account by proxy and correction factors, based on the direct use of the site amplification defined within Ground Motion Prediction Equations (GMPEs) (usually via Vs30) or on an a posteriori modification of the site term using Site Amplification Prediction Equations (SAPEs). Generic simplified approaches are usually employed in regional hazard assessments and are therefore not recommended for CI hazard estimation. The STREST results show that the main drawback of these approaches, from a safety perspective, is the risk of severely underestimating the specific amplification of the site under study. At levels 1 and 2, the whole amplification complexity is studied in the hazard definition. These levels are based on a complete consideration of the local site response and the associated uncertainties, and therefore require a detailed characterisation of sites as well as host-to-target adjustments. They may be based on an instrumental approach, where seismological instrumentation on the site and in its vicinity measures and records ground motions from “real” earthquakes, allowing the implementation of empirical models. The amplification, or the resulting site-specific ground motion, can also be assessed through numerical simulation of the wave propagation phenomena occurring at the site. Linear simulation is recommended for simpler local geologies and moderate seismic activity, but numerical simulations allow the consideration of “extreme” cases going well beyond linear soil behaviour. For those cases, non-linear simulation is the only way to estimate the modifications of site response linked to soil non-linearity.
Of course, the level of complexity of characterisation/instrumentation depends on the choice of site effect evaluation method, but characterisation/instrumentation is also mandatory to obtain the minimum information needed to inform that choice in the first place; the whole process is therefore iterative. In-situ instrumentation provides important feedback (see more details in Deliverable D3.4).
Site-specific effects were also considered for the case of tsunamis. A site-specific Probabilistic Tsunami Hazard Assessment (PTHA) involves a very heavy computational effort, since it requires a full source-to-site numerical tsunami simulation on a high-resolution digital elevation model for every potential source scenario considered. In the case of earthquake-induced tsunamis (SPTHA), the computational burden increases heavily, since both local and distant sources, as well as the full aleatory variability of the seismic source, must be taken into account. At the same time, the analysis of the epistemic uncertainties becomes critical.
The STREST developments include a refined methodology that reduces the computational cost while still allowing a full quantification of epistemic uncertainties. The procedure, described in Lorito et al. (2015) within the parallel ASTARTE project, allows a significant and consistent reduction of the epistemic uncertainty associated with probabilistic inundation maps, as it balances the completeness of the earthquake model against computational feasibility. In practice, it performs high-resolution inundation simulations on realistic topo-bathymetry only for the relevant seismic sources (Selva et al., 2016; see also details in Deliverable D3.4).
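The source-filtering idea can be sketched as follows: a cheap offshore proxy screens the full scenario set, and only the retained scenarios are passed on to costly high-resolution inundation runs. The proxy formula, the scenario fields and the threshold below are toy assumptions for illustration, not the Lorito et al. procedure itself:

```python
def select_relevant_sources(scenarios, offshore_proxy, threshold):
    # Stage 1 of a two-stage SPTHA: keep only scenarios whose cheap
    # offshore proxy exceeds a threshold; only these would be passed to
    # expensive high-resolution inundation simulations (stage 2).
    return [s for s in scenarios if offshore_proxy(s) >= threshold]

def offshore_proxy(s):
    # Toy proxy: wave impact grows with magnitude, decays with distance.
    return 10 ** (s["Mw"] - 8.0) / (s["dist_km"] / 100.0)

scenarios = [{"Mw": 7.0, "dist_km": 400},
             {"Mw": 8.5, "dist_km": 120},
             {"Mw": 9.0, "dist_km": 800}]
relevant = select_relevant_sources(scenarios, offshore_proxy, threshold=0.5)
```

The saving comes from the pre-selection: the full scenario set is evaluated only with the cheap proxy, while the high-resolution model runs on the reduced set.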
The last hazard considered in STREST is induced seismicity: its increased occurrence has raised public concern, with earthquakes now occurring in regions where little or no seismicity was originally expected. In those regions, the building stock is usually more vulnerable, since no seismic design rules were applied. This seismicity, due to a wide range of anthropogenic activities such as fluid injection and extraction, hydraulic fracturing and mining, can have an important impact on the built environment (e.g. the Groningen gas region considered in STREST). Within the project, OpenQuake-Engine, the open-source code for probabilistic seismic hazard and risk analysis, was adapted for application to induced-seismicity hazard. The adapted engine produces a Monte Carlo-based probabilistic seismic hazard assessment in which the rate, location and magnitude of the earthquakes vary in response to a dynamically changing pressure field, by adopting published geo-mechanical earthquake seed models. The adaptations were largely centred on the implementation of several new ground-motion models developed specifically for induced-seismicity applications, and on enabling the engine to perform seismic hazard calculations with a geo-mechanical seed model (see Deliverable D3.6).
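The Monte Carlo coupling of a time-varying pressure field to earthquake rates can be sketched as follows; the linear rate-pressure link and all figures are illustrative assumptions, not the Groningen geo-mechanical seed model or the actual OpenQuake implementation:

```python
import random

def simulate_induced_catalogue(pressures, rate_per_mpa, seed=0):
    # Monte Carlo sketch: the expected annual event rate tracks a
    # dynamically changing reservoir pressure (one value per yearly step).
    # The linear rate-pressure link is an assumed stand-in for a
    # geo-mechanical seed model.
    rng = random.Random(seed)
    catalogue = []
    for step, pressure_mpa in enumerate(pressures):
        lam = rate_per_mpa * pressure_mpa  # expected events this step
        # Poisson sampling via exponential inter-arrival times
        t, n = 0.0, 0
        while lam > 0:
            t += rng.expovariate(lam)
            if t > 1.0:
                break
            n += 1
        catalogue.append((step, n))
    return catalogue

cat = simulate_induced_catalogue([0.5, 2.0, 5.0, 3.0], rate_per_mpa=1.0)
```

Re-running the simulation with many seeds yields event-count distributions per step, from which hazard curves responding to the pressure history can be assembled.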
In summary, due to the large regional or even global socioeconomic impacts that could derive from damage to CIs, the hazard assessment of low-probability-high-consequence events, to be considered in the risk analysis of these structures (stress tests), needs to go beyond a classical probabilistic hazard assessment. These studies increase in complexity and involve a fairly large team of experts. The detailed evaluation of epistemic uncertainties also becomes fundamental for the validation and coherency of the results (EU@STREST method). At the same time, these analyses need to be simpler, cheaper and less time-consuming than the stress tests prepared for the nuclear industry. In addition to the different aspects presented above (hazard interactions, near-fault and site-specific effects, spatial correlations) that should be considered in the stress test hazard models, another recommendation made by STREST is to rely on high-level, validated, open-source software. Some of the models proposed in STREST have been implemented in OpenQuake, for instance, which should give hazard experts involved in future stress tests a set of state-of-the-art ready-made models for direct use (see the detailed list of recommendations in Deliverable D3.7).

At the risk level, the main effort of STREST was to treat different CIs in a homogeneous framework derived and adapted from the well-known performance-based engineering approach developed for individual structures. From this effort it may be concluded that it seems possible to apply a unique logical framework to different CIs exposed to different natural hazards:
STREST first developed a taxonomy of CIs, and despite the clear differences between the different CI types considered (refinery, dam, pipelines, gas network, harbour, industrial district), they all share similar elements that are exposed to risk. In many cases, they include components from different systems, interacting to ensure the supply of the CIs’ products and/or services. The STREST taxonomy describes with a common language the main components that are present in a variety of systems (e.g., hydropower systems, electric power systems, waterfront components, industrial warehouses, waste-water systems, etc.). It builds upon the taxonomy developed in the SYNER-G project and classifies a large number of individual components that can be found within different systems, such that each CI can be described in a harmonized way (e.g., appurtenant structures, backup power, bridges, building contents, gas pipelines, pumping station, refinery process components, road pavements, etc.). Some elements (such as pumping stations or cranes) can be comprehensively described with a list of generic typologies, and sometimes this can be further expanded using some additional information that can be described using the classification parameters. Other elements (such as buildings and pipelines) instead have a very large number of potential typologies. In this case generic typologies are not available and a classification system based on the classification parameters is required so that ad-hoc typologies can be produced (see details in Deliverable D4.4).
Prior to assessing the performance and potential loss of non-nuclear CIs, the fragility/vulnerability characteristics of each component defined in the STREST taxonomy, and the intensity measure types needed to describe the hazards to which they were exposed, were identified. This information was collected through vulnerability factsheets, including e.g. the availability or lack of vulnerability models, a list of parameters controlling structural response, whether or not the structure deteriorates in multiple events, list of hazard metrics, etc. Standardised procedures were then developed for the consequence analysis of the different CIs considered in the project (see site application results below – also details per CI class in Deliverables D4.1-3). Structural vulnerability functions for all elements at risk (such as storage tanks and pipelines; body and foundation, spillway and hydropower system in dams; and buildings and cranes in harbours) were defined with respect to earthquakes, tsunamis and floods (Fig. 2). Fragility functions for non-structural components and contents were also developed (Babic and Dolsek, 2016; Casotto et al., 2015; Lanzano et al., 2015; Uckan et al., 2015; Karafagka et al., 2016; Miraglia et al., 2015).
STREST focused on loss propagation and cascading effects, particularly relevant for interconnected CIs. A survey was made of multiple dependencies in CIs, considering cascading failures and losses, as well as of availability assessment for supply-chain-like systems. The approach used in STREST followed the one developed in SYNER-G. The largest numbers of dependencies were found in the hydrocarbon pipeline system in Turkey, the harbour of Thessaloniki and the Gasunie national gas storage and distribution network in the Netherlands, where about 100 dependencies were recognized for each CI. For the large dams in Switzerland and the oil refinery and petrochemical plant in Milazzo, tens of dependencies were identified, while the fewest dependencies (fewer than ten) were defined in the industrial district in Italy. However, the “dependency index”, defined as the ratio between the number of assets and the total number of dependencies in each CI, showed that the most dependent assets are in the industrial district, followed by the refinery and petrochemical plant, hydropower dam, hydrocarbon pipeline system, harbour and gas network. This reflects the way in which each CI operates, the kind and number of different operations performed, as well as the number of components available to perform one task; for example, the existence of redundant components minimizes the “dependency index”. Simulations of the interconnections followed the existing SYNER-G or GenMR approaches (e.g., Fig. 3 – see also Deliverable D4.2; Matos et al., 2015; Pitilakis et al., 2016). In addition to network-type interactions, successive hazardous events may have a dynamic effect on building response. STREST provided structural methods for probabilistic performance assessment in the case of state-dependent seismic damage accumulation, developed on the basis of Markov chains or their variants (Iervolino et al., 2015; Mignan et al., in press; Trevlopoulos and Gueguen, 2016).
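The “dependency index” described above can be computed directly from the asset and dependency counts, following the ratio as defined in the text; the figures below are round illustrative numbers, not project data:

```python
def dependency_index(n_assets, n_dependencies):
    # Ratio of the number of assets to the total number of dependencies
    # recognised in the CI, as defined in the STREST survey.
    return n_assets / n_dependencies

# Illustrative comparison of two CIs (hypothetical counts):
harbour_index = dependency_index(n_assets=50, n_dependencies=100)
district_index = dependency_index(n_assets=20, n_dependencies=8)
```

Normalising by asset count in this way is what allows a CI with few absolute dependencies (such as the industrial district) to nevertheless rank as the most dependent.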
In simple terms, risk increases as additional physical processes are considered, such as event clustering and dynamic vulnerability. One of the models developed was integrated and tested on the GenMR framework that was also used to test other cascading phenomena (e.g. multi-risk at dams).
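The state-dependent damage accumulation idea can be sketched with a small Markov chain, in which a transition matrix (here purely illustrative, not a calibrated STREST model) is applied once per hazardous event:

```python
def propagate_damage(p0, transition, n_events):
    # Push the probability distribution over damage states through one
    # transition matrix application per hazardous event; damage can only
    # accumulate because the matrix below is upper-triangular.
    p = list(p0)
    for _ in range(n_events):
        p = [sum(p[i] * transition[i][j] for i in range(len(p)))
             for j in range(len(p))]
    return p

# States: intact, damaged, failed (failed is absorbing).
T = [[0.90, 0.08, 0.02],
     [0.00, 0.85, 0.15],
     [0.00, 0.00, 1.00]]
after_three = propagate_damage([1.0, 0.0, 0.0], T, n_events=3)
```

The failure probability after three events exceeds three times the single-event value, which is the clustering-plus-dynamic-vulnerability amplification referred to above.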
STREST also reviewed all aspects of societal resilience in the literature and then proposed a compositional demand/supply resilience quantification framework to evaluate the post-disaster resilience of CI systems that supply services to satisfy the demand of a community. The framework explicitly accounts for the evolution of the supply provided by the analysed CI and of the demand from the community and other CI systems during the post-disaster recovery process. Several output scenarios were produced, including Pompeii/Fukushima-like and Port-of-Kobe-like examples of recovery (or lack thereof). The approach was verified to be consistent with the proposed stress test method (for details, see Deliverable D4.5).
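One simplified reading of the demand/supply framework is to measure, at each post-disaster time step, the fraction of demand actually served, and to average it over the recovery window. The series below are made up for illustration (a Port-of-Kobe-like case in which capacity recovers but part of the demand has migrated away):

```python
def resilience(supply, demand):
    # Fraction of demand served at each time step, averaged over the
    # recovery window; both inputs evolve, as in the STREST framework.
    served = [min(s, d) / d for s, d in zip(supply, demand)]
    return sum(served) / len(served)

supply = [0.1, 0.4, 0.7, 1.0]   # CI capacity recovering after the event
demand = [1.0, 1.0, 0.9, 0.8]   # community demand partly migrating away
r = resilience(supply, demand)
```

Because both curves evolve, the same supply recovery yields different resilience scores for different demand trajectories, which is the point of the compositional formulation.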

The aims of the proposed ST@STREST methodology are to assess the performance of individual components as well as of whole CI systems with respect to extreme events, and to compare this response to acceptable values (performance objectives) that are specified at the beginning of the stress test. It is based on probabilistic and quantitative methods for the best-possible characterization of extreme scenarios and consequences. Further, it is important to note that CIs cannot be tested using only one approach: they differ in the potential consequences of failure, the types of hazards, and the available resources for conducting the stress tests. Therefore, multiple stress test levels are proposed. Each Stress Test Level (ST-L) is characterized by a different focus (component or system) and by a different level of risk analysis complexity (from design code application up to state-of-the-art probabilistic risk analyses, such as cascade modelling). The selection of the appropriate Stress Test Level depends on regulatory requirements, based on the importance of the CI, and on the available human/financial resources to perform the stress test. In order to ensure the transparency of the proposed ST@STREST process, a description of the assumptions made to identify the hazard and to model the risk (consequences) and the associated frequencies is required. The data, models and methods adopted for the risk assessment, and the associated uncertainties, are clearly documented and managed by the different experts involved in the stress test process, following the EU@STREST process for managing multiple-expert integration. This makes it possible to establish how reliable the results of the stress test are (i.e. the level of detail and sophistication of the stress test). Different experts are engaged in the implementation of the stress test process, and different roles and responsibilities are assigned to different actors. The size of such groups depends on the selected ST-Level.
The workflow of ST@STREST comprises four phases (Fig. 4): Pre-Assessment phase; Assessment phase; Decision phase; and Report phase (the details given below are further developed in Deliverable D5.1; see also Esposito et al., 2017).
The involvement of multiple experts is critical in a risk assessment endeavour when potential controversies exist and the regulatory concerns are relatively high. In order to produce robust and stable results, the integration of experts plays a fundamental role in managing subjective decisions and in quantifying the epistemic uncertainty, capturing the centre, the body, and the range of the technical interpretations that the larger technical community would have if they were to conduct the study (SSHAC). To this end, the experts’ diverse range of views and opinions, their active involvement, and their formal feedback need to be organized into a structured process ensuring transparency, accountability and independence. EU@STREST (presented above in the scope of hazard uncertainties) is a formalized multiple-expert integration process embedded in the ST@STREST workflow. This process guarantees the robustness of stress test results, considering the differences among CIs with respect to their criticality, complexity and ability to conduct hazard and risk analyses; it manages subjective decision-making and enables the quantification of the epistemic uncertainty. With respect to the different levels in the SSHAC process developed for nuclear CIs, the proposed process is located between SSHAC levels 2 and 3 in terms of expert interaction. EU@STREST also makes extensive use of classical Expert Elicitations, and is extended to single-risk and multi-risk analyses. The core actors in the multiple-expert process are the Project Manager (PM), the Technical Integrator (TI), the Evaluation Team (ET), the Pool of Experts (PoE), and the Internal Reviewers (IR). The interactions among these actors are well defined in the process. The descriptions and roles of these actors in ST@STREST are as follows:
1. Project Manager (PM): The project manager is a stakeholder who owns the problem and is responsible and accountable for the successful development of the stress test (ST). The PM is responsible for ensuring that the stress test outcomes appear rational and fair to the authorities and the public. The PM specifically defines all the questions that the ST should answer.
2. Technical Integrator (TI): The technical integrator is an analyst responsible and accountable for the scientific management of the project. The TI is responsible for capturing the views of the informed technical community in the form of trackable opinions and community distributions, to be implemented in the hazard and risk calculations. Thus, the TI explicitly manages the integration process.
3. Evaluation Team (ET): The evaluation team is a group of analysts that actually perform the hazard, vulnerability and risk assessments required by the ST, under the guidelines provided by the TI. The team is selected by consensus between the TI and PM, and it may be formed by internal CI resources and/or external experts.
4. Pool of Experts (PoE): The PoE has the goal of representing the larger technical community within the process. The PoE is formed only if required by the ST-Level: for most ST-Ls, the role of the PoE is covered by the TI. Two sub-pools are foreseen, which can partially overlap: PoE-H (a pool of hazard analysts) and PoE-V (a pool of vulnerability and risk analysts). The PoE-H should have site-specific knowledge (e.g., hazards in the area) and/or expertise in a particular methodology and/or procedure useful to the TI and the ET in developing the expert community knowledge distribution regarding hazard assessments. The PoE-V should have expertise in the specific CI and/or the typology of CI and/or a particular methodology and/or procedure useful to the TI and the ET regarding fragility and vulnerability assessments.
5. Internal Reviewers (IR): One expert or a group of experts on subject matter under review that independently peer reviews and evaluates the work done by the TI and the ET. This group provides constructive comments and recommendations during the implementation of the project. In particular, IR reviews the coherence between TI choices and PM requests, the TI selection of the PoE in terms of expertise coverage and scientific independence, the fairness of TI integration of PoE feedbacks, and the coherence between TI requests and ET implementations.

The CI authorities select the PM. The PM selects the TI and IR and, jointly with the TI, the components of the ET and of the PoE. PM and TI are, in principle, individuals. The ET and IR may involve several participants, with different background knowledge, but in specific cases may be reduced to individuals. The PoE is, by definition, a group of experts. In all cases, the size of groups depends on the purpose and the given resources of the project.
The PM interacts only with the TI and specifically defines all the questions that the project should address, taking care of the technical and societal aspects (e.g., selection of the ST level, definition of acceptable risks, etc.). The TI leads the scientific process that answers these questions, coordinating the ET in the implementation of the analysis, organizing the interaction with the PoE (through elicitations and individual interactions), and integrating PoE and IR feedback into the analysis. The ET implements the analysis, following the TI choices. The IR reviews the whole process, in order to maximize the reliability of the results and to increase their robustness.
The ST@STREST workflow represents a systematic sequence of steps (processes) to be carried out in a stress test. The participation of the different actors changes significantly along the different phases of the ST (Fig. 4). The PM and TI are the most active participants in the ST workflow. The PM participates in all the steps of the stress test until the end (reporting of the results), while the role of the TI ends with the Decision phase. The TI is constantly assisted by the ET and supported by the PM, with the level of assistance depending on the ST level. The PoE (if present) participates in the Assessment and Decision phases. The IR performs a participatory review at the end of Phases 1 and 3. The final agreement, at the end of the Decision phase, is made among the PM, TI and IR. Each phase of ST@STREST is subdivided into a number of specific steps, with a total of 9 steps:
In the Pre-Assessment phase, the data available on the CI and on the phenomena of interest (hazard context) are collected (step 1). Then, the goal (i.e. the risk measures, objectives and acceptance criteria – step 2), the time frame, the total costs of the stress test and the most appropriate Stress Test Level to apply are defined (step 3). Step 3 may be a long process and may differ substantially depending on whether the PoE is in place, according to the ST-L selected. The presence of the PoE allows for a robust set-up of the ST, based on the quantitative feedback of multiple experts.
In the Assessment phase, the stress test is performed at the Component (step 4) and System (step 5) Levels. The performance of each component of the CI is checked by the hazard-based, design-based or risk-based assessment approach. This check is performed by the TI or by one expert of the ET selected by the TI. When the stress test at the system level is performed, the TI first finalizes all the required models. In particular, if the PoE is in place (sub-level c), the TI organizes the classical Expert Elicitations in order to: i) fill potential methodological gaps, ii) quantify the potential scenario for the scenario-based risk assessment (SBRA), and iii) rank the alternative models to enable the quantification of the epistemic uncertainty. The PoE performs the elicitation remotely. Open discussions among the PoE members (moderated by the TI) are foreseen only if significant disagreements emerge in the elicitation results. If the PoE is not in place but EU assessment is required (sub-level b), the TI directly assigns scores to the selected models for ranking. Then, the ET (coordinated by the TI) actually implements all the required models and performs the assessment. If specific technical problems emerge during the implementation and application, the TI may solve them through individual interactions with members of the PoE (if foreseen at the ST-Level).
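The ranking of alternative models, whether scored by the PoE (sub-level c) or directly by the TI (sub-level b), can be turned into weights for the epistemic-uncertainty combination. A minimal sketch, with hypothetical model names and scores:

```python
def model_weights(scores):
    # Normalise elicitation scores for alternative models into weights
    # used to combine them when quantifying epistemic uncertainty.
    total = sum(scores.values())
    return {model: s / total for model, s in scores.items()}

# Hypothetical scores assigned to two alternative ground-motion models:
weights = model_weights({"GMPE-A": 3.0, "GMPE-B": 1.0})
```

In a full analysis these weights would multiply the hazard or risk curves produced under each alternative model, yielding a weighted family of results rather than a single curve.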
The Decision phase is characterized by three steps. First, there is a risk objectives check (step 6), comparing the results of the Assessment phase to the risk objectives. This task is performed by the TI, with the technical assistance of the ET. Depending on the type of risk measures and objectives defined by the PM (F-N curve, expected value, etc.) and on the level of “detail and sophistication” adopted to capture the centre and range of technical interpretations, the comparison between the results of the probabilistic risk assessments and these objectives may differ. One possibility is to express the difference between the obtained risk measures and the adopted risk objectives using grades (e.g. AA – negligible risk, A – as low as reasonably practicable (ALARP) risk, B – possibly unjustifiable risk, C – intolerable risk), as described further below. During the disaggregation/sensitivity analysis (step 7), critical events are identified. This task is performed by the ET coordinated by the TI. Critical events that most likely cause the exceedance of the considered loss value are identified through a disaggregation analysis, and risk mitigation strategies and guidelines are then formulated based on them. If specific technical problems emerge during the application, the TI may solve them through individual interactions with the PoE (if present). This step is not mandatory; it depends on the results of step 6 (risk objectives check). For example, if the outcome of step 6 is that the critical infrastructure passes the stress test, performing step 7 may be informative, but is not required. Lastly, risk mitigation strategies and guidelines are formulated (step 8) based on the identified critical events. This task is performed by the TI, with the technical assistance of the ET. The results of all ST steps are specifically documented by the TI. The IR reviews the activities performed from step 4 to step 8.
The TI, with the technical assistance of the ET, updates the final assessments for such steps accounting for the review. Final assessments and decisions are documented by the TI. Based on such documents, the PM, TI and IR reach the final agreement.
In the final Reporting Phase, the results are presented to CI authorities and regulators (step 9). This presentation is organized and performed by the PM and TI. The presentation includes the outcome of the stress test in terms of the grade, the critical events, the guidelines for risk mitigation, and the level of “detail and sophistication” of the methods adopted in the stress test.
Due to the diversity of CI types, the potential consequences of failure, the types of hazards and the available resources for conducting the stress tests, it is not optimal to require the most general form of the stress test in all situations. Therefore, three stress test variants, termed Stress Test Levels (ST-Ls), were proposed:
o Level 1 (ST-L1): single-hazard component check;
o Level 2 (ST-L2): single-hazard system-wide risk assessment;
o Level 3 (ST-L3): multi-hazard system-wide risk assessment.
Each ST-L is characterized by a different scope (component or system) and by a different complexity of the risk analysis (e.g. the consideration of multi-hazard and multi-risk events) as shown in Fig. 5. Some details are given below (more can be found in D5.1):
At the Component Level Assessment only one implementation is foreseen, i.e. ST-L1a. This level requires less knowledge and fewer resources (financial, staff, experts) for conducting the stress test than the system level assessment, but it is obligatory because the design of (most) CI components is regulated by design codes and, usually, both the data and the experts are available. Further, for some CIs, the system-level analysis (single- and multi-risk) could be overly demanding in terms of available knowledge and resources. Only the TI is required as an expert contributing to critical scientific decisions, while the whole process may require up to five experts to assist the TI in technical decisions. The TI selects the most important hazard to consider in the component-level analysis but, if more than one hazard is considered critical for the CI under study, more than one Level 1 check should be performed, one for each hazard. Three methods to perform the single-hazard component check are proposed in ST@STREST; they differ in complexity and in the data needed for the computation. The possible approaches are: hazard-based assessment, design-based assessment and risk-based assessment.
The system-level assessment requires more knowledge and resources for conducting the stress test compared to the Component Level Assessment. Thus, it is not made obligatory. However, the system level assessment represents the only way of revealing the paths that lead to potential unwanted consequences. Therefore, it is highly recommended. Different implementations are possible, according to:
o The consideration of a single hazard (ST-L2) or of multiple hazards/risks (ST-L3).
o The quantification of epistemic uncertainty may not be performed (sub-level a).
o The use of a single expert (sub-level b) or of multiple-experts (sub-level c) to quantify the epistemic uncertainty.
For all sub-levels of the system-level assessment, probabilistic methods (i.e. probabilistic risk analysis, PRA) are foreseen. PRA is a systematic and comprehensive methodology to evaluate the risks associated with every life-cycle aspect of a complex engineered entity, in which the severity of the consequence(s) and their likelihood of occurrence are both expressed quantitatively. It can also be found in the literature under the names of quantitative risk assessment (QRA) or probabilistic safety assessment (PSA). The final result of a PRA is a risk curve and the associated uncertainties (aleatory and epistemic). The risk curve generally represents the frequency of exceeding a consequence value as a function of that consequence value. PRA can be performed for internal initiating events (e.g. system or operator errors) as well as for external initiating events (e.g. natural hazards).
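A minimal PRA-style risk curve can be sketched as follows: given losses from a stochastic event set with a known total annual occurrence rate, the annual frequency of exceeding each loss threshold is estimated. This captures only the aleatory variability; epistemic uncertainty would yield a family of such curves, one per alternative model. All numbers are illustrative:

```python
def exceedance_curve(event_losses, annual_rate, thresholds):
    # Annual frequency of exceeding each loss threshold, estimated from
    # the fraction of simulated events whose loss exceeds it.
    n = len(event_losses)
    return [annual_rate * sum(1 for x in event_losses if x > t) / n
            for t in thresholds]

losses = [1, 2, 2, 5, 10, 20, 50, 100]   # loss per simulated event
curve = exceedance_curve(losses, annual_rate=0.2, thresholds=[0, 10, 100])
```

Plotting frequency against threshold gives the monotonically decreasing risk curve described above, directly comparable to scalar or F-N acceptance criteria.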
There is no standard approach for multi-risk assessment; different methods taken from the scientific literature could be used. Methods developed in STREST, such as damage-dependent vulnerability methods (Iervolino et al., 2016) and loss disaggregation, can easily be added to a multi-risk framework. The GenMR framework, for instance, has been shown to be flexible in including a multitude of perils. At the same time, some adaptation is required from the modeller to develop a multi-risk model in GenMR (i.e., all events defined in a stochastic event set, all interactions defined in a hazard correlation matrix, process memory defined from time-dependent or event-dependent variables). Whatever the method used, the final output should be a probabilistic risk result in the form of probabilities of exceeding different loss levels. The L3 loss curves shall then be compared to the loss estimates generated in stress test levels L1 and L2, and the differences identified. The main causes of risk should be investigated, by disaggregation (e.g., Iervolino et al., 2016) or by GenMR time series ranking and metadata analysis (e.g., Matos et al., 2015; Mignan et al., in press).
Scenario-based analysis may be performed as complementary to ST-L2c and ST-L3c due to methodological gaps identified for specific events/hazards that cannot be formally included into the PRA. This means that it should be considered only if, for technical reasons, one important phenomenon cannot be included into a formal probabilistic framework (e.g., PRA for ST-L2c). In this case, the choice of performing a scenario-based assessment should be justified and documented by the TI, and reviewed by the IR. Different strategies can be adopted in organizing the elicitation experiment and in preparing the documentation for the PoE. For example, the Hazard Correlation Matrix (HCM), one of the main inputs to the GenMR framework, can be used qualitatively to build more or less complex scenarios of cascading hazardous events (Mignan et al., 2016). The HCM is a square matrix with trigger events defined in rows and target events (the same list of events) in columns. In ST-L3c, each cell of the HCM is defined as a conditional probability of occurrence. In a deterministic view, cells can be filled by plus “+” signs for positive interactions (triggering), minus “-” signs for negative interactions (inhibiting) and empty “Ø” signs for no known interactions (supposedly independent events). The HCM has recently been shown to be a cognitive tool that promotes transformative learning on extreme event cascading. In other words, it allows defining more or less complex scenarios from the association of simple one-to-one interacting couples.
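The HCM reading described above can be sketched as follows, with a Natech-style earthquake, tsunami, tank-fire chain; the events and conditional probabilities are illustrative, not values from the GenMR applications:

```python
import random

def sample_cascade(hcm, events, trigger, rng):
    # hcm[i][j] is the conditional probability that event i triggers
    # event j; starting from one trigger, follow the matrix generation
    # by generation (no repeats) to build one cascade sample.
    occurred = {trigger}
    frontier = [trigger]
    while frontier:
        nxt = []
        for i in frontier:
            for j in range(len(events)):
                if j not in occurred and rng.random() < hcm[i][j]:
                    occurred.add(j)
                    nxt.append(j)
        frontier = nxt
    return sorted(events[k] for k in occurred)

events = ["earthquake", "tsunami", "tank fire"]
hcm = [[0.0, 0.6, 0.2],    # earthquake may trigger tsunami or tank fire
       [0.0, 0.0, 0.4],    # tsunami may trigger a tank fire
       [0.0, 0.0, 0.0]]    # tank fire triggers nothing here
cascade = sample_cascade(hcm, events, trigger=0, rng=random.Random(1))
```

The deterministic “+/-/Ø” reading corresponds to replacing the probabilities with 1, 0 and 0; repeated sampling instead gives the probabilistic cascade statistics used in ST-L3c.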
It should be noted that the data on the components, structures and systems of the CI need to be assembled and held in a framework that facilitates the application of the proposed stress test methodology and the execution of a stress test. The data on the CI include not only information about the hazard and the vulnerability of the components and structures, but also information about the functioning of the system: the topology of the system, the links that describe the interactions between the components and structures, and the causal relations between the events in the system. STREST reported on how to integrate Bayesian networks (BN) in CI stress test data structures, which would consist of switching from classical risk analysis to BN analysis in the Assessment Phase of the stress test workflow. An illustrative BN-based model was used to evaluate the seismic resilience of infrastructure systems based on the compositional supply/demand resilience quantification framework (see Deliverable D5.2 for more details).
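A minimal BN-style fragment can be sketched as follows: an earthquake node feeding a pumping-station node and a backup-power node, with system failure requiring both to fail. The probabilities are illustrative, and a real model would use full conditional probability tables for each taxonomy component rather than this two-branch shortcut:

```python
def system_failure_probability(p_quake, p_pump_fail, p_backup_fail):
    # Causal chain: earthquake -> {pumping station, backup power} ->
    # system failure. The two component failures are assumed
    # conditionally independent given the earthquake (an assumption).
    p_both_fail_given_quake = p_pump_fail * p_backup_fail
    return p_quake * p_both_fail_given_quake

p_fail = system_failure_probability(p_quake=0.01,
                                    p_pump_fail=0.3,
                                    p_backup_fail=0.2)
```

Encoding the topology and causal relations this way is what allows the Assessment Phase to query, e.g., the system failure probability given observed damage to one component.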
The main outcome of the stress test, obtained in step 6 (Risk Objectives Check), is described in ST@STREST using a grading system. This grading system is based on the comparison of the results of risk assessment with the risk objectives (i.e. acceptance criteria) defined at the beginning of the test in step 2 (Risk Measures and Objectives). The proposed grading system (Fig. 6) is composed of three different outcomes: Pass, Partly Pass, and Fail. The CI passes the stress test if it attains grade AA or A. The former grade corresponds to negligible risk and is expected to be the attained risk objective for new CIs, whereas the latter grade corresponds to risk being as low as reasonably practicable (ALARP) and is expected to be the attained risk objective for existing CIs. Further, the CI partly passes the stress test if it receives grade B, which corresponds to the existence of possibly unjustifiable risk. Finally, the CI fails the stress test if it is given grade C, which corresponds to an intolerable risk level. The project manager (PM) defines the boundaries between grades (i.e. the risk objectives) by following requirements of the regulators. The boundaries (i.e. the acceptance risk levels) can be expressed using scalar or continuous risk measures. Examples of the former include the annual probability of the risk measure (e.g. loss of life) and the expected value of the risk measure (e.g. expected number of fatalities per year), whereas the latter is often represented by an F-N curve, where F represents the cumulative frequency of the risk measure (N) per given period of time. In several EU countries, an F-N curve is defined as a straight line on a log-log plot. However, the parameters of these curves, as well as parameters of scalar risk objectives (i.e. regulatory boundaries in general) may differ between countries and industries. Harmonizing the risk objectives of risk measures across a range of interests on the European level remains to be done. 
This is a task for regulatory bodies and industry associations: they should reconcile societal and industry interests and develop mutually acceptable risk limits. When the acceptance criteria are defined as continuous measures, the grade is assigned based on the position of the point of the CI loss curve that lies farthest from the F-N limits.
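The F-N check can be sketched as follows, under assumed limit-line parameters and grade thresholds (the constants and the one-decade band for grade B below are hypothetical, not the ST@STREST regulatory values): each F-N limit is a straight line on a log-log plot, and the grade follows from the farthest (signed) log-distance of the loss curve from the limits.

```python
import math

def fn_limit(n, c, slope):
    """F-N limit line: straight line on a log-log plot, F = c * n**(-slope)."""
    return c * n ** (-slope)

def grade_fn_curve(loss_curve, c_upper=1e-3, c_lower=1e-5, slope=1.0):
    """Grade a CI loss curve against two hypothetical F-N limit lines.

    loss_curve: list of (N, F) pairs, N = consequence, F = annual frequency
    of exceeding N. Below the lower limit -> negligible (AA); between the
    limits -> ALARP (A); above the upper limit -> possibly unjustifiable (B);
    more than a decade above it -> intolerable (C). Thresholds are assumptions.
    """
    # farthest signed log-distance of the curve above each limit line
    d_upper = max(math.log10(f) - math.log10(fn_limit(n, c_upper, slope))
                  for n, f in loss_curve)
    d_lower = max(math.log10(f) - math.log10(fn_limit(n, c_lower, slope))
                  for n, f in loss_curve)
    if d_upper > 1.0:
        return "C"
    if d_upper > 0.0:
        return "B"
    if d_lower > 0.0:
        return "A"
    return "AA"

curve = [(1, 2e-6), (10, 3e-7), (100, 4e-8)]
print(grade_fn_curve(curve))  # → AA (curve lies below both limits)
```

The same routine, fed with the regulator's actual limit-line parameters, would reproduce the Pass / Partly Pass / Fail decision of step 6.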
In general, the CI performance can be understood as time-variant. It may change due to, for example, ageing through use, long-term degradation processes such as corrosion, effects of previous hazard events, man-made events, and changes in exposure (e.g. population). Such a change in performance may increase the probability of failure or loss of functionality, or exacerbate the consequences of failure during the CI system’s lifetime. In the proposed grading system, it is foreseen that the performance of the CI and/or the performance objectives can change over time. Consequently, the outcome of the stress test is also time-variant. For this reason, the stress test is periodic, which is also accounted for by the grading system. If the CI passes a stress test (grade AA or A), the risk objectives remain unchanged until the next stress test. The longest time between successive stress tests should be defined by the regulator considering the cumulative risk. However, most existing CIs will probably obtain grade B or even C, which means that the risk is possibly unjustifiable or intolerable, respectively. In these cases, the grading system stimulates the stakeholders to upgrade the existing CI or to start planning a new CI for the following stress test cycle. It is proposed that either stricter risk objectives are enforced or the time between successive stress tests is reduced, so that stakeholders can adequately mitigate the risks posed by the CI in as few repetitions of the stress test as possible; the CI will then eventually obtain grade A, or the regulator will require that the operation of the CI be terminated (see Deliverable D5.1 for far more details on the use of the proposed grading system, including the proposal of a penalty system).
STREST also reported on the incorporation of ST@STREST into the life cycle management (LCM) of non-nuclear CIs. Life cycle cost (LCC) and optimization tools are usually adopted to predict the performance of an infrastructure system subjected to long-term degradation processes during its lifetime and to plan maintenance interventions. In order to optimize the LCC in a CI management strategy, the outputs of a stress test are included in the LCM framework. The outcomes of a stress test have an impact on: (1) Expected damages: unplanned LCCs related to the structural performance and associated repair costs due to extreme natural events. A stress test makes it possible to evaluate the performance of the CI against extreme natural events (according to the ST-Level adopted). In this way, the expected costs caused by extreme natural events can be quantified and the associated unplanned owner and user costs evaluated for inclusion in the LCC analysis and optimization; (2) Mitigation history: another outcome of a stress test is the evaluation of risk reduction strategies based on a disaggregation analysis (Decision Phase, Phase 3). A disaggregation analysis aims at obtaining the probability that a specific value of a variable involved in the risk assessment is causative for the exceedance of a loss value of interest, hence providing information for new mitigation strategies (more details are given in Deliverable D5.3).
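The disaggregation idea reduces to Bayes' rule: the probability that a given hazard intensity is causative for exceeding the loss threshold is proportional to the occurrence probability of that intensity times the conditional exceedance probability. A small sketch with purely hypothetical numbers:

```python
# Hypothetical disaggregation of risk by hazard intensity bin, via Bayes' rule:
# P(IM = im | Loss > l)  ∝  P(Loss > l | IM = im) * P(IM = im)

# assumed annual occurrence probabilities of intensity bins (e.g. PGA ranges)
p_im = {"low": 0.90, "moderate": 0.09, "high": 0.01}
# assumed conditional probabilities of exceeding the loss threshold
p_exceed_given_im = {"low": 1e-5, "moderate": 1e-3, "high": 5e-2}

joint = {im: p_exceed_given_im[im] * p_im[im] for im in p_im}
total = sum(joint.values())
disagg = {im: joint[im] / total for im in joint}

for im, p in disagg.items():
    print(f"{im:8s} {p:.3f}")
```

With these assumed numbers the rare high-intensity bin dominates the disaggregation despite its low occurrence probability, which is exactly the kind of information that points mitigation towards specific scenarios.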
Finally, ST@STREST could be used to enhance societal resilience. However, further development is needed to: (1) identify resilience metrics and standardize methodologies to model the resilience of CIs, and (2) understand how stakeholders’ needs depend on CIs, defining resilience-based acceptance criteria. To approach point 1, a review of resilience models and metrics was made. This topic, however, remains in the research domain for the time being, as there is still substantial diversity among the definitions of resilience and the models used to evaluate it. To approach point 2, the definition of resilience metrics requires a deep understanding of the CI’s functionality and of the parameters that are important to the CI operators and owners, and to the society the CI serves. Standardized approaches aimed at modelling and quantifying the resilience of non-nuclear CIs should be identified and developed in the future (read more in Deliverable D5.4).

The models and tools developed in STREST have been applied to our six pilot sites: the ENI/Kuwait oil refinery and petrochemical plant in Milazzo, Italy, when impacted by earthquakes and tsunamis; the large dams in the Valais region of Switzerland under multi-hazard effects, considering earthquakes, floods, internal erosion, bottom outlet malfunctions, and hydropower system malfunction; the major hydrocarbon pipelines in Turkey, focusing on seismic threats at pipe-fault crossing locations; the Gasunie national gas storage and distribution network in the Netherlands, exposed to earthquake and liquefaction effects; the port infrastructures of Thessaloniki in Greece, subjected to earthquake, tsunami and liquefaction hazards; and the industrial district in the region of Tuscany, Italy, exposed to seismic hazard. These case studies are representative of the CI categories identified in STREST: A) individual, single-site infrastructures with high risk and potential for high local impact and regional or global consequences; B) distributed and/or geographically-extended infrastructures with potentially high economic and environmental impact; C) distributed, multiple-site infrastructures with low individual impact but large collective impact or dependencies. The successful application of the proposed ST@STREST methodology to the six different CIs demonstrates its viability and highlights the areas where additional developments are needed. Fig. 6 provides a brief summary of the results of the stress tests performed at the six pilot sites. It combines all the obtained stress test grades, allowing comparison not only of the risk posed by these civil infrastructures, but also of the stress test levels used in each case. Note that, while a significant effort was invested to develop the best possible stress test for each considered civil infrastructure, the obtained results do not reflect the actual safety or risk posed by these civil infrastructures.
The data considered in this public project was limited for safety or business reasons. Details about each site are given below (Fig. 7; see also Deliverable D6.1 for further information).

A1) Application of stress test concepts to ENI/Kuwait oil refinery and petrochemical plant, Milazzo, Italy
PHASE 1: The refinery of Milazzo (Raffineria di Milazzo) is located in the northern part of the island of Sicily, in Italy. It is an industrial complex which transforms crude oil into a series of oil products currently available on the market (LPG, gasoline, jet fuel, diesel and fuel oil) and comprises a number of auxiliary services. Total production currently stands at circa 9.3 million tons. The refinery has many storage tanks containing a large variety of hydrocarbons, such as LPG, gasoline, gasoil, crude oil and atmospheric and vacuum residues. The capacities of the tanks vary from 100 m3 (fuel oil, gasoil, gasoline, kerosene) to 160 000 m3 (crude oil). All tanks are located in catch basins (bunds) with concrete surfaces. Only the LPG is stored in pressurised spheres; all other substances are stored in single-containment tanks. A filling degree of 80% is assumed. The societal risk is determined in a QRA. To do so, the (actual) presence of persons in the surroundings needs to be taken into account, since the number of persons present influences the societal risk. Only persons within the impact area of the site need to be taken into account; persons on-site are not considered for the external risk.
PHASE 2: Natural events may interact dramatically with industrial equipment, with different intensities and hazards. When NaTech risks are to be considered, the natural hazards must be evaluated for the site under analysis, following methods such as those developed in the STREST project. The probabilistic seismic hazard analysis (PSHA) concerns the port of Milazzo, discretized into a grid of forty-eight points (potential seismic event epicentres) with a grid spacing of approximately 25 km. For each point on this grid, the Italian National Institute of Geophysics and Volcanology (INGV) provided the joint probability mass of strike, dip and rake, for a total of around three thousand four hundred “rupture scenarios” with associated probabilities. This information forms the basis of the elaboration, as it allows the probabilistic assignment of finite-fault geometries to all scenarios that enter the PSHA calculations (with the hazard curve expressed in terms of peak ground acceleration, PGA). Probabilistic tsunami hazard analysis (PTHA) is a methodology to assess the exceedance probability of different thresholds of hazard intensity, at a specific site or region in a given time period, due to any given source. The focus was on tsunamis of seismic origin, which are the dominant component of PTHA in most areas of the world, both in terms of occurrence and in terms of effects. Following the definition proposed in Lorito et al. (2015) (ASTARTE project), we dealt with seismic PTHA (SPTHA), that is, tsunamis generated by co-seismic sea floor displacements due to earthquakes. The impact of natural hazards on the accident or release scenarios and frequencies followed the method described in D4.1, where equipment vulnerability with respect to the intensity of the natural events has been assessed by taking into account the construction characteristics of the equipment and, more importantly, the new limit states based on the release of content. Flammable substances can be ignited upon release.
Direct ignition will lead to a pool fire (liquids) or jet fire (gases). If a liquid is not ignited immediately, it will start to evaporate and a flammable atmosphere could be formed, which will disperse with the wind. If a gas is not ignited immediately, the gas will also disperse. Ignition of that flammable cloud will result in a flash fire, possibly with an explosion (causing overpressure effects) if the cloud is obstructed. It has been assumed that the consequences of a delayed ignition are minor compared to those of a pool fire for the flammable liquids: only pool fires have been considered. A special phenomenon occurs upon the instantaneous release of a liquefied gas: an instantaneous release is followed by instantaneous evaporation and a physical explosion, called a Boiling Liquid Expanding Vapour Explosion (BLEVE). Often the gas cloud is ignited, resulting in a fireball. The probability of ignition depends on the flammability of the substance and the quantity released. Not all substances present on site are considered individually; representative substances have been determined instead. Atmospheric residue, heavy vacuum gas oil and vacuum residue have not been considered in the risk analysis. For all flammable liquids considered, the ignition probability of K1-liquids is assumed: this is a conservative approach.
PHASE 3 (Fig. 7a): The calculated risks for each individual event (industrial, earthquake, or tsunami) were first compared. The industrial risks result in large contours relative to the CI spatial distribution, especially for the lower risk levels (10^-7, 10^-8/yr). These contours are dominated by the risks related to the LPG storage vessels. When only earthquake-induced risks are considered, the 10^-7 and 10^-8/yr risk contours are smaller, due to the lower release frequency for the LPG vessels. The higher risk levels (> 10^-6/yr) are dominated by the atmospheric vessels. These have a higher release frequency than in the industrial case, resulting in larger 10^-5 and 10^-4/yr contours. When comparing the industrial risks with the earthquake-induced risks, the risks on the east side of the site increase by a factor of approximately 1000. For the earthquake-induced risks, the 10^-4/yr contour is located at almost the same location as the 10^-7/yr contour of the industrial risks. This is because the failure frequency of the atmospheric tanks due to earthquakes is a factor of 1000 higher than that due to industrial activities. The risks associated with tsunami-induced releases are the smallest of the three release causes: only atmospheric vessels close to the shore would result in releases, and vessels located further away do not pose risks. The most dominant risks are the industrial and earthquake-induced risks. For this pilot case, low risks (< 10^-6/yr) are dominated by the industrial risks, as these are caused by failure of the LPG vessels; earthquakes and tsunamis do not damage these vessels. Along the transect line, a tsunami results in approximately 10 times higher risk and earthquakes in approximately 1000 times higher risk. Naturally induced hazards cause an increase in the total risks. As a tsunami only damages a limited number of the vessels along the shoreline, the risk increase is limited.
Similar results, for earthquakes only and with a simplified analysis of the earthquake hazard, have been found in previous works. The societal risk is mainly caused by the LPG tanks: when they are not accounted for, the maximum number of fatalities is reduced from 1650 to 220. Up to approximately 200 fatalities, the naturally induced hazards have a higher frequency of occurrence, due to the higher failure frequency of the atmospheric vessels. Larger numbers of fatalities are only caused by industrial risks or by earthquakes; this is due to the failure of the LPG vessels.
PHASE 4: Naturally induced hazards can play an important role in the total risk associated with the presence of installations holding dangerous goods. In this stress test, the effect of an increased frequency (caused by earthquakes or tsunamis) of a number of release scenarios on locational and societal risk was assessed. The impact of naturally induced hazards depends on many (location-specific) factors. For the specific site analysed in this work, a tsunami only damages a limited number of the atmospheric storage vessels along the shoreline; hence the increase in the total risk is limited. Nonetheless, the overloading of emergency response should be considered, at least for the tanks along the coastline. Of more importance is the effect of an earthquake, which significantly increases the failure frequency of atmospheric storage tanks. Neither an earthquake nor a tsunami significantly increases the failure frequency of, and hence the risk imposed by, pressurised vessels (such as LPG spheres). For the considered site, the risk is largely dominated by the LPG tanks failing due to industrial-related causes, whereas the impact of the natural hazards is limited. All in all, though, naturally induced hazards should be considered when determining the overall risk and the risks associated with natural disasters. This pilot case was performed to show the impact of naturally induced hazards on the outcome of a quantitative risk assessment (QRA) of an industrial site holding dangerous substances. The aim was not to perform a detailed QRA of the pilot site (such an exercise would have required much more detailed information) but merely to show how the (more common) scenarios are affected by an increased release frequency caused by earthquakes and tsunamis. Other scenarios that may be relevant in cases of earthquakes or tsunamis have not been evaluated; for instance, failure of multiple tanks has not been taken into account.
This may result in released volumes exceeding the capacity of the catch basins, and hence lead to larger pool sizes, especially in case of failure of the catch basins themselves. Domino effects have not been considered either: for instance, if a pool of flammable material extends to an area with LPG spheres, BLEVEs may occur. Nor has the effect of debris (or large objects such as ships) carried inland by a tsunami been taken into account. Such phenomena will result in larger effect areas, and may hence increase the number of casualties. However, it should be realised that the natural hazards considered here (earthquakes and tsunamis) have a large areal impact. This means that many of the fatalities calculated in the QRA would also have occurred had such an installation not been present, because of the collapse of houses and other buildings.

A2) Application of stress test concepts to large dams in the Valais region of Switzerland
PHASE 1: The pre-assessment is arguably the most important part of the stress test of large dams. Highly relevant on its own, the response of each element of the system to hazard-induced actions is an essential input to the Monte Carlo probabilistic framework. This phase of the stress test was not particularly emphasized in STREST, as the recommendations, methodologies and models required to complete it are well established in the dam engineering community. Also, checking whether each component of the system complies with regulations is already recommended practice. That said, it is important to highlight that applying a probabilistic framework should not equate to a relaxation of the standards proposed for existing safety assessments based on deterministic principles. Although this step was bypassed in the test case presented in STREST (a conceptual dam), it is recommended that the full range of element responses be computed through detailed models prior to the application of a probabilistic framework. Where the computational demands of doing so are prohibitive, it is recommended that a regression model be fitted from a number of computed action/response pairs (e.g. a catalogue of the effects of peak ground acceleration on sustained damages or degrees of functionality). Regression models of this type, together with their regression errors, allow n-dimensional fragility functions to be prepared for later use in fast-to-compute probabilistic Monte Carlo simulations.
PHASE 2: Classical risk analyses for dams generally correspond to very detailed scenario-based analyses considering only a few hazard interactions. In STREST, ST-L3c was tested with promising results and is therefore the recommended approach. Falling into the ST-L3d level, scenarios tested according to current practice should continue to be carried out for rare events of interest that are not easily reproduced using a purely probabilistic analysis. It is recommended that the risk study of a large dam be split into two main parts: the dam-reservoir system, whose analysis should yield failure conditions, modes, and expected frequency; and the downstream area, which should comprise the development of eventual breaches, the propagation of dam-break waves, and the evaluation of losses. Focusing on the dam-reservoir system, it is suggested that hazards are not studied individually, but rather that an inclusive modelling approach is implemented. As an example, STREST proposed the (simplified) scheme of hazards, elements, system states and interactions illustrated in Fig. 3. Such a scheme is dynamic in nature, and thus requires lengthy full simulations of the system to yield results. Some advantages of its use are a quantitative appreciation of occurrence frequencies, the representation of inter-actions, “intra-actions” and coincidences, and the possibility of accounting for aleatory and epistemic uncertainties (i.e., the GenMR framework; see also Mignan et al., in press). An example of the results obtainable through the proposed dynamic simulations is as follows: a failure by overtopping is prompted by the occurrence of two related earthquakes (T=12 600 years followed by T=3 200 years). While the dam withstood both, its outlet structures did not, and recovery efforts were unsuccessful in rehabilitating them before the peak of the inflow season.
Without a full simulation of the dam-reservoir system such a chain of events could certainly be imagined, but its probability would be hard to guess and, therefore, so would its relevance. Hazard coincidences appeared to affect the risk of the system (Matos et al., 2015). More relevantly still, including the effects of epistemic uncertainty related to hazards and of uncertainty in the components’ responses led to an approximately 4-fold increase in risk in comparison with that of a system evaluated based on expected values. Moving downstream (Fig. 7b), assessing the losses associated with a failure is an intricate process which, like the former, is marked by uncertainty. One can start with the hydrograph that characterizes the dam-break wave near the dam; in the case of embankment dams, this is closely related to the development of the breach. A range of different (but possible) floods resulting from a failure of the conceptual dam was studied to investigate the associated uncertainties. It is advised that the full range of possible dam-break waves be studied. This is, however, not straightforward. Current numerical models useful for studying the propagation of dam-break waves – particularly 2D models, which are generally more accurate than 1D ones – are computationally demanding, and running them for a great number of possible outflow hydrographs may be quite impractical. As proposed in STREST, which used the free BASEMENT software for hydraulic modelling, the problem may be overcome by resorting to non-linear regression models calibrated on a limited but well-chosen catalogue of simulations. If well fitted, regression models can be used to estimate inundation parameters (e.g. maximum water depth, maximum water velocity, or wave arrival time) at any location of the computational domain for any possible dam breach event. A discussion of the challenges associated with the hydraulic modelling of sizeable areas can be found in Darcourt et al. (2016).
The process summarized so far allows the calculation of inundation parameters and the evaluation of their probability of occurrence, including epistemic and aleatory uncertainties. Tangible as well as intangible losses can be computed as a function of these. In essence, the methodologies already developed to do so within deterministic frameworks can be used. For this reason, the work on large dams under STREST did not go into great detail concerning the estimation of losses, stopping short of estimating costs or loss of life. It did, however, estimate damages to buildings based on fragility curves which, given the fortunate scarcity of observations for dam-break events, were prepared from surveyed damages from tsunamis.
PHASE 3: The implementation of the previously summarized approach allows for the verification of a number of possible decision criteria. Compared to deterministic practices, it has the advantage of explicitly accounting for various sources of uncertainty and, therefore, rendering possible an effective disaggregation of risk (via the GenMR framework). Also, it can go well beyond an analysis that focuses on expected values and lay out the full probability distribution of system states and consequences. Based on 20 million full year-long simulations of the dam-reservoir system and downstream areas, an F-N curve characterizing the volume of collapsed or washed away buildings due to the failure of the conceptual dam under study exemplifies what kind of results can and should be derived from the proposed probabilistic framework. Naturally, such a curve can be extended to loss of life or any other criterion that is considered relevant. Detailed losses associated with deterministic scenarios are useful for decision-making and will continue to be so for the foreseeable future. As such, it is not proposed that they are replaced by probabilistic approaches. Notwithstanding, in the face of the additional information that the latter can bring to traditional deterministic safety assessments and their enormous potential, it is recommended that probabilistic approaches are employed and developed further.
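The derivation of an F-N curve from year-long Monte Carlo simulations can be sketched with a much smaller (and purely hypothetical) model than the 20-million-run STREST study; the failure probability and consequence model below are assumptions, not project results.

```python
import random

random.seed(1)

# Hypothetical Monte Carlo: each iteration simulates one full year of the
# dam-reservoir system. Most years end with no loss; rare failures produce a
# consequence N (e.g. volume of collapsed or washed-away buildings).
def simulate_year():
    if random.random() < 1e-3:                 # assumed annual failure probability
        return random.lognormvariate(8, 1.0)   # assumed consequence model
    return 0.0

years = 200_000
failures = [l for l in (simulate_year() for _ in range(years)) if l > 0.0]

def f_of_n(n):
    """Empirical F-N point: annual frequency of exceeding consequence n."""
    return sum(1 for l in failures if l > n) / years

# the (n, f_of_n(n)) pairs trace the empirical F-N curve of the simulated system
for n in (1e2, 1e3, 1e4):
    print(f"N > {n:g}: F = {f_of_n(n):.2e}/yr")
```

Plotting these pairs on a log-log chart gives exactly the kind of curve that can then be compared against F-N acceptance limits, or extended to loss of life or any other relevant criterion.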
PHASE 4: Reporting based on analyses complemented with probabilistic frameworks such as the one tested within STREST has the advantage of attributing probability to losses and thus leading to formal risk estimations. An example of formally framed risk that can be included in a report is the expected damages. Going further, one can present results that draw from the full probability distribution of damages, such as the probability of states of interest (buildings collapsed or washed away), not necessarily those with the greatest probability of occurrence. Evidently, reporting is not – nor should it be – limited to such damage states. Examples would be loss-of-life probabilities or information on flood timing. Perhaps most relevantly, it is recommended that reports contain objective evaluations of the uncertainty of the presented results and the detailed sensitivity analyses that support them.

B1) Application of stress test concepts to major hydrocarbon pipelines, Turkey
PHASE 1: The data collection includes the major seismic hazard levels that are likely to affect the pipeline, as well as the mechanical properties of the BTC pipeline and of the critical pipeline components likely to be affected by the target hazard levels. The investigated BTC transmission pipeline is 1758 km long and transports about 1 million barrels of oil per day, roughly 1% of the world's daily petroleum output. The BTC pipeline's yearly natural gas excess capacity (as of today) is 30 billion cubic meters. The pipeline diameter is 42 inches throughout most of Azerbaijan and Turkey; it increases to 46 inches in Georgia and reduces to 34 inches for the last downhill section to the Ceyhan Marine Terminal in Turkey. The BTC pipeline facilities include 8 pump stations (2 in Azerbaijan, 2 in Georgia, 4 in Turkey), 2 intermediate pigging stations, 1 pressure reduction station and 101 small block valves. The transmission pipeline features high-quality continuous buried pipes. Unlike water pipelines, which are generally constructed as segmented pipes, continuous steel pipelines are more likely to suffer damage due to permanent fault displacements (PFDs) than due to ground strains caused by seismic wave propagation. Therefore, the fault displacement (offset) induced by earthquakes is defined as the target hazard. Five main pipe-fault crossing locations are identified along the BTC route for PFD hazard. The hazard information (e.g. fault name, fault length, style-of-faulting, fault geometry, etc.) as well as the normalized locations of pipe-fault crossings (l/L), pipe-fault crossing angles, etc. are collected at these pipe-fault intersections. The pipe cross sections at these five locations have the same diameter (42 inches, or 1.0668 m) and the same thickness (20.62 mm).
The mechanical properties of the pipe and the soil conditions surrounding the pipe at the fault-pipe intersections of interest are also identified as part of the data collection phase, because they are essential for the calculation of pipeline strains. Pipeline rupture or loss of pressure integrity (pipeline failure) along the BTC pipeline due to fault offsets is identified as the risk measure. The risk objectives of the BTC pipeline are determined from the Guidelines for the Seismic Evaluation and Upgrade of Water Transmission Facilities. In reference to the risk grading system proposed in STREST, pipeline failure at different probabilities under the 2475-year PFD is defined as the risk objective. In the test set-up phase, Level 1a (ST-L1a) is selected for the component-based risk assessment and Level 2a (ST-L2a) at the system level. The penalties in the stress tests due to pipeline modelling deficiencies are disregarded at all levels. Modelling uncertainties (fault mapping accuracy and fault complexity) affecting the seismic hazard are accounted for during the computation of the probabilistic PFD.
PHASE 2: In the hazard-based assessment, the 2475-year fault displacements at the five pipe-fault crossings are computed from the Monte Carlo-based probabilistic PFD hazard and compared with the prescribed American Lifelines Alliance (ALA, 2001-2005) hazard requirements. The 2475-year PFD hazard level is the hazard level recommended for continuous pipelines by the seismic design provisions in ALA. The comparisons indicate that, of the five pipe-fault crossings, the computed 2475-year PFD hazard at crossings #2, #3 and #4 is larger than the ALA requirements, whereas at crossings #1 and #5 it is in compliance with the ALA requirements. The tensile pipe strain under the computed 2475-year PFD (the design-level PFD for the pipeline) is compared with the allowable tensile pipeline strain provided in the ALA seismic guidelines, designated as 3% in the ALA seismic pipeline design provisions. The comparisons are done for all five pipe-fault crossings, and the tensile strains at these crossings comply with the code requirements. In the risk-based assessment at system level, the seismic risk of pipeline failure is evaluated by comparing the annual exceedance rate of pipeline failure with the allowable pipeline failure rates suggested in the literature. The probabilistic pipeline seismic risk against fault rupture is obtained by integrating the probabilistic fault displacement hazard, the mechanical response of the pipe due to fault displacement and an empirical pipe fragility function (Cheng and Akkar, 2016). The concept is similar to conventional probabilistic seismic risk assessment. Since both tensile and compressive strains developed along the pipe during an earthquake can cause pipe failure, the seismic risk of pipe failure should consider the aggregated effects of these two strain components.
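The integration of hazard and fragility into an annual failure rate can be sketched numerically. Everything below is hypothetical (the hazard-curve constants and the lognormal fragility parameters are assumptions for illustration, not the Cheng and Akkar values): the failure rate is the sum over displacement bins of the occurrence rate of each bin times the conditional failure probability.

```python
import math

# Hypothetical annual exceedance (hazard) curve for permanent fault
# displacement (PFD, in m) at one pipe-fault crossing: lambda(d) = k * d**(-alpha)
def pfd_exceedance(d, k=1e-4, alpha=1.5):
    return k * d ** (-alpha)

# Hypothetical lognormal fragility: P(pipe failure | PFD = d)
def pipe_fragility(d, median=2.0, beta=0.5):
    z = (math.log(d) - math.log(median)) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # Phi(z)

# Convolve hazard and fragility: lambda_fail = sum over displacement bins of
# (occurrence rate of the bin) * P(failure | bin midpoint displacement);
# the tail above ~10 m is neglected in this sketch.
ds = [0.05 * i for i in range(1, 200)]  # 0.05 m .. 9.95 m
lam_fail = 0.0
for lo, hi in zip(ds[:-1], ds[1:]):
    rate = pfd_exceedance(lo) - pfd_exceedance(hi)   # rate of PFD in [lo, hi)
    lam_fail += rate * pipe_fragility(0.5 * (lo + hi))

print(f"annual failure rate ~ {lam_fail:.2e}")
print("acceptable" if lam_fail < 4.04e-5 else "exceeds the 1/24750 limit")
```

The final comparison against 4.04E-5 mirrors the acceptance check applied at each of the five crossings.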
The annual failure probability for pipelines at fault crossings is computed for different pipe-fault crossing angles, with their uncertainty modelled to reflect the inherent complications of the fault rupture process. The inaccuracy in the fault-pipe crossing angle is modelled by a truncated normal distribution with alternative standard deviations of 2.5° and 5°. The literature is limited in addressing the expected performance of steel pipelines transmitting gas or oil under extreme events. It is suggested in the literature that annual failure rates less than 1/24750 (4.04E-5) are acceptable for pipelines under seismic loading. The same limit is also used in current US building standards as the target annual probability against building collapse under earthquake-induced loads. We used the same annual failure rate for the probabilistic risk-based pipeline assessment at the five designated pipe-fault crossings. The comparisons of the allowable annual pipe failure rate (4.04E-5) with those calculated at each pipe-fault crossing indicate that crossings #3 and #4 are not safe, as their computed failure rates are larger than the allowable annual failure rate. The hazard at these fault crossings is higher than at the other three pipe-fault crossings, which might explain the higher failure rates. The pipe-fault crossing angles are also low at these crossings, which makes the pipe segments more vulnerable to the accumulation of tensile strains. The pipeline annual failure rates at the five pipe-fault crossings are used to compute the aggregated failure risk along the whole BTC pipeline to complete the probabilistic risk assessment at system level (ST-L2). To this end, two marginal probabilities were computed: (a) perfect correlation between pipe failures at the five pipe-fault crossings and (b) independent pipe failures at the five pipe-fault crossings.
The aggregated risk is defined as the annual exceedance probability of pipeline failure. Using the theory described in D6.1, the aggregated bounding failure probabilities of the BTC pipeline (i.e., perfect correlation and independent failures) exposed to the 2475-year PFD hazard are very high: the computed failure probabilities range between 40% and 50%. According to the STREST grading system, the pipeline risk falls into Grade B: possibly unjustifiable risk. These calculations indicate a need for retrofitting the pipes at the pipe-fault crossings.
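The two aggregation assumptions (perfect correlation versus independence) for combining the per-crossing failure probabilities can be written out directly. The five probabilities below are placeholders for illustration, not the values computed in the project:

```python
import math

# Illustrative failure probabilities at the five pipe-fault crossings
p = [0.02, 0.05, 0.30, 0.25, 0.01]

# (a) perfectly correlated failures: the series-system probability
#     collapses to the largest single-crossing probability
p_perfect = max(p)

# (b) independent failures: one minus the joint survival probability
p_indep = 1.0 - math.prod(1.0 - pi for pi in p)

# Any positive correlation structure yields a value between these bounds
print(f"perfect correlation: {p_perfect:.4f}, independence: {p_indep:.4f}")
```

The true system failure probability lies between the two bounds, which is why the report quotes a range (40% to 50%) rather than a single value.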
PHASE 3: The probabilistic pipe failure risk assessment yields higher probabilities of pipe failure at pipe-fault crossings #3 and #4. These pipeline segments are therefore identified as critical components for the aggregated BTC pipeline failure risk and were selected for retrofitting. We also decided to reconsider the crossing-angle uncertainty at pipe-fault intersection #5: a pipeline under compression suffers larger damage when the crossing angle exceeds 90°, which might be the case at intersection #5 once the crossing-angle uncertainty is considered. The effective retrofit of the pipeline segments at these three crossings is to change the pipe-fault intersection angle, so in the risk mitigation strategy we changed the intersection angles at all three crossings to around 80°. The reduction in pipeline failure risk is evident when compared to the current pipe failure probability of around 40% to 50%: considering the bounding correlation assumptions for the pipe failure risk at each fault crossing, the probability of BTC pipeline failure due to fault offset is at most ~2% under the 2475-year PFD. According to the proposed grading system, the BTC pipeline risk after the mitigation strategy would become Grade AA (negligible risk).
PHASE 4: The seismic risk of the BTC pipeline at the fault crossings is graded as B in the system-level assessment, according to the grading system for the global outcome of the stress test. Grade B is defined as “possibly unjustifiable risk”. All five fault crossings contribute to this risk, but crossings #3 and #4, located on the North Anatolian fault zone and the Deliler fault zone, are identified as the critical components. As the pipe-fault crossing angle is an important parameter in mitigating the pipeline risk imposed by permanent fault offsets, the proposed plan is to change the intersection angle at the most critical crossings. In essence, the pipe segments at fault crossings #3, #4 and #5 are to be retrofitted by changing their intersection angles from 30°, 40° and 90° to 80°. Note that the segment at intersection #5, with a design crossing angle of 90°, is changed to 80° to account for uncertainty due to fault-mapping inaccuracy: a small deviation in angle arising from the modelling deficiency of mapped faults could result in a crossing angle larger than 90°, which would yield critical compressive strains at the pipe cross-section. Changing the angle at fault crossings #3, #4 and #5 yields Grade AA, corresponding to “negligible” risk for the BTC pipeline.

B2) Application of stress test concepts to the Gasunie national gas storage and distribution network, the Netherlands
PHASE 1: For this case study, a sub-network was selected in the induced-earthquake-prone area directly above the main Groningen gas field. The selected sub-network covers an area of approximately 3360 km² and contains 4 MPa (40 bar) to 8 MPa (80 bar) main gas transmission pipes with a total length on the order of 1000 km. Pipe diameters within this sub-network range from 114 mm (4 in) to 1219 mm (48 in). Apart from 426 valve stations, it contains compressor stations, measure and regulation stations, reducing stations and a mixing station (11 in total). As for the end nodes of the sub-network, 15 feeding stations and 91 receiving stations are accounted for, the latter sub-divided into approximately 40 industrial stations, 50 municipal stations and 1 export station. GIS databases were obtained from Gasunie-GTS with the properties of the pipeline system: coordinates, diameter, wall thickness, material, yield stress, maximum operational pressure and depth of soil cover. As the database originally contained 6378 unique combinations of diameter, wall thickness, yield stress and pressure, these properties were first classified into groups to reduce the number of unique sets, resulting in 136 different pipe configurations. Apart from source and demand stations as end nodes, M&R (measure and regulation) stations are present between nodes. Although labelled only as M&R, they in fact represent compressor stations, measure and regulation stations, reducing stations or a mixing station. The target hazard is the seismic hazard resulting from gas extraction from the Groningen gas field. Numerous studies have been performed over the past several years, and are still ongoing, leading to increasingly refined models dedicated to the Groningen area. In the current stress test one of the earlier versions is adopted.
The seismic models adopted were taken from the literature for seismic zonation, GMPE, magnitude distribution, maximum magnitude (in the stress test analysis a value of 6 is applied, although ongoing studies indicate a value of 5) and annual event rate (events with M≥1.5 set to 30 occurrences per year, although current studies tend towards a value of 23). Serviceability Ratio (SR) and Connectivity Loss (CL) were used as risk measures. The Serviceability Ratio is directly related to the number of demand nodes in the utility network that remain accessible from at least one source node following an earthquake. Connectivity Loss measures the average reduction in the ability of demand nodes to receive flow from sources, by counting the number of sources connected to the i-th demand node in the original (undamaged) network and then in the possibly damaged network after an earthquake event. In the Netherlands a standard for quantified risk assessment (QRA) exists, known as “the coloured books”. These were issued by the national Committee for the Prevention of Disasters (CPR) and describe the methods to be used for modelling and quantifying the risks associated with dangerous materials. Installations, types and frequencies of loss of containment (LOC), calculation methods, risk acceptance criteria and even the computer program to use are prescribed. Currently the computer program CAROLA, which is based on PIPESAFE, is used for these calculations in the Netherlands. In the current application of the stress test methodology to the Gasunie-GTS case, no full QRA was performed for the 1000 km sub-network; however, the annual failure rates originally prescribed in the purple book, and the adjusted values nowadays used for the Gasunie network in CAROLA, were selected to define grade boundaries.
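The two risk measures defined above can be written as small functions. The node weights and source counts in the example are invented for illustration; the actual OOFIMS implementation operates on the full network graph:

```python
def serviceability_ratio(weights, accessible):
    """SR: weighted fraction of demand nodes still reachable from at least
    one source node after the earthquake."""
    return sum(w for w, ok in zip(weights, accessible) if ok) / sum(weights)

def connectivity_loss(sources_before, sources_after):
    """CL: 1 - average over demand nodes of the ratio (number of connected
    sources after the event / number before)."""
    ratios = [a / b for b, a in zip(sources_before, sources_after)]
    return 1.0 - sum(ratios) / len(ratios)

# Four demand nodes: weights, post-event accessibility and source counts
sr = serviceability_ratio([1.0, 1.0, 2.0, 1.0], [True, True, False, True])  # -> 0.6
cl = connectivity_loss([2, 2, 3, 1], [2, 1, 0, 1])                          # -> 0.375
```

SR only asks whether a node is served at all, while CL also credits partial redundancy losses, which is why the two indicators can rank the same damaged network differently.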
Using these boundary values for the grading system enables the asset owner Gasunie-GTS to relate the outcome of the stress test to its own QRAs as performed with CAROLA using non-earthquake-related failure frequencies. For illustrative purposes only, indicative grading boundaries were attributed to the values of the performance parameter connectivity loss (CL). No actual calibrations of these bounds with respect to economic loss or fatalities exist yet for the sub-network at hand, so the grading is indicative and provisional. The stress test has been performed up to level ST-L2a with earthquakes as the single hazard. The targeted accuracy level is classified as Advanced. The method adopted for the component-level assessment is risk-based, while the system-level assessment was performed according to the performance-based earthquake engineering (PBEE) framework. Site-specific hazard analyses were performed and structure-specific fragility functions were used.
PHASE 2: Sampled results (failure, no failure) per component from the Monte Carlo (MC) network analysis were used to calculate annual failure frequencies for pipes and stations. The methodology for evaluating the seismic performance of the network under study consists of five major steps:
o Seismic hazard assessment of the region considering gas depletion as the source of seismic activity.
o Evaluation of the PGA, PGV and displacement (liquefaction) hazard, in order to estimate the seismic demand.
o Seismic demand evaluation at each facility and the pipe sections within the network to obtain the failure probability using appropriate fragility curves.
o Vulnerability analysis through the use of a connectivity algorithm to integrate the damage of facilities and pipe sections into the damage of the system.
o Probabilistic risk assessment of the case study using MC simulation in terms of mean functionality and annual exceedance curve.
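The five steps above can be sketched as a toy end-to-end Monte Carlo loop. The event model, the fragility medians and the two-pipe topology are invented for illustration and bear no relation to the actual Groningen figures; only the lognormal fragility form follows the report:

```python
import math
import random

def frag(pga, median, beta):
    """Lognormal fragility: probability of component damage given PGA."""
    return 0.5 * (1.0 + math.erf(math.log(pga / median) / (beta * math.sqrt(2.0))))

def simulate(n_events=20000, seed=7):
    rng = random.Random(seed)
    served = 0
    for _ in range(n_events):
        # steps 1-2: sample a seismic demand level (illustrative lognormal PGA, in g)
        pga = rng.lognormvariate(math.log(0.05), 0.8)
        # step 3: sample component failures from fragility curves
        pipe_a_ok = rng.random() > frag(pga, 0.60, 0.5)
        pipe_b_ok = rng.random() > frag(pga, 0.45, 0.5)
        station_ok = rng.random() > frag(pga, 0.24, 0.6)
        # step 4: connectivity - the demand node is served if the station works
        # and at least one of two redundant pipes survives
        if station_ok and (pipe_a_ok or pipe_b_ok):
            served += 1
    # step 5: mean functionality over all sampled events
    return served / n_events

mean_functionality = simulate()
```

Replacing the two-pipe toy graph with the real 1000 km sub-network turns step 4 into a graph connectivity search per simulation, which is where most of the computational cost lies.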
The probability of soil liquefaction was investigated for the soil conditions in the Groningen area. The study is based on the Idriss-Boulanger model and is described in detail in Miraglia et al. (2015). Two soil profiles based on CPT tests were analysed by describing the soil properties as stochastic parameters and sampling the liquefaction response of the layers under earthquake events. The sampling results were summarized as a fragility curve as a function of PGA. Soil liquefaction can cause permanent displacements, manifesting as lateral displacements and/or settlements. In addition, depending on the weight of the pipe segments relative to the volumetric weight of the liquefied soil, pipe segments will start floating or sinking due to gravity. Of these three aspects only the last is considered: substantial lateral spread is not expected in the flat Groningen area and settlements are assumed to be confined to tens of centimetres, whereas uplift due to buoyancy can reach the value of the soil cover depth. Structural reliability calculations were performed for each of the 136 pipe configurations (distinct sets of values for diameter, wall thickness, yield stress and gas pressure). In these calculations the pipe properties were treated as stochastic variables. The limit state function was formulated as the von Mises stress, due to gas pressure as well as bending from uplift, against the yield stress. The bending stress due to uplift was calculated with a mechanical model in which the pipe is embedded in stiff soil at its ends and is allowed to uplift towards ground level; the length of pipe in liquefied soil was adjusted during the limit state evaluations such that this maximum value (the cover depth) is reached. Gas pressure was modelled as a normal distribution with a 10% coefficient of variation. Diameter, wall thickness and yield stress are log-normally distributed with coefficients of variation of 3%, 3% and 7%, respectively.
Finally, the soil cover depth was also modelled stochastically: normally distributed with a mean value of 1.5 m and a 20% coefficient of variation. FORM analyses were performed for each pipe configuration and reliability indices were calculated as a proxy for the probability of failure given liquefied soil. For transient load effects, structural reliability calculations were performed for different pipe geometries with a limit state based on Newmark’s shear-wave formulae for the seismic strain of buried pipelines. In addition, stresses due to gas pressure and due to initial curvatures in the pipeline stretches were accounted for. The same stochastic parameters as above were used for diameter, wall thickness, pressure and yield stress. The curvature was modelled as a lognormal distribution with a mean of 2000 m and a 100 m standard deviation, and the shear wave velocity Vs30 as a lognormal stochastic variable with a mean of 200 m/s and a standard deviation of 20 m/s. Length effects of transient-loading-related pipe failure were modelled by implementing a repair rate model according to ALA (2001). For the stations, existing fragility curves were selected. The curve chosen corresponds to a moderate damage state and is described as a lognormal cumulative distribution with a mean of 2.4 m/s² and a coefficient of variation of 60%. The moderate damage state was selected since it is the first damage state with malfunctioning severe enough to cause connectivity loss and a decrease in serviceability ratio. In addition, the station types vary between compressor stations, measure and regulation stations, reducing stations and a mixing station, all with different mechanical and/or electrical components, some sheltered in one-storey masonry buildings and others in the open air. Choosing the moderate damage state is partly motivated by selecting a conservative envelope for all types.
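As a rough stand-in for the FORM analyses (which iterate to the design point rather than sampling), a crude Monte Carlo estimate of one pipe configuration's limit-state probability could look like this. The pipe dimensions, the fixed uplift bending stress and the mean values are illustrative assumptions; only the limit-state form (von Mises stress vs. yield) and the stated coefficients of variation follow the report:

```python
import math
import random

def lognorm_params(mean, cov):
    """Convert mean and coefficient of variation to lognormal mu, sigma."""
    sigma = math.sqrt(math.log(1.0 + cov * cov))
    return math.log(mean) - 0.5 * sigma * sigma, sigma

def failure_probability(n=200000, seed=3):
    """Crude MC stand-in for FORM on one pipe configuration: von Mises
    stress from hoop (gas pressure) plus uplift bending vs. yield stress."""
    rng = random.Random(seed)
    d_mu, d_sg = lognorm_params(0.324, 0.03)   # diameter [m], CoV 3% (assumed mean)
    t_mu, t_sg = lognorm_params(0.0063, 0.03)  # wall thickness [m], CoV 3%
    f_mu, f_sg = lognorm_params(360e6, 0.07)   # yield stress [Pa], CoV 7%
    fails = 0
    for _ in range(n):
        D = rng.lognormvariate(d_mu, d_sg)
        t = rng.lognormvariate(t_mu, t_sg)
        fy = rng.lognormvariate(f_mu, f_sg)
        p = rng.gauss(8e6, 0.10 * 8e6)         # gas pressure, normal, CoV 10%
        s_hoop = p * D / (2.0 * t)             # thin-wall hoop stress
        s_bend = 150e6                         # uplift bending stress (assumed fixed)
        # von Mises combination of hoop and axial (bending) stress components
        s_vm = math.sqrt(s_hoop**2 - s_hoop * s_bend + s_bend**2)
        if s_vm > fy:
            fails += 1
    return fails / n

pf = failure_probability()
```

With these illustrative numbers the estimated probability is essentially zero, which illustrates why FORM, being efficient precisely for very small probabilities, is the method actually used in the study.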
Seismicity, the network and the network properties were modelled with the OOFIMS tool and Monte Carlo simulations were performed. Connectivity loss (CL) and serviceability ratio (SR) were defined as the primary network performance indicators in the stress test (Fig. 7c). The analysis showed a good performance with respect to CL: the annual probability of a connectivity loss of e.g. 50% or more is 3.6·10-5. The annual exceedance frequencies for the serviceability ratio were very high for all values of SR, dropping only at the very end of the loss axis as SR approaches one. This indicates a high robustness of the network, reflecting a vast redundancy in possible paths between demand and source nodes.
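Annual exceedance frequencies of a performance-loss level follow from the event-based MC samples by scaling the exceedance fraction with the annual event rate (30 events per year with M≥1.5 in the adopted model). The CL samples below are invented for illustration:

```python
def annual_exceedance_rate(cl_samples, threshold, events_per_year):
    """lambda(CL >= threshold) ~ nu * P(CL >= threshold | event)."""
    n_exceed = sum(1 for cl in cl_samples if cl >= threshold)
    return events_per_year * n_exceed / len(cl_samples)

# Illustrative: 100000 sampled events, 4 of which gave CL >= 0.5
cl = [0.0] * 99996 + [0.6, 0.7, 0.55, 0.9]
rate = annual_exceedance_rate(cl, 0.5, 30.0)   # 30 * 4/100000 = 1.2e-3 per year
```

Because most sampled M≥1.5 events cause no damage at all, very many simulations are needed before the tail estimate (rare, large CL values) stabilizes.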
PHASE 3: An “as low as reasonably practicable” (ALARP) grade of the risk was targeted for the gas transport network to pass the stress test. The following results were obtained for the pipe sections and the stations:
o Pipe sections: Most pipe sections obtained grade AA, some obtained grade A. The pipe sections passed the stress test.
o Stations: Most stations were classified within grade AA or A. Some, near or within the seismic zone, obtained grade B. The stations partly passed the stress test.
These findings are obtained despite a number of conservative assumptions with respect to the fragilities. The seismic demand was also modelled conservatively, with a maximum magnitude of 6 and a b-value of 0.8 for seismic zone 3 in the Gutenberg-Richter model. Disaggregation analyses were performed with respect to seismic events (magnitudes and zones), pipe components and stations. The first network performance losses (CL>0.1) are found from M>3.5 onwards, and for extreme performance losses (CL>0.9) a substantial contribution from seismic zone 4 is observed. Pipe sections with a major contribution to performance loss were identified by selecting events with a low performance loss (CL<=0.1); specific pipe sections to the south-west and north-east of seismic zone 3 were localized. As all stations were modelled with the same fragility functions, the only discriminating factor in selecting the most vulnerable ones is their distance to possible seismic sources. Combined with the number of pipe connections at the corresponding interconnecting stations, their possible contribution to the network performance loss can be identified. As sensitivity analysis, the impact on network performance of the maximum event magnitude and of the annual event rate was investigated. With respect to the component grading, confining the maximum events to M=5 leads to all stations being in grade A or AA; likewise, when the annual rate is set to 23 per year, no stations in grade B remain. With respect to the network as a whole, a strong redundancy in the paths from demand to source nodes was taken into account, which is a decisive feature in obtaining the stress test results. With respect to components, both types (pipe sections and stations) are found to contribute evenly to the network performance indicators, as can also be concluded from the component-level assessments. From these:
• Specific pipe sections can to some extent be identified as weak links in the network. These sections should be checked for their current actual state to assess the need for upgrading.
• For the stations, a rather strong assumption was made with respect to the fragility curve adopted. This should be quantified in more detail and, depending on the findings, retrofitting of stations might be necessary.
In the current analysis soil liquefaction was the dominant failure mechanism. As much uncertainty still exists in the liquefaction fragilities for the Groningen area, further studies into these fragilities and their geographical distribution are recommended.
PHASE 4: Reporting, in terms of the grade, the critical events, the guidelines for risk mitigation and the accuracy of the methods adopted in the stress test, is accomplished in the report D6.1. In addition to this report, a presentation was given at Gasunie-GTS.

B3) Application of stress test concepts to port infrastructures of Thessaloniki, Greece
PHASE 1: A GIS database for the port facilities was developed by the Research Unit of Soil Dynamics and Geotechnical Earthquake Engineering at Aristotle University of Thessaloniki, in collaboration with the Port Authority, in the framework of previous national and European projects, and was further updated in the STREST project. Waterfront structures, cargo handling equipment, buildings (offices, sheds, warehouses etc.) and the electric power supply system were examined. The SYNER-G taxonomy was used to describe the different typologies. The waterfront structures include concrete gravity block-type quay walls with simple surface foundations and non-anchored components. The cargo handling equipment has non-anchored components without backup power supply. Four gantry cranes, located in the western part of the 6th pier, are used for container loading-unloading services. The electric power supply to the cranes was assumed to be provided through non-vulnerable lines from the distribution substations present inside the port facilities; these are classified as low-voltage substations with non-anchored components. In total, 85 building and storage facilities were considered in the analyses. The majority are reinforced concrete (RC) buildings, principally low- and mid-rise infilled frame and dual systems with low or no seismic design. The steel buildings are basically warehouses with one or two floors, while the unreinforced masonry (URM) buildings are old low-rise and mid-rise structures. Soft alluvial deposits, sometimes susceptible to liquefaction, characterize the port subsoil conditions; the thickness of these deposits close to the sea may reach 150 m to 180 m. A comprehensive set of in-situ geotechnical tests (e.g. drillings, sampling, SPT and CPT tests), detailed laboratory tests and measurements, as well as geophysical surveys (cross-hole, down-hole, array microtremor measurements) in the broader port area provide all the information necessary to perform any kind of site-specific ground response analysis. Complementary geophysical tests, including array microtremor measurements, were conducted in the framework of the STREST project at four different sites inside the port using the SPatial AutoCorrelation (SPAC) method. A topo-bathymetric model was also produced for the tsunami simulations, based on nautical and topographic maps and satellite images. The elevation data also include the buildings and other structures that affect the waves while propagating inland; the resolution of the model is higher in the area of the port. The vulnerability of the port facilities at component level (i.e. buildings, waterfront structures, cranes etc.) is assessed through fragility functions, which describe the probability of exceeding predefined damage states (DS) for given levels of peak ground acceleration (PGA), permanent ground displacement (PGD) and inundation depth for the ground shaking, liquefaction and tsunami hazards, respectively. The fragility functions used to assess the damage due to liquefaction are generic, while the models used for ground shaking are either case-specific or generic. New seismic fragility curves were developed for typical quay walls and gantry cranes of the port subjected to ground shaking, based on dynamic numerical analyses in collaboration with the National Technical University of Athens. Analytical tsunami fragility curves as a function of inundation depth were developed for representative typologies of the port RC buildings, warehouses and gantry cranes (Karafagka et al., 2016; Salzano et al., 2015), while, for simplicity, the waterfront structures were considered non-vulnerable to tsunami forces.
The damage states are correlated with component functionality in order to perform the risk assessment at system level. The following assumptions were made: (i) the waterfront-pier (berth) is functional if damage is lower than moderate, (ii) a crane is functional if damage is lower than moderate and there is electric power supply (i.e. the physical damage of the substations is lower than moderate), and (iii) the berth is functional if the waterfront and at least one crane are functional. In the pre-assessment phase, specific risk measures and objectives are defined related to the functionality of the port at system level and the structural losses at component level. Since two terminals (container, bulk cargo) were assumed herein, the system performance is measured through the total number of containers handled (loaded and unloaded) per day (TCoH), in twenty-foot equivalent units (TEU), and the total cargo handled (loaded and unloaded) per day (TCaH), in tonnes. Risk measures related to structural and economic losses of the buildings were also set for the tsunami case and the scenario-based assessment. Since no regulatory boundaries currently exist for port facilities, continuous and scalar boundaries were defined based on general judgment criteria for the probabilistic and scenario-based system-wide risk assessments, respectively, in order to demonstrate the application of ST@STREST.
PHASE 2: In the component-level assessment, the aim was to check each component of the port independently for the earthquake and tsunami hazards, in order to show whether the component passes or fails the pre-defined minimum performance requirements implied by the current codes. A risk-based assessment was performed using the hazard function at the location of the component and the fragility function of the component. These two functions are convolved in the risk integral to obtain the probability of exceedance of a designated limit state in a period of time. The hazard function H(IM) is idealized in the form H(IM) = ko·IM^-k, where k is the logarithmic slope of the idealized hazard curve and ko is a constant that depends on the seismicity of the site. Proper k and ko values can be obtained by fitting the actual hazard curve, provided that the entire hazard function or at least two points of it are available. For the seismic case (i.e. ground shaking), k and ko were computed from the hazard curve corresponding to return periods of 475 and 4975 years for the normal and the extreme event respectively, based on the site-specific response analyses carried out for three representative soil profiles (scenario-based assessment). For the tsunami case, at least two points of the mean hazard function estimated from probabilistic tsunami hazard assessment at various locations in the port area were used to estimate these parameters. In this application the target probability of exceedance of the collapse damage state was set to 1.0·10-5, based on existing practice corresponding to an acceptable probability of 0.05% in 50 years, and was modified according to the EC8 prescriptions to account for the importance factor γΙ of the structure. To check whether or not a component is safe against collapse, the target probability was compared with the corresponding probability of exceeding the ultimate damage state.
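For a power-law hazard and a lognormal fragility, the risk integral described here reduces to the well-known Cornell-type closed form. The sketch below fits k and ko from two hazard points and evaluates the rate; the hazard points and fragility parameters are illustrative, not the port values:

```python
import math

def fit_power_law_hazard(im1, h1, im2, h2):
    """Fit H(IM) = ko * IM**-k through two points of the hazard curve."""
    k = math.log(h1 / h2) / math.log(im2 / im1)
    ko = h1 * im1**k
    return k, ko

def limit_state_rate(k, ko, median, beta):
    """Closed-form annual rate of limit-state exceedance for a lognormal
    fragility (median, beta) under the idealized power-law hazard:
    lambda_LS = H(median) * exp(k^2 * beta^2 / 2)."""
    return ko * median**-k * math.exp(0.5 * k**2 * beta**2)

# Illustrative: hazard of 1e-3 at PGA 0.2 g and 1.25e-4 at 0.4 g  ->  k = 3
k, ko = fit_power_law_hazard(0.2, 1.0e-3, 0.4, 1.25e-4)
rate = limit_state_rate(k, ko, median=0.8, beta=0.4)
```

The exponential term shows why fragility dispersion matters: for k = 3 and beta = 0.4 the rate is roughly double the bare hazard value at the fragility median.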
As an example, the proposed performance assessment approach was applied to a strategic building of the port, the passenger terminal, which is a low-rise infilled dual system (γΙ = 1.2). The probability of exceeding the ultimate damage state, which in this study corresponds to the collapse damage state, was computed and compared with the target probability of collapse for both the earthquake and tsunami hazards. The hazard function value at the location of the structure was estimated as 10-5 and 1.7·10-4 for the seismic and tsunami cases respectively, while the corresponding probabilities of collapse were computed as 1.4·10-3 and 2.0·10-4. These probabilities are higher than the target (acceptable) probabilities of collapse, estimated as 4.7·10-6 and 7.9·10-6 for the seismic and tsunami cases respectively, indicating that the structure is not safe against exceedance of the collapse limit state under the considered hazards. Similar results were generally derived for all buildings and infrastructures, providing a general assessment of the performance and resilience of the port. The system-wide probabilistic risk assessment (PRA) was made separately for ground shaking, including liquefaction, and for the tsunami hazard, according to the methodology developed in SYNER-G and extended in STREST (Kakderi et al., 2015). The objective was to evaluate the probability, or mean annual frequency (MAF), of events with the corresponding loss in the performance of the port operations. The analysis was based on an object-oriented paradigm in which the system is described through a set of classes, characterized in terms of attributes and methods, interacting with each other.
In the present application, the systemic analysis concerned the container and bulk cargo movements affected by the performance of the piers, berths, waterfronts and container/cargo handling equipment (cranes), while the interdependency considered was between the cargo handling equipment and the Electric Power Network (EPN) supplying the cranes. The capacity of the berths is related to the capacity of the cranes (lifts per hour/tonnes per hour). The functionality state of each component and of the whole port system was assessed based on the computed physical damage, taking into account system inter- and intra-dependencies. Regarding the analysis of the interdependencies, we assumed that if a crane node is not fed with power by the reference EPN node (i.e. electric supply station) and the crane does not have a back-up supply, then the crane itself is considered out of service. The functionality of the demand node is based on EPN connectivity analysis. The seismic hazard model provides the means for: (i) sampling events in terms of location (epicentre), magnitude and faulting type according to the seismicity of the study region and (ii) producing maps of sampled correlated seismic intensities at the sites of the vulnerable components in the infrastructure (the “shakefields” method). When the fragility of components is expressed with different IMs, the model assesses them consistently. Five seismic zones with Mmin=5.5 and Mmax=7.5 were selected based on the results of SHARE, and a published GMPE was used to estimate the outcrop ground motion parameters. Seismic events were sampled for the seismic zones affecting the port area through a Monte Carlo simulation (10,000 runs). For each site of a regular grid discretizing the study area, the average of the primary IM (PGA) from the specified GMPE was calculated, and the residual was sampled from a random field of spatially correlated Gaussian variables according to the spatial correlation model.
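The residual-field sampling of the “shakefields” step can be sketched with a hand-rolled Cholesky factorization. The three-site geometry, the exponential correlation form and its range are assumptions for illustration; the project uses a published spatial correlation model over the full component grid:

```python
import math
import random

def exp_correlation(d_km, corr_range_km=15.0):
    """Exponential spatial correlation of GMPE residuals (range is assumed)."""
    return math.exp(-3.0 * d_km / corr_range_km)

def cholesky(a):
    """Lower-triangular Cholesky factor of a symmetric positive-definite matrix."""
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(a[i][i] - s)
            else:
                L[i][j] = (a[i][j] - s) / L[j][j]
    return L

# Three sites with pairwise distances (km): correlation matrix and one sample
dist = [[0.0, 2.0, 5.0],
        [2.0, 0.0, 3.0],
        [5.0, 3.0, 0.0]]
C = [[exp_correlation(dist[i][j]) for j in range(3)] for i in range(3)]
L = cholesky(C)
rng = random.Random(11)
u = [rng.gauss(0.0, 1.0) for _ in range(3)]
# Correlated residuals to be added to the median ln(PGA) at each site
eps = [sum(L[i][k] * u[k] for k in range(3)) for i in range(3)]
```

Sampling eps = L·u with independent standard normals u reproduces the target correlation matrix C, so nearby sites receive similar residuals, which is exactly what drives correlated multi-component failures in the network analysis.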
The primary IM was then retrieved at the vulnerable sites by distance-based interpolation, and finally the local IM was sampled conditionally on the primary IM. To scale the hazard to the site conditions, the amplification factors proposed in EC8 were used in accordance with the site classes defined in the study area. HAZUS and the modelling procedure of OpenQuake were applied to estimate the permanent ground displacements (PGDs) due to liquefaction. The performance indicators (PIs) of the port system for both the container and the cargo terminal were evaluated for each simulation of the MC analysis, based on the damage and corresponding functionality state of each component and considering the interdependencies between components. The final computed PIs were normalized to the value for normal (non-seismic) conditions, assuming that all cranes work at their full capacity 24 hours per day. For performance loss values below 40%, TCaH yields higher values of exceedance frequency, while for performance losses over 40%, TCoH yields higher values. A full Seismic Probabilistic Tsunami Hazard Analysis (SPTHA) for tsunamis of seismic origin, following Lorito et al. (2015) and the ASTARTE project, was developed based on inundation simulation of the Thessaloniki area; the focus was only on tsunamis of seismic origin. A very large number of numerical simulations of tsunami generation, propagation and inundation on high-resolution topo-bathymetric models is in principle required to give a robust evaluation of SPTHA at a local site. To reduce the computational cost, while keeping the results stable and still exploring the full variability of the sources, a method was developed to approach the uncertainty in SPTHA.
For the Thessaloniki port (Selva et al., 2016), a regional SPTHA was considered which accounts for all the potential seismic sources in the Mediterranean Sea (>10^7 sources), implementing a large number of alternative models (>10^5) to explore the epistemic uncertainty. A two-layer filtering procedure was then applied, obtaining 253 representative scenarios which can be modelled to approximate the total hazard. The numerical simulations were performed using a non-linear shallow-water multi-GPU code, with 4-level nested bathymetric grids with a refinement ratio of 4 and resolution increasing from 0.4 arc-min (~740 m) to 0.1 arc-min (~185 m) to 0.025 arc-min (~46 m) to 0.00625 arc-min (~11 m). The results were input to an ensemble model, in order to quantify at each point of the finest grid the hazard curves, along with the epistemic uncertainty, for two intensity measures: maximum flow depth and maximum momentum flux. To assess the tsunami risk, a hazard module was developed to enable sampling among the 253 representative scenarios, considering the probability of occurrence of the cluster of sources that each scenario represents. This procedure is possible for any preselected alternative model used as input to the SPTHA ensemble, enabling the propagation of hazard epistemic uncertainty into the risk analysis. The inundation simulation results for each sampled scenario are then loaded, in order to retrieve the tsunami intensity at any selected location. Given that the inundation simulation does not account for potential collapses, the tsunami intensity should be retrieved near each component's perimeter and outside the structure. In order to avoid unwanted biases (e.g. retrieving the tsunami intensity over the roof of a building, where the building height is subtracted from the modelled tsunami flow depth), a characteristic radius was assigned to each component and the largest intensity value within the defined circle was used.
Damage and non-functionality were then sampled from the respective fragility curves and the retrieved tsunami intensities. The analysis was implemented for the port infrastructures (cranes, electric power network components and individual buildings) and the PIs for the analysed system were evaluated. The container terminal is not expected to experience any loss (TCoH), while the loss in the cargo terminal (TCaH) is negligible. This is due to the non-vulnerable condition of the waterfront structures, the high damage thresholds for the cranes (i.e. high inundation values that are not expected in the study area) described in the fragility curves used in the application, and the distance of the electric power substations from the shoreline. The annual probabilities of building collapse are also low; as an example, 10% of the total buildings in the port (~9 structures) would be completely damaged under tsunami forces with an annual probability of 5·10-5. A scenario-based system-wide seismic risk analysis was performed, complementary to the classical PRA approach described previously, to identify as accurately as possible the local site response in the port area and to reduce the corresponding uncertainties. Two different seismic scenarios were defined in collaboration with a pool of experts: the standard seismic design scenario and an extreme scenario, corresponding to return periods of Tm=475 years and Tm=4975 years respectively. For the 475-year scenario, the target spectrum was defined based on the disaggregation of the probabilistic seismic hazard analysis. This study showed that the most significant contribution to the seismic hazard for Thessaloniki port is associated with the Anthemountas fault system (i.e. a normal fault) regardless of the return period. In particular, for the 475-year scenario, the maximum annual exceedance probability for a given PGA value was provided by a moment magnitude Mw of 5.7 at an epicentral distance Repi of 14.6 km.
For the 4975 years scenario, an extreme rupture scenario breaking the whole Anthemountas fault zone with a characteristic magnitude Mw of 7.0, close to the maximum magnitude of the seismic source, was assumed. In addition to magnitude and distance, both hazard scenarios include an error term (measuring the number of standard deviations of the logarithmic residuals accounted for in the GMPE) that is responsible for an appreciable proportion of the spectral ordinates, and its contribution grows with the return period. Thus, the median spectral values plus 0.5 and 1 standard deviations were considered for the 475 years and the 4975 years scenarios respectively. A set of 15 accelerograms, referring to rock or very stiff soils and fitting the target spectrum on average, was selected for the 475 years scenario. For the extreme scenario, 10 synthetic accelerograms were computed to fit the target spectrum (4975 years scenario I) and broadband ground motions were generated using 3D physics-based “source-to-site” numerical simulations (4975 years scenario II). Three representative soil profiles (denoted A, B and C) were considered for the site response analyses, with fundamental periods equal to 1.58 s, 1.60 s and 1.24 s respectively. The soil profiles have been defined based on previous studies and new measurements. 1D equivalent-linear (EQL) and nonlinear (NL) site response analyses, including the potential for liquefaction, were carried out for the three soil profiles, using as input motions at the seismic bedrock those estimated for the 475 years and 4975 years seismic scenarios (I and II). The existing numerical codes Strata and Cyclic1D were used. To investigate the impact of the uncertainty in the shear wave velocity (Vs) profiles, the analyses were performed for the basic geotechnical models considering a standard deviation of the natural logarithm of Vs equal to 0.2.
In particular, 100 realizations of the Vs profiles were considered in Strata using Monte Carlo simulations, and the calculated response from each realization was then used to estimate statistical properties of the seismic response. In total, 1500 and 1200 simulations were performed for the 475 and 4975 years (I and II) scenarios respectively. The corresponding site response variability was assessed in Cyclic1D considering, in addition to the basic Vs model, upper-range and lower-range models based on a logarithmic standard deviation of the Vs profile equal to 0.2, consistently with the Strata simulations. For the EQL approach, the results were presented in terms of PGA with depth, acceleration response spectra and spectral and Fourier ratios. For the NL approach, the variation with depth of horizontal and vertical PGD, maximum shear strain and stress, effective confinement and excess pore water pressure was also computed for each analysis. The spectral values and shapes are generally comparable between the two approaches for the 475 years scenario, while the response is very different for the extreme scenario, which is associated with increasing shear strain accumulation. For both scenarios, the EQL spectral shapes are flatter and have fewer period-to-period fluctuations than the NL ones. The lower spectral values predicted by the NL approach for the extreme seismic scenario could be attributed to liquefaction, which may also result in large permanent ground deformations that cannot be simulated by the EQL analysis. The results of the NL approach indicate that liquefaction is evident for all soil profiles and scenarios; however, for the extreme scenario the liquefiable layers are thicker and extend to greater depths (up to 35 m). Generally, low-frequency input motions increase the accumulation of lateral deformations and settlements.
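The Monte Carlo randomization of the Vs profiles with a logarithmic standard deviation of 0.2 can be sketched as below. The base profile is an invented example, and each realization here scales the whole profile by one lognormal factor, a simplification of the layer-wise correlation models used in codes such as Strata.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented base shear-wave velocity profile (m/s), one value per layer.
vs_base = np.array([180.0, 240.0, 320.0, 450.0, 800.0])
SIGMA_LN_VS = 0.2  # standard deviation of ln(Vs), as used in the analyses

def realize_vs_profiles(n_realizations):
    """Monte Carlo realizations of the Vs profile: each realization perturbs
    the base profile by a single lognormal factor (fully correlated across
    layers for simplicity)."""
    eps = rng.standard_normal(n_realizations)          # one epsilon per profile
    return vs_base[None, :] * np.exp(SIGMA_LN_VS * eps[:, None])
```

Running the site-response analysis for each realized profile and collecting the surface response then gives the statistical properties (e.g. median and dispersion of surface PGA) referred to in the text.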
The computed maximum horizontal displacement values for the basic geotechnical models are 4.5 cm and 18.6 cm for the 475 and 4975 years seismic scenarios respectively, while the corresponding vertical displacements (settlements) are 4.8 cm and 11.0 cm. The scenario-based risk assessment of the port buildings and infrastructures was initially performed taking into account the potential physical damages and corresponding losses of the different components of the port. Buildings, waterfront structures, cargo handling equipment and the power supply system were examined using the fragility models for ground shaking and liquefaction. In particular, the vulnerability assessment was performed for the 475 and 4975 years scenarios (I and II) based on the EQL and NL site-response analyses. The results from soil profile A, B or C were considered in the fragility analysis, depending on the proximity of each component to the location of the three soil profiles. For the EQL approach, the calculated PGA values at the ground surface from all analysis cases (i.e. 2200 analyses) for each soil profile were taken into account in the vulnerability assessment due to ground shaking. For the NL approach, the PGD values (horizontal and vertical) at the ground surface were considered in addition to the PGA values, in order to evaluate the potential damage to buildings and infrastructures due to liquefaction effects. Finally, the combined damages were estimated by combining the damage state probabilities due to liquefaction (PL) and ground shaking (PGS), based on the assumption that damage due to ground shaking is independent of, and does not affect, the damage due to liquefaction.
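Under the stated independence assumption, the combined damage state probability follows the standard union formula for independent events; a minimal sketch:

```python
def combined_damage_probability(p_gs, p_liq):
    """Probability of reaching a given damage state from either ground
    shaking (p_gs) or liquefaction (p_liq), assuming the two mechanisms
    act independently: P = P_GS + P_L - P_GS * P_L."""
    return p_gs + p_liq - p_gs * p_liq
```

For example, damage state probabilities of 0.3 (shaking) and 0.2 (liquefaction) combine to 0.44, slightly less than their sum because the overlap is counted once.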
Once the probabilities of exceeding the specified DS are estimated, a median ±1 standard deviation damage index was evaluated to quantify the structural losses as the ratio of the cost of repair to the cost of replacement, taking values from 0 (no damage, repair cost equal to 0) to 1 (complete damage, repair cost equal to the replacement cost). The spatial distribution of the estimated losses for buildings indicates that a non-negligible percentage of the port buildings is expected to suffer significant losses (higher than moderate). The median values of this percentage range from 7% for the design scenario (NL approach) to 37% for the 4975 years scenario I (EQL approach). This is to be expected, given that all buildings were constructed with low or no seismic code provisions. Among the considered building typologies, the RC structures appear to be less vulnerable than the steel and URM systems. The estimated losses also depend significantly on the analysis approach. In particular, the EQL approach is associated with higher damages and losses even for the design scenario, while for the NL approach losses to the cranes, waterfronts and electric power substations are expected solely for the 4975 years scenario I.
The systemic risk was assessed following the methodology presented in the previous section (PRA approach), again taking into account the interdependencies of specific components. The EQL approach is associated with a higher number of non-functional components for all considered seismic scenarios, whereas for the NL approach non-functional components are present only for the 4975 years scenario I. As also evidenced by the estimated functionality state of each component, the port system is non-functional both in terms of TCaH and TCoH for the 4975 years scenario I. A 100% and 67% performance loss is estimated for the TCoH and TCaH respectively when considering the EQL approach for the 475 years and 4975 years II scenarios, while the port is fully functional when considering the NL approach, both in terms of TCaH and TCoH, for the latter scenarios. Thus, among the four different outcomes determined for the extreme scenario for both PIs, the CI passes the stress test in the 4975 years scenario II with the NL method, which could be judged as the most reliable. It is noted that the estimated PIs do not change when considering the median +1 standard deviation damage indices in the computation of the components’ functionality. However, when the median −1 standard deviation damage indices are taken into account, a 100% performance loss is estimated only for the 4975 years scenario I, while the port is fully functional for all the other analysis cases, both in terms of TCaH and TCoH.
PHASE 3: With reference to both bulk cargo and container terminals, the port obtains grade B, meaning that the risk is possibly unjustifiable and the CI partly passes this evaluation. The basis for the redefinition of risk objectives in the next stress test evaluation is the characteristic point of risk, defined as the point associated with the greatest risk above the ALARP region. For the tsunami hazard, the CI receives grade AA (negligible risk) and, as expected in this example application, passes the stress test. It is seen that the CI may pass, partly pass or fail the specific evaluation of the stress test (receiving grades AA, B and C respectively) depending on the selected seismic scenario, the analysis approach and the considered risk metric (TCaH, TCoH). Based on the proposed grading system, for the case in which the port obtains grade B and partly passes the stress test, the BC boundary in the next stress test is reduced (i.e. BC: 53% performance loss) while the other boundaries remain unchanged. It is noted that different grades can be derived from the probabilistic and scenario-based assessments, varying between AA (for the scenario-based and probabilistic tsunami risk assessments) and C (for the scenario-based and probabilistic seismic risk assessments). It is also worth noting that the risk objectives and the time between successive stress tests should be defined by the CI authority and regulator. Since regulatory requirements do not yet exist for port infrastructures, the boundaries need to rely on judgments (see also Pitilakis et al., 2017).
PHASE 4: The final stage of the test involves reporting the findings, which are already summarized above and in table form in D6.1.

C1) Application of stress test concepts to industrial district, Italy
PHASE 1: In this phase of the stress test, all exposure, hazard and cost/loss data required to carry out a probabilistic risk assessment was sought, as well as data useful for the assessment of indirect losses (such as the customer base of each industrial facility). The exposure data was provided by the industrial partner in this case study: the Sezione Sismica, Regione Toscana. A database of 425 pre-cast reinforced concrete industrial facilities in the whole of Tuscany was provided, and a smaller database covering the 300 assets in the province of Arezzo was produced. The available exposure data included coordinates, year of construction, floor area, structural type, non-structural elements, and other data useful for identifying the value of contents, the type of business and the extent of the customer base. The data on the structural and non-structural features allowed each building to be assigned to one of 8 sub-classes (Type 1 refers to buildings with long saddle roof beams and Type 2 to buildings with shorter rectangular beams and a larger distance between the portals; V, H and M denote vertical cladding, horizontal cladding and masonry infill respectively). Only seismic hazard has been considered in this case study, as it is the predominant hazard to which the industrial building stock in Tuscany is exposed. In order to generate a large set of ground motion fields characterizing the seismicity of a given region, a probabilistic seismic hazard model comprising the following three components is required: a seismological/source model that describes the location, geometry and seismic activity of the sources; a ground-motion model that describes the probability of exceeding a given level of ground motion at a site, conditioned on a set of event and path characteristics; and a site condition model that describes the characteristics of the soil at each site.
SHARE has produced a European seismic hazard model with three source models (one based on area sources, one that uses fault sources and a third based on distributed seismicity). The three models can be used separately to produce a hazard model, although the SHARE consortium recommended that they be combined in a logic tree, together with additional logic tree branches describing the epistemic uncertainty in the GMPEs. The SHARE seismological models are available from the European Facility for Earthquake Hazard and Risk portal, and can be used to generate spatially correlated ground-motion fields with the Global Earthquake Model’s hazard and risk software, the OpenQuake-engine. The mean hazard map for Tuscany has been calculated with the OpenQuake-engine and the SHARE hazard model, in terms of PGA with a 10% probability of exceedance in 50 years. In order to account for site amplification, the Vs30 value of the soil at each location in the exposure model is needed. This is not currently available for the locations of the industrial facilities in the exposure model, and so an estimation of Vs30 based on a proxy (topography) has been employed. For the component-level stress test assessment carried out herein (wherein each industrial facility is considered as an individual component), the annual probability of structural collapse has been taken as the risk measure, and the required objective has been sought by reference to European design norms. Annual structural collapse probability values of 10^-5 for the boundary A-B and 2.0×10^-4 for the boundary B-C of the grading system of the STREST methodology have been selected. For the system-level assessment, two types of risk metrics have been considered for the stress test: the average annual loss and the mean annual rate of a specific level of loss.
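The component-level check against these boundaries reduces to a simple threshold comparison; the sketch below uses only the two boundaries quoted above (the full STREST grading system includes further grades such as AA):

```python
# Boundaries adopted in this application (annual collapse probability).
AB_BOUNDARY = 1.0e-5   # grade A-B boundary
BC_BOUNDARY = 2.0e-4   # grade B-C boundary

def component_grade(annual_collapse_prob):
    """Grade a single facility: A = pass, B = partly pass, C = fail."""
    if annual_collapse_prob <= AB_BOUNDARY:
        return "A"
    if annual_collapse_prob <= BC_BOUNDARY:
        return "B"
    return "C"
```

Applying this function to the collapse probability of each of the 300 facilities yields the per-component grades reported in the Decision Phase.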
Specific objectives for these risk metrics have not been defined by Regione Toscana, and so hypothetical values have been considered to illustrate the methodology. The following objectives were used for the total average annual loss: the boundary A-B is set at 0.05% of the total exposure value and the boundary B-C at 0.1%. For the second metric, the mean annual rate of a business interruption loss equal to 7 times the daily business interruption exposure (i.e. 10 Million Euro) should not be higher than 10^-4 (i.e. 1 in 10,000 years) for boundary A-B, while for boundary B-C the corresponding loss is that of 30 days of business interruption (i.e. 42 Million Euro).
PHASE 2: A risk-based component-level assessment has been undertaken for all 300 industrial facilities in Arezzo using hazard curves (i.e. PGA versus annual probability of exceedance) estimated with the OpenQuake-engine using the SHARE hazard model, amplified considering topography-based Vs30 estimates, together with the complete-damage structural fragility functions for each sub-class of structure derived in D4.3 (Babic and Dolsek, 2014; 2016; Casotto et al., 2014; 2015). For the system-level assessment, vulnerability models have been developed for each sub-class for structural, non-structural, contents and business interruption loss, following the methodology and assumptions outlined in D4.3. The SHARE logic tree model and the topography-based site conditions have been used to model the seismic hazard. In order to calculate the probabilistic seismic risk for a spatially distributed portfolio of assets in Arezzo, the Probabilistic Event-Based Risk calculator of the OpenQuake-engine has been employed. This calculator generates loss exceedance curves and risk maps for various return periods based on probabilistic seismic hazard, with an event-based Monte Carlo approach that allows both the spatial correlation of the ground motion residuals and the correlation of the loss uncertainty to be modelled. Loss curves and loss maps can be computed for five different loss types: structural components, non-structural components, contents, downtime losses and fatalities. The loss exceedance curves describe the probability of exceedance of different loss levels, and the risk maps describe the loss values for a given probability of exceedance over the specified time period. Additionally, aggregated loss exceedance curves can be produced with this calculator, describing the probability of exceedance of different loss levels for all assets in the exposure model.
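The core of the event-based calculation, counting how often simulated event losses exceed given levels over the length of the synthetic catalogue, can be sketched as follows. The event losses and catalogue length are invented illustrations, not OpenQuake outputs:

```python
import numpy as np

def loss_exceedance_curve(event_losses, n_years, loss_levels):
    """Mean annual rate of exceeding each loss level, estimated from a
    Monte Carlo event set spanning `n_years` of simulated seismicity."""
    losses = np.asarray(event_losses, dtype=float)
    return np.array([(losses > level).sum() / n_years for level in loss_levels])

# Illustrative: five loss-producing events over 10,000 simulated years.
rates = loss_exceedance_curve([1.0, 2.0, 2.0, 5.0, 8.0], 10_000, [0.5, 3.0])
# rates -> [5e-4, 2e-4]
```

The real calculator additionally correlates ground-motion residuals and loss uncertainty across sites before aggregating event losses, but the exceedance counting step is the same in spirit.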
The total loss results of the probabilistic risk assessment for the portfolio of industrial facilities in Arezzo have been produced in terms of a loss exceedance curve. Similar curves for each component of the loss (structural, non-structural, contents and business-interruption) have also been produced. The average annual losses (AAL) have been calculated from the loss exceedance curves and the results show that the largest component of loss is given by business interruption. The values of AAL as well as the mean annual rates of specific loss values will be checked against the risk objectives in the Decision Phase.
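The AAL mentioned above can be obtained either as the area under the loss exceedance curve or directly from the simulated event set; a minimal sketch of both, with invented numbers:

```python
import numpy as np

def average_annual_loss(loss_levels, exceedance_rates):
    """AAL as the area under the loss exceedance curve (trapezoidal rule);
    loss_levels must be in increasing order."""
    l = np.asarray(loss_levels, dtype=float)
    r = np.asarray(exceedance_rates, dtype=float)
    return float(np.sum(0.5 * (r[1:] + r[:-1]) * np.diff(l)))

def aal_from_events(event_losses, n_years):
    """Equivalent direct estimate: total simulated loss per simulated year."""
    return sum(event_losses) / n_years
```

For the same event set, the two estimates agree up to the discretization of the exceedance curve, which is a useful consistency check when post-processing the calculator outputs.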
PHASE 3: This step of the stress test requires a comparison of the results of the risk assessment with the risk objectives, to check whether the industrial facilities pass each level of the stress test. According to the grading system of the component test, 260 facilities are assigned grade B (partly pass) and 40 facilities are assigned grade C (and thus fail the stress test). The results also show that the A-B system-level assessment objective is not met, as the total AAL percentage is 0.052%, but the B-C objective is met; hence the grading would be B (partly pass) for this objective. The business interruption loss at a mean annual rate of exceedance of 10^-4 is 64 Million Euro (which translates to an average of 45 days of business interruption), and so the grading would be C (fail) for this objective. In order to provide guidance on how to mitigate the risk, disaggregation of the results has been carried out (Fig. 7d). In order to understand the indirect impact of the business interruption losses on the region and/or the whole country, the customer base of the facilities contributing to the average annual business interruption loss can also be presented (a similar calculation could be done for any value of loss calculated herein). 45% of the business interruption AAL is caused by facilities with a customer base that extends beyond the province of Arezzo, and could thus cause additional indirect losses at a regional, national and international scale (in decreasing order of importance). The 40 facilities that failed the component-level assessment should be targeted for structural investigation and potential upgrade; they all belong to the H1 sub-class (i.e. pre-code type 1 portal frame with horizontal cladding). The sub-typologies that contribute most to the total average annual losses are V2 (i.e. pre-code type 2 portal frame with vertical cladding), H1 (i.e. pre-code type 1 portal frame with horizontal cladding) and V3 (i.e. 
low-code type 2 portal frame with vertical cladding). Hence, in addition to investigating further the H1 sub-class buildings, the V2 and V3 typologies should also be addressed, and the customer base of each facility should be used as a prioritization tool to identify the facilities to investigate and potentially retrofit first, in order to also reduce the impact of indirect losses. Disaggregation of the hazard for the business interruption loss (which is also the largest contribution to the total loss) has identified that a wide range of events contribute to the loss, from lower-magnitude nearby events to higher-magnitude distant events. This implies that these losses are not driven solely by rare events, and thus mitigation efforts to protect against business interruption should be given high priority. Given that business interruption is directly related to structural and non-structural damage, this can be addressed through the retrofitting activities mentioned above.
PHASE 4: The final stage of the test involves reporting the findings, which are given above and summarized in table form in Deliverable D6.1.

ST@STREST has been applied and tested on six CIs in Europe, namely: a petrochemical plant in Milazzo, Italy (CI-A1); large dams of the Valais region, Switzerland (CI-A2); hydrocarbon pipelines, Turkey (CI-B1); the Gasunie national gas storage and distribution network, the Netherlands (CI-B2); the port infrastructure of Thessaloniki, Greece (CI-B3); and an industrial district in the region of Tuscany, Italy (CI-C1). Different stress test levels were selected according to the characteristics and available resources of each case study. The objective was to demonstrate how the proposed framework is implemented in different classes of CIs exposed to various hazards; reasonable assumptions or simplifications were therefore made in some steps of the applications. It is noted that the STREST consortium takes no responsibility for the research results provided in this report, as these results should not be considered formal stress tests.

Potential Impact:
STREST seeks to improve the security and resilience of critical infrastructures (CIs) against low-probability high-consequence natural hazards. The fundamental knowledge, methodologies and tools produced by the project provide the basis for a master plan for the coordinated implementation of stress tests for whole classes of CIs and systems thereof (including, in STREST, refineries, hydropower dams, oil pipelines, gas storage and distribution networks, harbours and industrial districts, i.e., both local and distributed CIs, all with high societal risk, be it due to direct or indirect consequences). The long-term impacts originating from the project, which will outlast its duration and ensure a structuring effect in Europe, refer to the reinforced European safety assessment capacity, improved and more reliable stress tests for CIs, support for decision making and prioritisation of mitigation options and support for preparedness, all leading to increases in societal resilience.

STREST provides best practices and robust methodologies for stress tests, in particular for the systematic identification of major hazards and potential extremes, infrastructure vulnerabilities and interdependencies, and systematic technology-neutral risk-based stress test workflow, in support of the implementation of the European policies for disaster risk reduction and the protection of national and European CIs. Furthermore, the correct assessment of risk is a pre-requisite of any long-term strategy for industrial and energy production in Europe. In a wider context, the results produced in STREST contribute to the faster attainment of the Sendai Framework target for reducing disaster damage to CIs.

The knowledge, procedures and tools developed by the project are useful on one hand for owners and operators of CIs to optimise CI maintenance and/or partial or complete replacement, develop the operator security plan and draft the regular reports on risks and vulnerability, and on the other hand for Member States authorities and urban/community planners to develop and update their national risk assessments, with the ultimate goal of increasing the resilience of CIs and societies to the effects of extreme events.

The networking with key organisations and programs in the USA and Asia (via earthquake engineering research labs from Caltech and Stanford, the Institute of Catastrophe Risk Management of Singapore) ensures the international perspective, harmonisation and knowledge transfer for the development of truly novel standards. In addition, clustering activities with previous and on-going projects (SHARE, SYNER-G, MATRIX, INFRARISK, RAIN, INTACT) on related issues gives added value to the European framework programme for research by defining a common understanding of terminology, sharing of good practice and harmonising indicators, metrics and methods. Furthermore, STREST benefits from the direct participation of representatives of a broad range of CIs and industry (consultants from the ENI/Kuwait Milazzo petrochemical plant, The Swiss Federal Office of Energy, regulator for the Valais dams of Switzerland, BOTAS International Ltd., operator of the Baku-Tbilisi-Ceyhan Crude Oil Pipeline, Gasunie Transport Services, owner of the national natural gas pipeline system, the Netherlands, Thessaloniki Port Authority SA, industrial representatives from the Tuscany region of Italy) to ensure the relevance of the products and outcomes, and the communication to the wider community.

STREST conceived a dissemination plan to transform the results and new methodologies developed by the project in protocols and reference guidelines for the wider application of stress tests (see more below). The planned activities are a key instrument for dissemination to the scientific and technical communities, as well as to policy and decision makers at European, national, regional and local levels. Overall, these activities will have an impact on the society at large, by incorporating stress test methodologies in current management and long-term planning of non-nuclear CIs, and ultimately by the enhancement of societal resilience.

Public acceptance of existing and new technologies in CIs has been eroded by a number of technical accidents and failures initiated by natural events. The coherent assessment of risk and safety enabled by the implementation of the STREST methodology and framework will help increase public acceptance of critical technologies and infrastructures, while the test applications illustrate the benefits of improved hazard and risk assessment for key critical sites in Europe. Moreover, the sensitivity analyses conducted on advanced hazard studies combining regional and site-specific assessments will enable the development of guidelines for improved surveillance capacity at CI sites, as well as for future CI design and construction plans.

STREST developed a harmonised multi-hazard and risk process for stress tests and advanced the state-of-the-art in hazard and vulnerability assessment of non-nuclear CIs against low-probability high-consequence natural events (and implicitly against the more common events). It is now recommended to:
1) Promote the application of the methodology, taking benefit of the exploratory applications on six CIs;
2) Initiate a dialogue (possibly via workshops) between European civil infrastructure operators, regulators and users to establish, where needed, and harmonize the societal risk tolerance objectives;
3) Initiate the drafting of guidelines for the application of harmonised stress tests, making use of the knowledge base and tools developed within STREST.

At present, the STREST guidelines provide the best practices and methodologies, together with new scientific developments, for hazard and risk assessment. They will ultimately contribute to the objectives of the European policies for increased resilience of CIs and of the Sendai Framework for the reduction of disaster damage. Those guidelines (available in both online and print formats, via the EU BookShop) are:
RR-1: State-of-the-art and lessons learned from advanced safety studies and stress-tests for CIs;
RR-2: Guidelines for harmonized hazard assessment for LP-HC events;
RR-3: Guidelines for harmonized vulnerability and risk assessment for CIs;
RR-4: Guidelines for stress-test design for non-nuclear critical infrastructures and systems: Methodology;
RR-5: Guidelines for stress-test design for non-nuclear critical infrastructures and systems: Applications;
RR-6: STREST project policy brief.
Each report is addressed to specific groups of stakeholders, including – but not limited to – owners and operators of critical infrastructures, authorities and regulators, scientific community, technical community, public European and national administration and Civil Protection.

STREST additionally organized two workshops. The first STREST workshop was held on 29-31 October 2014 at the Joint Research Centre, Ispra, Italy. The main objective was to explore synergies between FP7 projects related to the topic of STREST, in particular in what concerns extreme events and cascades, CI taxonomy, and stress test methods. Researchers involved in those projects were considered as stakeholders in the mid-stage of STREST. The workshop was attended by more than 60 participants from partner institutions of STREST as well as of other FP7 projects, namely ASTARTE, INDUSE2, INFRARISK, INTACT, PREDICT and RAIN. The discussions between the different projects led to the following conclusions:
a) Areas where common work would be beneficial include a common approach to uncertainty estimation, the review of “good practice” in risk analysis, the harmonization of hazard indicators and risk metrics, and a wider involvement of stakeholders;
b) A panel of experts could help ensure that the methods developed in different projects are compatible and identify whether they can be transposed to other projects for tests on additional applications;
c) A coordinated support action from the European Commission would be needed to achieve results at inter-project level, such as a harmonized taxonomy for critical infrastructures or a common method for cascade modelling.
The second workshop took place on 16 September 2016 in Ljubljana, Slovenia. The main objectives of the workshop were to present to stakeholders the STREST stress test methodology and the final results of the exploratory applications, and to discuss with representatives of other research projects (INFRARISK, RAIN, INTACT) their work and possible future steps. It was attended by more than 40 participants, grouped as follows: STREST partners and associated industry partners, European research projects funded under the FP7 Security theme, operators of critical infrastructures, the European Chemical Industry Council, and research centres and universities. The participants agreed on the need for joint dissemination beyond the individual projects (STREST and others), with a view to the development of European guidelines for stress tests. It is essential to overcome the difficulties in involving regulating authorities, owners and operators of CIs in all stages of the development and implementation of stress tests. Their cooperation is valuable for the collection of input data, the definition of common risk levels, their needs and experience in risk management of extreme events, and comments for the improvement of the stress test methodology. The widest possible range of stakeholders should be contacted and invited to interact through a number of workshops, case studies and short meetings on specific infrastructures. The benefits for stakeholders will be the opportunity to shape the stress test methodology and the acquisition of know-how for assessing, managing and communicating risk in a harmonised manner applicable to all types of critical infrastructures.

Twenty-nine articles have so far been produced by the STREST project, including sixteen in peer-reviewed journals and thirteen in conference proceedings. STREST also disseminates its results to the general public via its website, where general summaries are given and most articles are freely available for download. The results of the project are communicated to the scientific and technical communities through the participation of STREST partners in key international scientific conferences related to the project, such as the 12th International Conference on Applications of Statistics and Probability in Civil Engineering (12-15 July 2015, Vancouver), the 13th ICOLD International Benchmark Workshop on Numerical Analysis of Dams (Sep 2015, Lausanne), the 6th Transport Research Arena (18-21 April 2016, Warsaw), the 1st International Conference on Natural Hazards and Infrastructure (28-30 June 2016, Greece) and the 6th International Disaster and Risk Conference (28 August-01 September 2016, Davos). More papers will be submitted in the near future, including “master-papers” presenting the main aspects of STREST on uncertainty management (EU@STREST), the stress test framework (ST@STREST) and the combined pilot site applications. Additional presentations about the final outcomes of STREST will be made at the 16th World Conference on Earthquake Engineering (9-13 January 2017, Chile) and at the 16th European Conference on Earthquake Engineering, with a special session on STREST (18-21 June 2018, Thessaloniki).

The dissemination plan of STREST includes activities for presenting the project and its research products to the non-specialised media. Articles, including interviews, were published on web platforms and in electronic magazines (Pan European Networks: Government; Horizon, the EU Research & Innovation Magazine; XL Group insurance). The STREST project is also featured in a short documentary by Euronews within its Futuris series. Euronews is Europe’s leading channel in terms of viewers and distribution, broadcasting worldwide to 335 million households in 155 countries in 13 languages (Arabic, English, French, German, Greek, Hungarian, Italian, Persian, Portuguese, Russian, Spanish, Turkish and Ukrainian). The documentary gives an overview of the project and its objectives, the STREST framework for stress tests and selected results of the exploratory applications (with a focus on a Swiss dam):
Euronews (2016), Imagining the worst for Europe’s riskiest assets. Futuris series, 2 May 2016:
Euronews (2016), Takeaway: facing extreme risks. Futuris series, 2 May 2016:
STREST published an informational factsheet providing general information about the project, and three newsletters (December 2014, June 2015 and April 2016) with updates on the progress of the scientific and technical tasks within the different work packages, together with information about the most important meetings and the outcome and conclusions of the dissemination workshop. Furthermore, a 32-page high-quality brochure presents the background and objectives of the project as well as a summary of the work performed on the review of the state of the art, on hazard and vulnerability assessment, and on the development of the STREST stress test methodology. One section is devoted to the key results from the exploratory applications on the six test sites. The major achievements and impact of the project are highlighted, together with a number of recommendations toward the development of harmonised guidelines for stress tests of critical infrastructures. The brochure also includes a list of papers published in peer-reviewed journals and conference proceedings up to the end of the project.
STREST established technical dialogue with relevant ongoing FP7 and H2020 projects and identified areas where common work would be beneficial. These include a common approach to uncertainty estimation, a review of good practice in risk analysis, and the harmonisation of hazard indicators and risk metrics. The interaction is planned to continue through participation in different project meetings and, if possible, in future projects. A coordinated support action from the European Commission would be needed to capitalise on the wealth of knowledge and tools produced within the EU Framework Programme for Research and Innovation. This would make it possible to achieve results at the inter-project level, for instance a harmonised taxonomy across projects and types of CIs (e.g. combining energy and transportation networks), a common method for cascade modelling (e.g. applied to both geological and hydrological hazards and industrial risks), and a harmonised set of risk tolerance objectives applicable across the range of civil infrastructure types and across Europe.
A panel of experts could help ensure that the methods developed in different projects are compatible and investigate whether they can be transposed to additional exploratory applications at different sites. Moreover, the panel could investigate the causes of possible discrepancies between the results of different projects, with a view to harmonised stress test methods.
Further actions to promote transnational cooperation and the wider involvement of stakeholders, mainly operators and regulators of CIs, should be undertaken. Such actions should show how the state-of-the-art tools produced by STREST and other recent and ongoing research projects may be used to provide scientific evidence for decision-makers to achieve a higher level of protection against the effects of extreme natural hazards, to communicate risk and mitigation measures to authorities and the general public, and to comply with legal requirements. Through their participation, stakeholders will have the opportunity to provide feedback on their needs and experience, and thus contribute to the development of guidelines. STREST has already identified a number of European associations of CI operators.

List of Websites:
Contact Us: Arnaud Mignan, Project Manager, | Domenico Giardini, Project Coordinator,
