Flash floods or landslides, blackouts or the breakdown of public transport are just some of the impacts that extreme weather can have on society and its infrastructure. But identifying a region’s infrastructure vulnerabilities and deciding how to make society more resilient remains a painstaking task. A method called the objective ranking tool (ORT) aims to support decision makers in choosing the right options when dealing with these complex challenges. Here, Peter Prak, owner of the consultancy PSJ Security & Judgment based in IJsselstein, the Netherlands, and a partner in the European project RAIN, explains how the tool works.

What is so special about the ORT?

The tool combines three scientific principles. The first is a mathematical model of so-called similarity judgment, which was developed within the field of cognitive psychology. The second is the analytic hierarchy process, which helps weigh and analyse criteria in complex group decisions. Finally, we use so-called Delphi panels for expert discussions and judgements. I chose this approach because I was looking for scientific methods to answer questions such as: which is the most vulnerable station in a railway system in terms of certain risks, like a terrorist attack or extreme weather events?

How exactly does this work?

The principle of similarity judgment helps define the most vulnerable objects by looking at how much these objects, such as two railway stations, have in common. But if you want to compare two objects, you have to define in detail which criteria to consider. Here, you involve all stakeholders, such as the railway operator and the police. Some of them will define criteria, to which you assign a relative weight. Then, again together with the stakeholders, you look at each criterion to determine whether it applies to your object, such as the station.

What is the outcome of this process?

The model provides a scorecard based on a fictitious benchmark object.
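The weighted-criteria scoring behind such a scorecard can be sketched as follows. The criteria, weights, and assessment values below are illustrative placeholders, not the ORT's actual criteria set; the benchmark is a fictitious object on which every criterion fully applies.

```python
# Minimal sketch of weighted-criteria scoring against a fictitious benchmark.
# Criterion names, weights, and assessments are made up for illustration.

def score(criteria, weights, assessment):
    """Weighted score of an object over a set of criteria.

    criteria:   list of criterion names
    weights:    dict criterion -> relative weight (agreed by stakeholders)
    assessment: dict criterion -> degree to which it applies (0.0 to 1.0)
    """
    total_weight = sum(weights[c] for c in criteria)
    raw = sum(weights[c] * assessment.get(c, 0.0) for c in criteria)
    return raw / total_weight  # normalised to the range 0..1

criteria = ["passenger volume", "evacuation routes", "flood exposure"]
weights = {"passenger volume": 3, "evacuation routes": 2, "flood exposure": 5}

# Fictitious benchmark: every criterion applies fully.
benchmark = {c: 1.0 for c in criteria}
# A hypothetical station assessed by the stakeholder panel.
station_a = {"passenger volume": 0.8, "evacuation routes": 0.4, "flood exposure": 0.9}

print(score(criteria, weights, benchmark))            # 1.0 by construction
print(round(score(criteria, weights, station_a), 2))  # 0.77
```

Reading the station's score relative to the benchmark's 1.0, and looking at which weighted criteria contribute most, is what makes the scorecard a starting point for discussion rather than a verdict.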
For example, a municipality can score badly on both resilience and vulnerability. The tool also gives you insight into why some scores are lower or higher than expected. And you can identify the criteria, such as the distribution of food and goods in a region, that you should primarily work on in the future. This leads to further discussions. Indeed, the most important part of the process is having the people around the table who really need to be involved in such decisions.

What have you achieved within the RAIN project?

Together with the Institute of International Sociology of Gorizia (ISIG), one of the project partners, we built a pilot to measure the vulnerability and resilience of municipalities to extreme weather. We reviewed the literature and developed about 52 criteria. We then modelled a fictitious region to see whether the outcome was reasonable. We also tested the pilot on two real municipalities, one in Italy and one in the Netherlands. The results provided new insights for the people in charge of the preparations and helped them make an improvement plan for the coming years.

How easily can this tool be applied to different extreme weather events or different regions in Europe?

Of course, preparations for hurricanes differ from preparations for heavy snowstorms. But you can have different versions of the model, and it only takes a few hours to adapt the criteria to your specific issue or region. The tool can be applied in any decision or ranking process, because the principles behind it are the same: you set the criteria, you obtain a score, and then you discuss the outcome. Also, after some guidance and training, people can use the tool by themselves.

What are the challenges of the approach?

You have to be very clear about the questions you want to ask. Also, when describing your criteria, you should use quantitative data. But if there is no data, you have to judge qualitatively, together with a team that really knows the local situation.
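The "set the criteria, obtain a score" step relies on agreeing relative weights among stakeholders. One common way to derive such weights, in the spirit of the analytic hierarchy process mentioned above, is from a matrix of pairwise importance judgements. The comparison values below are invented for illustration, and the row-geometric-mean approximation is a standard shortcut rather than the ORT's documented procedure.

```python
# Sketch: deriving criterion weights from pairwise comparisons
# (analytic hierarchy process style), using the row geometric mean
# as an approximation of the principal eigenvector.

from math import prod

def ahp_weights(matrix):
    """Approximate AHP priority weights from a pairwise comparison matrix."""
    n = len(matrix)
    gmeans = [prod(row) ** (1.0 / n) for row in matrix]  # geometric mean per row
    total = sum(gmeans)
    return [g / total for g in gmeans]                   # normalise to sum to 1

# Illustrative pairwise judgements for three criteria on Saaty's 1-9 scale:
# matrix[i][j] = how much more important criterion i is than criterion j.
comparisons = [
    [1.0, 3.0, 5.0],   # criterion A compared with A, B, C
    [1/3, 1.0, 2.0],   # criterion B
    [1/5, 1/2, 1.0],   # criterion C
]

weights = ahp_weights(comparisons)
print(weights)  # roughly [0.65, 0.23, 0.12]
```

Because the judgements come from stakeholders rather than measurements, the resulting weights are a structured record of group opinion, which is why the Delphi-style discussion of the outcome matters as much as the numbers.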
extreme weather, mathematics, modeling, infrastructure