
Security Impact Assessment Measure - A decision support system for security technology investments

Final Report Summary - SIAM (Security Impact Assessment Measure - A decision support system for security technology investments)

Executive Summary:
Today security is managed by an increasing range of diverse technological systems and measures. Given the variety of natures, contexts and actors in which these technologies are deployed, an adequate assessment methodology needs to take a whole range of questions and perspectives into account. Where, by whom, and for what purpose is a technology introduced in a certain field or area? Does its usage deliver what was promised? What is the impact on security? What are the costs? Will the investment be appropriate for handling the issue in the long run? What are the unintended effects? Who are the people that will be affected? Are people treated adequately, and are their personal and fundamental rights respected? How will customers and employees cope with it and adopt it? Security technologies are widely used, creating the need to raise and include questions of ethical and social implications in the assessment.

The project Security Impact Assessment Measures has developed methodologies and guidelines to aid the assessment of security measures and technologies in the context of mass transportation facilities. Such assessments need to take into account different stakeholders and their particular needs, wants, and responsibilities; the nature of the particular travel- and security processes inherent to mass transportation facilities; and the related range of legislative, cultural, economic, technical, ethical and other societal impacts that should be considered, before a new technological solution is introduced into a mass transportation facility.
The social and economic benefits of the Assessment Support Toolkit (AST) that we developed boil down to the following: (1) by inviting decision makers to answer sets of well-thought-out and scientifically tested questions, the AST enables them to anticipate a host of potential threats and costs that may jeopardize the introduction of security technologies and/or measures; (2) by requiring substantial evidence of the testable effectiveness of such technologies and/or measures, unnecessary costs that have no added value are pre-empted; and (3), finally, by referring decision makers to relevant legislation as well as ethical considerations, potential human rights violations and societal resistance, the AST makes it possible to foresee major obstacles to the implementation of security technologies and/or measures and to prevent unwarranted trade-offs that jeopardize the legitimacy and effectiveness of such technologies and measures.
Section 2 of this final report documents the model for assessment processes that forms the basis for both the AST and its encompassing methodologies. The latter are described in more detail in section 3. Section 4 elaborates on the potential socio-economic and wider societal impact of SIAM.

Project Context and Objectives:
Security Impact Assessment Measures (SIAM) is an EU-funded research project. SIAM was developed by a European consortium of six universities and four associated case study partners. The consortium was coordinated by Technical University Berlin and included Kingston University London, the Free University of Brussels, the University of Kassel, the University of Tel Aviv and the University of Torino. In order to ensure the inclusion of the end-user perspective on security impact assessments, Metro Torino, Airport Berlin Brandenburg, Ben Gurion Airport Tel Aviv and Deutsche Bahn AG were associated partners of SIAM. The overall objective was to create an assessment support system that takes the complexity of technologies, economic aspects, cultural differences and societal dimensions into account. Thus SIAM aimed to create a holistic assessment methodology for security measure technologies (SMTs).

Corresponding to this variety, there are diverse assessment approaches, characterized by their mutual delineation from each other. There are approaches that measure technological functionality and effectiveness against the promised security gain. Others take health effects into account, and still others focus on discriminatory effects and on freedom infringements. But to highlight one focus can mean dismissing others and thus producing outcomes that lead to wrong conclusions. Therefore SIAM took an innovative approach towards impact assessments that provides stakeholders with an assessment support system promoting an in-depth understanding of the diverging perspectives of the stakeholders that are involved in, affected by and concerned with security technologies. Furthermore, it aims at increasing the reflexivity of the individual and enabling social learning among different stakeholders through active participation. It also supports a holistic assessment of security technologies in a model that encompasses as many perspectives as possible in a well-structured and easily understandable manner.
In the following, the overall approach towards impact assessments that forms the basis for the SIAM AST is discussed. The graphic shows this approach as a whole and is followed by individual paragraphs describing the components of SIAM.

2.1 A database of questions
In a guided assessment the user is presented with a pre-defined sequence of questions corresponding to his or her role in the assessment procedure. The set of questions evolved through the analysis of various SMT projects within the case studies. By reconstructing those projects, the actors and their preferred criteria for assessing such SMTs became evident. These assessment criteria could then be formalised into the questions that had been asked. Additionally, through interviews and literature reviews, supplemental questions that should have been asked could be identified and integrated into this set.

2.2 The assessment process
The SIAM project followed a process-based understanding of assessments. It did not develop a one-size-fits-all procedure with a strictly defined outcome. It rather developed, based on case studies, a systematic model for assessment processes and a set of tools and methodologies which can be used to conduct an assessment.
The assumption that following a standardized assessment procedure would guarantee a certain outcome, such as enhanced security or increased acceptance, does not seem valid to us. The SIAM tool and methodologies have a more limited ambition and are meant to assist in broadening the scope of the assessment.
The innovation journeys conducted by the SIAM project give a good rationale for such a broadening of scope (see the deliverables in work package 2). A limited assessment, through the fast closure of issues under discussion, a limited set of options under consideration or a limited involvement of stakeholders, carries the risk of excluding relevant issues from consideration too quickly. At a later stage, when the major design or investment decisions have already been made, such issues can pop up again as having relevant implications for the use of the technology, forcing changes which at that point are much more costly. An early consideration of a broad scope of issues, allowing a broad set of stakeholders to raise such issues, maps out all decisions to be made and allows a more comprehensive balancing of the costs and consequences of the choices.

Figure 2: Innovation Journeys

This approach to assessments rests on the observation that any assessment is an inherently political exercise. While assessments carry with them the promise of increasing the rationality of decision making, they always include decisions about what needs to be known, what can be ignored and what is important. We hope that the SIAM tool and methodologies contribute to increasing the rationality of these decisions, but again we emphasize that they cannot rationalize the whole decision-making process. On the other hand, considering assessment a political exercise does not reduce it to a negotiation between mere political beliefs. Impact is not a value-free notion, but stands for secondary side-effects on relevant interests. Impact assessment is a tool to produce knowledge about the impact on a set of interests considered relevant. This set of relevant interests can be assembled in different ways. Legal recognition of such interests, e.g. in human rights law, is one way. Looking at the opinions of passengers and travellers or other stakeholders is another. The impact on these interests can be methodologically assessed.

2.3 The STEFi-approach
SIAM’s approach to incorporating a wide range of such interests is the STEFi-approach. Out of the wide range of assessment criteria collected through the empirical research, four core assessment dimensions have been identified that help to structure the field: Security, Trust, Efficiency and Freedom Infringement.
• Security Dimension
Security in a narrow sense describes the functionality of a product in countering threats and reducing risks. It covers the questions of whether the product fulfils promises and expectations regarding its performance. Evaluation criteria, amongst others, are the detection rate, the false alarm rate and the impact of intended interference or environmental interference. In a broader sense, security is what the stakeholders perceive and define as security. This societal dimension of security has also been included into the assessment support tool by including questions that highlight the socially constructed and changing character of security.
• Trust Dimension
Trust encompasses the experience of the product provider as well as that of those scrutinized by the product. Besides experience, subjective perception defines the way in which a product achieves an appropriate level of acceptance. Assessment criteria for the Trust dimension include, for example, the degree of discrimination involved in the use of the product and its potential physiological and psychological invasiveness, for instance health risks such as DNA damage associated with the ionising radiation used in body scanners, or other effects such as claustrophobia and anxiety attacks.
• Efficiency Dimension
Efficiency covers the economic dimension of the product. Evaluation criteria for this dimension are the product’s life cycle costs, such as purchasing costs, implementation costs, operating costs and disposal costs. It also contains derivative criteria such as opportunity costs and the impact on business processes, for example throughput or false positive alarm rates.
• Freedom Infringement Dimension
The Freedom infringement dimension of security product evaluation depicts the impact of a product on the freedoms and rights of persons. One of the main impacts of security products and services is enhanced personal data collection, processing, sharing and retention. This affects the rights to privacy and data protection. Additionally, security products have a tendency to affect other rights such as the right of self-determination, right to the freedom of movement, right of association; these must all be taken into account in the evaluation of security products.
These overlapping dimensions provide a systematisation of socio-technical security regimes. They involve different, often contending perspectives and activities. However, instead of defining them in very abstract terms, the systematisation approach taken here is different: distinctive not by definition but by resemblance. Its idea is not to wipe out the differences between related aspects or criteria, notions or concepts as they occur in the field, but first to assemble them. For example, numerous notions of security are in use, often depending on the community an actor is associated with or associates herself or himself with. An engineer responsible for the computer infrastructure at an airport has a different understanding of security than a police officer responsible for the airport premises or the airport management focusing on passenger convenience. Thus, instead of claiming to cover all possible notions, the dimensions rest on the resemblance of concepts. From a methodological point of view it is even more important to ensure some conceptual flexibility, as new notions might become relevant while old ones lose their meaning. The purpose of the four dimensions is to structure the field on a first level. At the same time, their conceptual indeterminacy shall allow the widest possible range of involvement in discussing assessment criteria and attributes as well as their mutual relationships.
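As a purely illustrative sketch of this structuring idea, the question pool described above can be modelled as records tagged with a STEFi dimension and the actor roles a question is aimed at. All names, example questions and roles here are hypothetical; the actual SIAM AST implementation is not described at this level of detail in the report.

```python
from dataclasses import dataclass

# The four STEFi dimensions used to structure the question pool.
STEFI_DIMENSIONS = ("Security", "Trust", "Efficiency", "Freedom infringement")

@dataclass
class AssessmentQuestion:
    text: str
    dimension: str        # one of STEFI_DIMENSIONS
    target_roles: frozenset  # actor roles the question is aimed at

    def __post_init__(self):
        if self.dimension not in STEFI_DIMENSIONS:
            raise ValueError(f"unknown STEFi dimension: {self.dimension!r}")

# A small illustrative question pool (questions paraphrased from the report).
POOL = [
    AssessmentQuestion("What is the detection rate of the SMT?",
                       "Security", frozenset({"engineer"})),
    AssessmentQuestion("What are the life cycle costs of the SMT?",
                       "Efficiency", frozenset({"management"})),
    AssessmentQuestion("Which personal data does the SMT collect and retain?",
                       "Freedom infringement", frozenset({"legal advisor"})),
]

def questions_for(dimension, role):
    """Select the questions of one STEFi dimension aimed at a given actor role."""
    return [q.text for q in POOL
            if q.dimension == dimension and role in q.target_roles]
```

The point of the tagging is that new dimensions, criteria or roles can be added without changing the selection logic, mirroring the conceptual flexibility argued for above.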

2.4 The SIAM assessment support tool
The STEFi dimensions allow for a flexible sorting of assessment criteria, which can in turn be amended from case to case and as assessment procedures proceed. The SIAM AST contains a set of questions structured along the STEFi dimensions. A database editor allows new criteria and questions emerging during the use of the tool to be added. The AST is also designed in a way that allows the whole set of questions to be changed in order to adapt the AST to settings other than mass transportation systems. This openness makes the AST flexible in its use and its functionality.
Following a process-based understanding of assessments, the AST does not provide a one-stop impact assessment as an outcome or a product. It is rather a tool to assist assessment processes from the very beginning. The tool has two core functionalities:
1. A consultative functionality. This means that it suggests assessment dimensions and questions that should be integrated into the assessment process.
2. A knowledge management functionality. This means that ongoing assessment processes can be managed and documented using the tool. It structures collected knowledge given by various actors and perspectives along the assessment criteria and presents them accordingly to the assessment participants.
The AST in its current state is focused on the assessment of one specific technology, but it could be developed further to allow for the assessment of a broad range of options to address a problem.
2.5 The focus of the SIAM-project: Security measures and technologies at public transport systems
The empirical research in SIAM and the testing of the AST focused on security measures in transport hubs such as airports or railway systems. A typology of these security measures was developed, which is used as a structuring tool for the AST development. The typology, and the use made of it in the questions raised in the AST, strikes a balance between keeping a broad scope of technologies that can be assessed and allowing concrete problems or issues to be addressed. The AST offers the possibility of raising new questions, through which a more detailed assessment of the technology at hand becomes possible.
The security measure technology (SMT) typology was developed by Kingston University London. It is described in more detail in SIAM deliverable 2.3 and categorises those technologies which address malicious threats to people and their physical infrastructures. The manifold technology options that can be implemented to reduce or prevent risk can be categorised into four major classes depicting their major purposes. The AST then enables subcategorisation into nine categories to narrow down the technologies’ functionalities:

Figure 3: SMT typology
The typology can be summarised as follows:
Threat Detection
Threats are processes which are intended to damage infrastructures and inflict casualties. As in any process, they are composed of actors, activities and tools, each of which can be addressed by a type of SMT.
• Object and Material Assessment SMTs (also: Screening SMTs) are used within security measures to search people, luggage, cargo and airport deliveries to identify possible dangerous or illegal objects and substances e.g. weapons, drugs, or explosive residue.
• Event Assessment SMTs attempt to identify an unfolding incident or to reconstruct it by, for example, using CCTV to detect suspicious behaviour or to spot abandoned luggage.
• People Assessment SMTs are used in measures designed to identify potential malefactors. This includes questioning strategies, profiling methodologies such as background checks of passengers, or asymmetric screening based on demographics.
Access Control
A key component of security policy is to physically restrict access to those with a right of access. Maintaining the integrity of a facility requires establishing identity and the right of access, and ensuring that those with no right of access are excluded.
• Identification SMTs are used to identify people as part of security measures designed to establish access rights.
• Physical Access SMTs relate to the broad category of physical barrier and access technologies such as turnstiles, perimeter fencing, and automated car park barriers.
In general terms, the SMT category Policing refers to those technologies used to maintain an understanding of what is happening within a controlled area and to those technologies which enforce compliance.
• Situation Awareness SMTs include the use of CCTV to monitor an environment and liaise with staff on the ground, and the use of asset management solutions such as Radio Frequency Identification (RFID) tags and readers to track baggage and passenger movements, or automated number plate recognition (ANPR) technology to identify vehicles.
• Enforcement SMTs are technologies used in security measures that respond to some process deviation or detected threat such as ensuring passenger hand luggage is screened or dealing with a detected weapon.
Some technologies are used to enable and support security processes yet are not security technologies in their own right. This general category refers to technologies for controlling and enabling the general function and performance of security measures: the information and communication systems and processes that underpin any security system.
• Process Control SMTs capture the range of technologies that configure the security process including passenger flow, and the selection of security measures applied to individual passengers.
• Information and Communication SMTs capture the computing and communication technologies used for a variety of different security measures within any security regime, such as those which can be found in devices and algorithms for information processing, as well as data transfer and storage.
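The typology above can be captured as a simple mapping from the four major classes to their nine subcategories. This is only an illustrative data-structure sketch; note that the report does not give the fourth, enabling/supporting class an explicit name in this summary, so "Process Support" is used here as a hypothetical label.

```python
# The four major SMT classes and their nine subcategories as a plain mapping.
# "Process Support" is a hypothetical label for the fourth, unnamed class of
# enabling and supporting technologies described in the report.
SMT_TYPOLOGY = {
    "Threat Detection": [
        "Object and Material Assessment",
        "Event Assessment",
        "People Assessment",
    ],
    "Access Control": [
        "Identification",
        "Physical Access",
    ],
    "Policing": [
        "Situation Awareness",
        "Enforcement",
    ],
    "Process Support": [
        "Process Control",
        "Information and Communication",
    ],
}

def classify(subcategory):
    """Return the major class a given SMT subcategory belongs to."""
    for major, subs in SMT_TYPOLOGY.items():
        if subcategory in subs:
            return major
    raise KeyError(subcategory)
```

Such a mapping could, for instance, drive the relevance filtering of assessment questions: a question tagged with one subcategory is shown only when a technology of that type is being assessed.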
Following this brief description of the individual components of SIAM, the next chapter goes into more detail about the underlying concepts and the methodologies developed in this project.

Project Results:
In this chapter the key results are presented in detail. Here the focus is not on the AST itself but rather on the methodologies that have been developed and are crucial for a successful, comprehensive assessment of SMTs. A problem identified in SIAM was often a lack of knowledge about how to find answers to the questions presented within the AST. Therefore the consortium developed usable methods for users of the AST. These are either part of the AST, for example the threat assessment, or are provided for guidance in the toolkit and methodology handbook.
3.1 The SIAM Assessment Support Tool
In 3.1 the sequence of the assessment support tool is briefly explained. A set of screenshots with the most important screens of the tool is provided in the annex (7.1).
3.1.1 User level
The assessment support module is the central functionality implemented in the prototype. Essentially, this part of the software is a front-end to the SIAM database with which users engage in assessment activities. It is realised through different screens that provide active guidance to the user, i.e. a wizard-like interface that ensures all necessary inputs are made, and in the right order, for the next activity to proceed. An overview of the first design of an operationalised assessment support process can be seen in the Appendix of this document.
This main module of SIAM has been further structured into three units. The information specified and collected within each unit contributes towards the final assessment report that is generated as an output of the whole module.
Assessment configuration
The assessment configuration part provides input masks that facilitate the specification of the problem context for which technology options need to be assessed. Several screens are used to collect all the information that comprises the scenario context for the assessment, such as details about the facility for which an assessment is made, the threat to address, the technology options considered, and the corresponding stage in the technology acquisition process. In a second step the user selects the actors that are to be involved in the process and provides information about them and about him- or herself.
Information gathering
The information gathering unit uses the configuration information of an assessment case to gather answers to pre-defined assessment questions from the actors. The display of these questions is controlled by certain relevance criteria, such as the technology considered in a case, or the types of actors at which certain questions are targeted.
Assessment reporting
This unit creates documentation useful for further evaluation and management activities in the overall assessment process. The assessment leader can generate a report which details the problem context, indicates the overall level of progression in the assessment process, and summarises all information collected, grouped by the different tasks and perspectives involved. Further, some assessment scores are provided for the guidance of assessment activities. The assessment report can be re-created at any time as further assessment progress is made.
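One plausible reading of the "level of progression" indicated in the report is the share of relevant questions answered so far, grouped per dimension. The following is a minimal sketch under that assumption; the report does not specify how the AST actually computes its assessment scores.

```python
def progression_report(questions, answers):
    """Summarise assessment progress per dimension.

    questions: {question_id: dimension} for all questions relevant to the case.
    answers:   {question_id: answer} for the questions answered so far.
    Returns {dimension: fraction of its questions that have been answered}.
    """
    totals, done = {}, {}
    for qid, dim in questions.items():
        totals[dim] = totals.get(dim, 0) + 1
        if answers.get(qid) is not None:
            done[dim] = done.get(dim, 0) + 1
    return {dim: done.get(dim, 0) / n for dim, n in totals.items()}
```

Because the indicator is recomputed from the current answer set, re-creating the report at any time, as described above, simply means running it again over the updated data.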

3.1.2 Administration level
User administration
This part of the tool provides facilities for system administrators to create and manage user accounts, and set specific options for each user. For the prototype, a comprehensive system of user permissions will not be required. Here, having individual user accounts serves the primary purpose of being able to create actor-role configurations that can be associated with the user-accounts. In practice, this means that different actors and/or stakeholders could be enabled to log on to the system, participate in the assessment process, or address tasks that have been assigned to them by other users. The prototype does, however, distinguish between different user types, where “Contributors” represent normal SIAM AST users, whereas “Administrators” and “Developers” can also access the SIAM AST Administration Console.
Question Pool Administration
This part provides a comprehensive authoring toolkit for creating and editing screens and screen groups associated to the assessment questions in the system, and setting relevance criteria and conditions.
Advanced Database Options
There are a number of advanced database operations available in the Administration Console that allow the system’s core data to be modified and adapted, enabling installers to exploit its basic mechanisms in different contexts. These options are primarily intended for maintenance purposes, and to facilitate the continued use of the toolkit as a commercial or research tool after the SIAM project.

3.2 Structuring the Assessment Process
In work package two the consortium analysed the case studies and how existing SMTs were being implemented. The result was that a technology is not simply given, but rather the outcome of many assessment activities of different actors during the invention, implementation and adoption phases. The assessment activities of actors decide which questions are asked, what answers are given and which aspects of a technology or a measure are emphasized and analysed. They also decide which criteria are used to assess SMT solutions. Over time the assessment activities lead to emerging irreversibility, and it becomes increasingly difficult to change or modify the chosen assessment path.
As a consequence, it is important to involve as many stakeholders as possible at a very early stage and to agree upon a common list of criteria for assessing the SMT. The SIAM research analysed multiple assessment processes in various contexts and business cultures and was able to identify an ideal-typical procedure for the adoption of a new SMT:

Figure 4: Optimal structure of an assessment process
3.2.1 Gathering information about SMT options
In SIAM, two general ways of achieving a market overview in the security technology sector were identified. The first is one in which a decision to invest in a certain technology has already been made. In the case studies this happened mainly when actors had a particular interest in an SMT, for example in appearing very innovative and modern; the SMT itself was then less important than its reputation. The best practice example, on the other hand, was a case in which a neutral scientist was asked to give a short overview of potential technologies that have reached marketability. This scientist had no affiliations with the site and no personal or economic aim to achieve by favouring a certain SMT.
3.2.2 Defining and involving all stakeholders affected by the SMT option
The process of implementing new SMTs affects various actors within but also outside the organisation. Some actors are rather obvious to identify, for example organisational units such as accounting and IT. Others are less obvious, such as representatives of customers and employees.
To find such actors, SIAM reconstructed “innovation journeys” in the case studies. In practice these journeys trace the implementation process backwards, starting from the final introduction of the new SMT. Protocols of meetings and tenders were analysed and interviews conducted. These activities made it possible to identify actors entering and leaving the process and to discover conflicts.
3.2.3 Discussion and decision on common assessment criteria
Assessing SMTs requires a common understanding of a set of criteria. In practice this step is closely connected to the involvement of all stakeholders. As difficult as it is to acknowledge all relevant actors, it is even harder to incorporate their heterogeneous, interest-driven assessment criteria. Often these criteria arise ad hoc and diverge in their importance for the actors. A best practice identified in SIAM was the development of a criteria list that was circulated within the group of actors. All actors were able to assess, add and weigh the criteria. The result was an agreed common list of criteria that respected the different importance each criterion had for the individual actors.
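The circulated, weighted criteria list described above can be sketched as a simple aggregation: each actor assigns weights to criteria (adding new ones where needed), and the common list ranks criteria by their total weight. This is a hypothetical illustration of the idea; the report prescribes no particular aggregation rule, and actor names and weights below are invented.

```python
def common_criteria(actor_weights):
    """Merge per-actor criteria weightings into one agreed, ranked list.

    actor_weights: {actor: {criterion: weight}} — each actor may add new
    criteria and weigh existing ones. Criteria are ranked by their total
    weight across all actors, so each actor's notion of importance is
    preserved in the result.
    """
    totals = {}
    for weights in actor_weights.values():
        for criterion, weight in weights.items():
            totals[criterion] = totals.get(criterion, 0.0) + weight
    return sorted(totals, key=totals.get, reverse=True)
```

Other aggregation rules (e.g. normalising each actor's weights first so that large organisational units do not dominate) are equally compatible with the circulated-list practice.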
3.2.4 Testing and problem solving of the SMT option and evaluation of the assessment criteria
In this step the SMT should be tested in a limited space and for a limited time. At this stage all actors will be able to identify the effects this new SMT might have on their activities and their interests. This phase should also be used to reassess the chosen criteria. Did new criteria come up? Did the test reveal a different importance of criteria?
3.2.5 Evaluation and comparison of technology options
The assessment leader will receive all statements and assessments by all participating actors and will have the basis for an evaluation and comparison of the technology options.

3.3 A systematic approach towards threat assessment
In work package 6 the Vrije Universiteit Brussels and Tel Aviv University developed a methodology for assessing threats in a comprehensive and structured manner. In the following this systematic approach is presented.
Decision makers need to be aware of possible emerging and developing ‘security threats’ as this constitutes an important part of evaluating SMTs. ‘Security’ refers to a field of practices that intend to provide prevention capabilities and protection against deliberate acts and/or the intentional infliction of harm. Threats, as they are usually conceptualized in risk assessments, refer to the likelihood that a specific type of attack will be initiated against a specific target.
Ideally, we would gather data that allows us to assess the likelihood that a terrorist attack, an organized crime attack, a cyber crime attack, a deliberate act of fraud or corruption, or a deliberate breach of immigration laws will be initiated against or in a site in an airport or mass transportation context, and/or that these sites will be used for such purposes. But such data is not always available. Traditional risk assessments that determine what will most likely happen, or what the most likely scenario will be, cannot always be used, and possible targets cannot be assessed in this manner. The security threats we face are adaptive and flexible, and their covert nature makes it hard to assess capabilities, intent and the exact time of occurrence.
Another option is to develop scenarios that give insights into possible targets for terrorism, illegal migration, cyber crime, transnational organized crime and transnational white collar crime, as well as the impacts those targets could suffer. Scenarios, however, are a rather ambiguous notion. Scenarios are never predictions; they never depict what the future will be like or how events will play out. Scenarios are most often used to create a common vision and a shared understanding of a strategic issue. They are useful for uncovering and questioning decision makers’ (strategic) assumptions, for making them aware of issues or perspectives they do not consider, and for showing what the consequences might be if these issues are left unattended. Scenarios thus invite decision makers to be reflexive about the strategic issues at hand.
In its final report, the 9/11 Commission concluded that improving decision making in security issues in terms of foresight and imagination is not about finding an expert who can imagine what plans might be used for terrorist purposes. It is about improving the quality of decision making. This requires decision makers to make informed decisions and to include information from multiple perspectives, so that tunnel vision and rigidity in decision making can be avoided.
Such informed decision making on security challenges can be based on existing threat assessments. These reports provide information about emerging and developing threats. As they focus on transnational threat trends, however, they do not provide detailed information about threats to specific airports and mass transportation sites. What needs to be explored are the implications of the documented threats for the specific site under consideration.

To develop a pragmatic, easy-to-use methodology allowing us to draft scenarios in a structured and systematized manner, we turned to a qualitative risk assessment methodology that was developed for public transportation purposes in the EU ‘Counteract’ project. Again, this methodology was not chosen to assess what will happen, or to quantify what is actually most at risk. What it allows us to do is understand what local security experts and decision makers consider to be a threat, what the effects of these threats are in their setting, and how they believe SMTs might help mitigate or prevent these threats. Making the understanding, assumptions and threat evaluations of experts and decision makers explicit helps to improve the discussions during the assessment process. In the following section we elaborate on this methodology.

Guidelines for scenario workshops
Objectives and tasks
The scenario workshop shall have the following objectives:
1) To reveal the assumptions of decision makers and security experts about concrete threats. For example, what do threats like terrorism, organized crime, cyber crime, or white collar crime mean? What threats in these fields do they prepare for? What do they expect? What threats are not considered, and why?
2) To reveal assumptions about assets, vulnerabilities and targets. What targets and vulnerabilities do experts identify? What do they believe is most at risk and why? Why are some assets not believed to be a target or at risk?
3) To reveal assumptions about the role that SMTs can and should play in preventing or mitigating the identified threats.
4) To reveal assumptions about the impact and the consequences of the unfolded threat, and the strategies that are believed to be useful to respond to these threats.
5) The methodology suggests that the process will point to two scenarios that need to be developed. The first is a scenario whose occurrence the experts believe to be highly unlikely, but whose impact would be disastrous if it did occur, and which therefore needs to be prevented even if unlikely to happen. The second is a scenario whose occurrence the experts believe to be very likely and whose impact would be disastrous. Both scenarios will be further developed as narratives that describe the unfolding of these events. Again, the point of the exercise is not to forecast, calculate or predict the future. We do not intend to assess or quantify what is actually very likely or highly unlikely to happen. The point is to come to understand why decision makers feel that a scenario is very unlikely or very likely, as this provides insights into how they assess threats, what they believe to be ready for and why, how they reflect on the consequences and impacts, and how they believe these threats can be mitigated or prevented.

In the following we present the methodology used to organize and conduct a scenario workshop.
1) Preparation of the workshop
- A mix of security experts specialized in relevant security fields/threats and security implementers will be invited. Overall, around ten participants should be expected.
- A list of actual threats and assets should be prepared to contribute to the workshop. Discussions will be held around these points.
- An expert will be nominated to guide the workshop and to report on the conclusions.
- Workshops could be complemented by interviews to gain more information.
2) STEP 1: What are the threats?
The first step involves a short brainstorm with the participants of the workshop about threats concerning the topic under discussion, e.g. "If you think about terrorist threats at an airport, what would such a threat be exactly?" What needs to be identified are very specific actions. For instance, from the transnational threat assessments we know that one of the most important trends in terrorism is attacks carried out by an individual or a number of armed individuals conducting an assault or a raid against large crowds. The threat is then "armed assault/raid". Other examples for terrorism are: shootings, improvised explosive devices (IED), dirty bombs, chemical weapons, biological weapons, etc. Some overlaps can be expected: cyber crime, for example, may also be considered organized crime or terrorism, and corruption or fraud can be a threat when discussing illegal migration. These overlaps in themselves will provide important insights for the process.
3) STEP 2: What are the assets, vulnerabilities or targets?
The second step is the identification of important assets, targets and vulnerabilities. The objective is to be as specific as possible: to identify assets based on the importance of their mission or function, groups of people that are believed to be at risk, and significant structures. A useful concept that helps surface such locations is that of nodes: locations (virtual or physical) where flows of goods, people, energy (e.g. fuel, electric power units), capital and information come together. An airplane, a security control centre, the entrance hall of an airport, a train station at an airport, and banks are all examples of nodes.
4) STEP 3: creating a matrix
In the third step, the results of the previous steps are combined to create a matrix (see the terrorism example in Figure 5).

Figure 5 – Threats and assets
Assets                          | Chemical weapon | Armed assault | IED
Entrance hall                   |                 |               |
Security control centre         |                 |               |
Underground station with trains |                 |               |
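The three steps above can be sketched in a few lines of Python. This is only an illustrative sketch; the threat and asset names are taken from the terrorism example, and the code is not part of the SIAM toolkit itself:

```python
# Build an empty threat-asset matrix (STEP 3) from the lists
# produced in STEPs 1 and 2. Each cell will later hold the
# workshop's qualitative (likelihood, severity) assessment.
threats = ["Chemical weapon", "Armed assault", "IED"]
assets = ["Entrance hall", "Security control centre",
          "Underground station with trains"]

matrix = {(asset, threat): None for asset in assets for threat in threats}

print(len(matrix))  # 9 threat-asset pairs to assess
```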

5) STEP 4: assessment of the probability of occurrence
- For each threat-asset pair, participants need to reach a consensus about the probability of occurrence they attribute, ranging from very unlikely to very high.
- Probabilities are collected

The 4th step requires the participants of the workshop to indicate what they perceive to be the probability of occurrence, ranging from very unlikely to very high. The point of this step is not to surface actual probabilities. This is a qualitative assessment in which we attempt to surface decision makers' or experts' assumptions about whether the probability of occurrence of a particular threat is very unlikely, low, possible, high, or very high (see Figure 6). As such, the exercise allows us to understand what threats they believe to be possible. For each square in the matrix, the workshop leader needs to reach a consensus among the participants about the assessment that is made.

Figure 6 – Example for terrorism – Likelihood
Assets                               | Chemical weapon | Armed assault | IED
Entrance hall                        | very high       | high          | possible
Security control centre              | low             | very unlikely | very unlikely
Airplane with passengers on air side | very unlikely   | very unlikely | very unlikely
Underground station with trains      | possible        | possible      | possible

6) STEP 5: assessment of impact and severity
The 5th step requires the participants of the workshop to indicate what they perceive to be the impact and severity of each threat. As we are facing quite different security threats, their impact and severity should be assessed in terms that fit the nature of each threat. The categories used to measure severity are: uncritical, marginal, significant, critical, disastrous. We introduced the following societal dimensions that could be affected if a threat is realized: People, Infrastructures, Environment, Economy, Political system and Values (see details in the following). Not every dimension is relevant in each case. For organized crime, for example, we can focus on the profit that can be made, the political system and values. Once the relevant societal dimensions are determined, the impact on them can be assessed and the severity estimated. The severity is then an integrative evaluation of the assessed impacts, yielding a severity level on a scale of 1-5. The aim of this step is to reveal decision makers' or experts' assumptions about what they believe to be the impact and severity of each threat within the limits defined in the previous paragraphs.
Figure 7 – Example for terrorism – Likelihood and severity
Assets                               | Chemical weapon            | Armed assault              | IED
Entrance hall                        | very high / disastrous     | high / critical            | possible / critical
Security control centre              | low / disastrous           | very unlikely / critical   | very unlikely / disastrous
Airplane with passengers on air side | very unlikely / disastrous | very unlikely / disastrous | very unlikely / disastrous
Underground station with trains      | possible / disastrous      | possible / critical        | possible / disastrous
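The report describes the severity level as an "integrative evaluation" of the per-dimension impacts without fixing the aggregation rule. The following Python sketch shows one possible (assumed, not prescribed) rule: taking the maximum impact across the relevant societal dimensions, on the grounds that a disastrous impact on any single dimension arguably makes the threat disastrous overall:

```python
# Sketch of the integrative severity evaluation (STEP 5).
# The max-aggregation rule is an assumption for illustration only.
SEVERITY = {"uncritical": 1, "marginal": 2, "significant": 3,
            "critical": 4, "disastrous": 5}

def integrate_severity(impacts):
    """impacts: dict mapping a societal dimension -> severity label."""
    return max(SEVERITY[label] for label in impacts.values())

# Hypothetical example assessment for one threat-asset pair.
impacts = {"People": "disastrous", "Infrastructures": "critical",
           "Economy": "critical", "Political system": "significant"}
print(integrate_severity(impacts))  # 5 (disastrous)
```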

7) STEP 6: creation of risk categories
The 6th step involves the creation of risk categories. These categories are created by combining the probability of occurrence with the impact/severity assessment: the resulting scores are the product of the probability of occurrence and the severity. The table is only intended as a tool to visualize and prioritize the different threats (see Figure 8). This step is important for the selection of scenarios that need to be developed in step 7, and in addition the table will provide valuable information for drafting the combined scenario threat report. In this stage we assess the severity of the threat based on its impact on several societal dimensions, as detailed in step 7. As seen in the table, we use scales of 1-5 to assess both the probability of occurrence and the severity of the threat. See also step 7, section 3, for further explanation of the impact assessment.

Figure 8 – Risk assessments for terrorism
(rows: probability of occurrence, 1-5; columns: severity/impact, from uncritical (1) to disastrous (5); entries show the risk score, i.e. probability × severity)

Very high (5):     Chemical weapon / entrance hall (20)
High (4):          –
Possible (3):      Armed assault on underground station with trains (9)
Low (2):           –
Very unlikely (1): Airplane / IED (4)
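The scoring in STEP 6 can be sketched as follows. The mapping of the verbal scales to 1-5 follows the figures above, while the function names and the selection of example cells are illustrative assumptions, not part of the SIAM toolkit:

```python
# Risk categories (STEP 6): risk score = probability x severity,
# using the 1-5 scales from the figures. Example cells are taken
# from the terrorism example in Figure 7; this is an illustrative
# sketch only.
LIKELIHOOD = {"very unlikely": 1, "low": 2, "possible": 3,
              "high": 4, "very high": 5}
SEVERITY = {"uncritical": 1, "marginal": 2, "significant": 3,
            "critical": 4, "disastrous": 5}

assessments = {
    ("Entrance hall", "Chemical weapon"): ("very high", "disastrous"),
    ("Underground station with trains", "Armed assault"): ("possible", "critical"),
    ("Airplane with passengers on air side", "IED"): ("very unlikely", "disastrous"),
}

def risk_score(likelihood, severity):
    return LIKELIHOOD[likelihood] * SEVERITY[severity]

# Prioritize threat-asset pairs by risk score (highest first),
# as a basis for selecting the two scenarios developed in STEP 7.
ranked = sorted(assessments.items(),
                key=lambda kv: risk_score(*kv[1]), reverse=True)
for (asset, threat), (lik, sev) in ranked:
    print(threat, "/", asset, "->", risk_score(lik, sev))
```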

8) STEP 7: writing scenario narratives
- For each category two scenarios are selected: a 'very high/disastrous' scenario and a 'very unlikely/disastrous' scenario.
- The scenario narratives provide information about how the scenario unfolds, as well as about the reasons for the experts' assessment.

The 7th step is the final step, in which the two scenarios are developed as narratives. The information that can be derived from the entire process is detailed in the following section.
Scenario framework
Scenarios are built using the following building blocks:
1) How the events unfold
The description of how the events unfold should contain the following elements:
- Threat actor(s): Who is involved in a threat?
- Threat activities: How is a threat being carried out exactly?
- Threat tools: what weapons or other tools are used?
- Threat technologies: the general technologies associated with the activities or tools
- Assets, vulnerabilities and nodes: describe the setting; and explain why they are felt to be assets, vulnerabilities or nodes

2) SMTs description
The description of SMTs should include the following elements:
- Existing security measures at the location
- Security actors: Who supervises or operates in this area? Who has which responsibilities?
- Security activities: Which security procedures are in place?
- Security tools: What kinds of devices are used?
- Security technologies: Type or category of technologies

3) Impact of the unfolded threat
The description of impact/severity refers to the consequences of the unfolded threat. In the 5th step we applied a broad definition of impact/severity that was directly related to the security threat. The scenario narratives allow us to widen the scope of consequences: we do not only focus on the immediate consequences of the threat (e.g. for organized crime we suggested assessing the impact in terms of the profit that was made). In this phase the scope can be widened, and participants can reflect on the following focal issues (societal dimensions) to surface that information (not all issues apply to each security threat):
- People: what are the consequences of the threat to the people involved?
- Infrastructures: what are the consequences of the threat to the infrastructure?
- Economy: what are the economic consequences of the threat?
- Environment: what are the consequences of the threat for the environment?
- Political system: what are the political consequences of the threat?
- Values: what are the consequences for societal or ethical values?
The impact on the several societal dimensions helps assign the severity score; this is a qualitative process that finally yields the severity level.

4) Proposals to mitigate or prevent the threat from unfolding again
The description of proposals to mitigate or prevent the threat from unfolding again requires the participants of the workshop to reflect on how they would respond after the threat has unfolded.
- Security actors: Are there additional employees required? Should additional responsibilities be assigned to existing actors? Should we recommend certain actors to cooperate with other actors?
- Security activities: Are there any changes to make in certain procedures? Do we need new procedures to cover for the threat? If improved cooperation between certain actors is suggested, what exactly does this mean in terms of activities?
- Security tools: Can the existing tools be configured to cover the threat? Do we need to acquire new devices or tools for screening, identification, etc?
- Security technologies: The general name of the (new) technologies associated with these activities or tools.

3.4 A systematic approach towards assessments of the effectiveness of SMTs in terms of security

Work package seven (TUB) has shown that crime maps are not simply tools representing an objective image of crime, but tools incorporating both previously existing data about crime and ideas about how it should be dealt with. Crime maps include various strands of criminological crime pattern theories and are used as tools for the planning and allocation of resources. The different rationales point towards two questions related to the use of crime pattern analysis:
1. In the context of urban train transport security, it provides answers to the question where resources should be allocated.
2. At airports, most policing resources are already available at the different security areas, so the question here is not so much on where to use them, but on how or against whom to use them.
Different rationales of selecting and using security technologies in the contexts of urban train transport security and airport security can be distinguished and analyzed. One characteristic is the different role of crime pattern and threat pattern analysis. In urban train transport security, it serves mainly to identify so-called 'hotspots', the types of crime recorded at these locales, and a more or less vague categorization of victims and offenders. In the context of airport security, threat pattern analysis typically leads to a profiling of passengers. Another major difference is the emphasis on passengers' perception in urban train transport security discourses, while economic considerations are emphasized in all areas of airport security.
Both questions imply different definitions of security and different dimensions of trust, efficiency and freedom infringements. This affects the way that the behavior of passengers becomes normalized and that groups of people are targeted and excluded. Perhaps the most obvious difference is the basis for interventions in both rationales. In order to better understand this basis, it is helpful to distinguish between the anticipative concepts of precaution, pre-emption and preparedness on the one hand, and prevention on the other. The three anticipative concepts stand for a gradual decrease of the threshold for interventions, thus bearing a potential increase of freedom infringements compared to preventive security measures. The rationale analyzed for urban train transport security can be characterized as preventive, whereas airport security increasingly becomes anticipative.
Work package three (TUB / UNEW) has investigated how the effectiveness of security measures and technologies can be and is being assessed in terms of increasing security. This research has brought out the inherently political dimension of impact assessments and highlighted some of the ambiguities at play when it comes to determining frequently occurring and dangerous criminal actions, as well as in evaluating the impact of security technologies on security. Frequently occurring criminal actions like theft do not necessarily spark the introduction of new security technologies. The latter requires the construction of the dangerousness of criminal actions, which involves changes in a certain context/space where resources are contested and where public imaginations of dangerousness come into play, creating a demand for an altered way of policing. Technology is often a quick answer in such a case. At the same time, it is often unclear or forgotten what exactly the initial question was that led to that answer. Assessing the impact of security measures and technologies on security thus often leads to the question: technology is the answer, but what was the question?
Assessing the impact of security technologies on criminal actions raises questions about how security is understood and how technologies are thought to relate to security. Three ways of managing this area of ambiguity have been reconstructed in work package three. In the first case, security remains a contested concept and the impact of a technology on security remains vague. In the second case, security has been defined as an 'adequate' problem and the impact of a technology can be clearly assessed. In the third case, a security problem is being constructed in order to provide a use-case for a technological solution.
Generally speaking, it is important to distinguish the rationale of the security measure from the beginning of the assessment. For example, crime prevention does not necessarily involve the detection of crime. This is crucial for the assessment of the effectiveness of security measures, because the detection of crime appears to be the least likely use case of security measure technologies. Rather, the most likely use case is the detection of suspicious and possibly threatening actors, tools and activities. The likeliest effect in terms of prevention is interruption and the so-called general preventive effect, which is much more difficult to measure.

Summarizing his experience of the political dimensions of security technology assessments, Brian Rappert has suggested that
"a fruitful line of analysis regarding the relation between technology and politics is to examine the way in which the ambiguities associated with technologies are managed, and the manner in which the distribution of ambiguity helps constitute technology."
A methodology to assess the impact of security measures and technologies should therefore aim to understand how knowledge about the assessment is being produced and how this shapes the overall result of the assessment. This involves both the consideration "of the adequacy of the approaches offered, and their ability to inform practical matters."
Work packages three and seven have provided important requirements for the development of a methodology to assess the impact of security technologies on security. The methodology requires the stakeholders to understand how a certain way of assessing the impact of a technology is constituted and how it has become dominant. This means understanding and critically reflecting on the overall security narrative inherent to the impact assessment, including how crime is being imagined. The narratives to be reconstructed should be analyzed in terms of how a certain way of assessing the impact of a technology on security becomes dominant, including how security is being understood. The following questionnaire can be used either for a number of interviews or for a workshop with end users, security personnel and other stakeholders in order to produce the data needed to reconstruct the narratives:

Security Impact Assessment Questionnaire
1. Mapping frequent and dangerous criminal actions
◦ What are the most frequent criminal actions?
◦ Why do they occur frequently?
◦ What are the most dangerous criminal actions?
◦ What makes these actions dangerous?
2. Available Security Measures and Technologies (SMTs)
◦ What kind of SMTs are being operated to deal with these criminal actions?
◦ Are there any major technological innovations that have been introduced?
◦ Is any technological innovation expected that will enhance the possibility to deal with these criminal actions?
3. Impact of SMTs on criminal actions
◦ In which way have the SMTs contributed to security, and are there different dimensions of security affected?
◦ What is the impact of SMTs on crime? How were the number and the nature of crime affected since the SMTs are in place?
◦ How is the impact of the SMTs on threats and crimes assessed / measured?
◦ When is an SMT ineffective, i.e. when does it not improve security as foreseen?
◦ How do notions of crime and security change in the course of the introduction of SMTs? Has the way of measuring and assessing crime and security changed?
◦ Which unintended consequences have been observed after the implementation of the specific SMT?
▪ Unintended Consequences on criminal actions?
▪ Unintended Consequences on freedoms?
▪ Unintended Consequences on organizational routines (function creep)?
◦ To what extent have the promises of SMTs been delivered?
3.5 A systematic approach towards assessing freedom infringements

In WP4 and WP8 a methodology was developed to map and assess how an SMT infringes freedoms.
In order to describe the normative impact of technology, the notion of technological normativity was developed. On this understanding, normativity is not limited to the legal field: the normative impact of technology lies in the way it induces or enforces certain types of behavior and/or inhibits or rules out other types of behavior. Technological normativity looks at how technology impacts the behavior of people and, as such, has an effect similar to that of legal norms.

Dimensions of technological normativity
To map this impact, four dimensions of technological normativity are discerned: scope, intrusiveness, coerciveness and distribution.
Scope has been defined as the normative impact of a security measure in terms of space and time. At first glance, many measures seem to have a locally contained impact, and physical security measures often last only a few minutes. But in some cases persons can be physically or psychologically affected long after the actual treatment with the SMT. Also, data gathered locally can be used in other contexts and for other purposes. The impact of data gathering may thus gain a larger scope, both in terms of physical spaces and contexts (work, leisure, home, health, religion) and in terms of temporal spans, notably a person's individual biography.
Intrusiveness measures the magnitude of the impact of an SMT. This concerns both the impact on the person in physical and psychological terms and the impact on that person's data double.
The first aspect concerns the direct physical intrusion at or into the body of an individual, and can be scaled from intrusion into the body to being touched, undressed, or seen, … As part of the impact we also have to consider the psychological effects: feelings of being hurt, damaged, or affected by the SMT. Such negative feelings can have longer-lasting effects than the actual physical impact.
The other aspect is the impact on the data double, or more precisely the data double created by the SMT. The more intimate or detailed the information gathered or used, the more intrusive the SMT. This aspect of intrusiveness does not in itself depend on whether a person is aware of the specific impact.
Coerciveness describes the degree of compulsion associated with a particular measure/technology, or in other words how much agency an individual may exert over being monitored by a security measure/technology. It looks into the range of behavioral choices the technology makes possible, allows and makes impossible.
Distribution points to the fact that, even though a technology functions independently of social factors, its impact can differ widely across social roles or social groups. Different categories of people can be affected by SMTs in different ways: the coerciveness, intrusiveness and scope of an SMT can be much larger for specific groups than for the 'average' person. This may cause subtle discrimination in the extent to which various freedoms, such as bodily integrity or privacy, are infringed.
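As an illustration only (the 1-5 rating scale, the numbers and the group names below are assumptions, not part of the SIAM method), the four dimensions can be captured in a small data structure in which scope, intrusiveness and coerciveness are rated per social group, so that distribution effects become visible:

```python
# Minimal data-structure sketch for the four dimensions of
# technological normativity. Ratings and group names are
# illustrative assumptions on a hypothetical 1-5 ordinal scale.
from dataclasses import dataclass

@dataclass
class NormativityProfile:
    scope: int          # spatial/temporal reach of the impact
    intrusiveness: int  # physical/psychological/data-double impact
    coerciveness: int   # degree of compulsion / remaining agency

# 'Distribution' is captured by rating the other three dimensions
# separately per social group, so group-specific burdens surface.
body_scanner = {
    "average traveller": NormativityProfile(scope=2, intrusiveness=3, coerciveness=3),
    "religious minorities": NormativityProfile(scope=2, intrusiveness=5, coerciveness=4),
}

# A large spread between groups flags a distribution problem.
spread = max(p.intrusiveness for p in body_scanner.values()) - \
         min(p.intrusiveness for p in body_scanner.values())
print(spread)  # 2
```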

Typology of freedoms
Secondly, a typology of freedoms has been derived from human rights law, defining the main freedoms with which the SMT can interfere. These freedoms reflect the areas of human behavior considered as protected by human rights law.
These freedoms do not equal the human rights themselves. They are not legal norms or notions, and do not cover the same areas as the human rights norms from which they are derived. What is labeled as freedom is a simplified, proto-legal notion of what is protected by the human rights. These freedoms are ‘common sense’ concepts, referring to these human rights, but which are used without the specifications, qualifications and nuances of human rights law.
Further, what gets labeled here as freedom infringements are not necessarily violations of human rights, but rather interferences with these protected rights or freedoms. An interference does not necessarily qualify as a violation of a legal right. If, however, an interference is an infringement that violates one of the codified human rights, it is prohibited unless it can be justified on the basis of the applicable limitations. Mapping the impact of SMTs on the freedoms does not equal mapping violations of human rights; rather, it provides the facts needed to assess the proportionality of an infringement by an SMT and whether a violation takes place.
For the assessment it is important to recognize when and how an SMT interferes with a freedom, even when such interference is not a violation in the legal sense. The assessment checks whether such interference is necessary and how it can be minimized. It is meant to make all options visible and to enable a proportionality check: not just to find out which options are legal, but also to allow a choice between legally available options in order to minimize the nuisance. These two notions were further operationalized in a concept table of freedom infringements (see Annex 7.2).

The questions in this table are not differentiated for specific SMTs or categories of SMTs. In several workshops the normative impact of SMTs on the freedoms was further investigated. The aim was to relate the concepts in the freedom infringement table to practical use cases or technologies, using the knowledge and experience of users and experts.
In WP4 focus-groups with legal experts and civil society organizations made an inventory of infringements and measures to reduce such infringements. In WP8 similar issues were investigated through the development of scenarios.
These methodologies can also be used when users of the SIAM AST want to derive more specific questions and issues related to a specific SMT, or to further explore which CITs can be used to diminish the impact on the freedoms.

a) Expert focus group
The workshops in WP4 had the primary aim of developing a catalogue of infringements, questions, and mitigation measures. The workshops are brainstorming activities, and as with any such activity participants should be allowed to generate as many responses as possible. The workshops can be conducted in essentially three steps in which participants will be asked to enumerate 1) infringements, 2) questions, and 3) mitigation measures. The aim of these steps is to generate a pool of issues, questions, and mitigation measures for each category.
Step 1: The purpose of this step is to brainstorm the range of potential infringements associated with each technology in the typology. These could be breaches of legal rules or of cultural norms and values. Partners should strive to enumerate as many issues as possible and be as specific as possible. For example:
Example 1: Surveillance cameras may be used to focus disproportionately upon ethnic minorities
Example 2: Body scanners reveal the body in a semi-nude state
Example 3: Pat-down searches may be offensive to women if conducted by a man

Step 2: The purpose of this step is to translate the infringement issues outlined above into questions that should be asked during the acquisition and implementation phase of a technology. This should be a straightforward process that will bridge Step 1 with Step 3.
Example 1: How can we reduce the potential for surveillance cameras to discriminate against ethnic minorities?
Example 2: How can we reduce the privacy issues associated with body scanners?
Example 3: How can we avoid offending gender or religious sensibilities when conducting security pat-downs?

Step 3: Using the concepts of scope, coerciveness, intrusiveness and distribution, generate a list of technologies, rules, or procedures that could be implemented to mitigate the infringements identified in Step 1. As the issues in Step 1 were converted into questions in Step 2, Step 3 can be seen as an effort to answer those questions. It may make sense to focus on one or two of the infringement dimensions, but partners should strive to generate counter-infringement measures for all dimensions.
Example 1: Privacy-enhancing algorithms may be used to blur faces or invert the video’s color spectrum, thereby reducing the intrusiveness of surveillance cameras. The scope of cameras can be restricted by limitations on the duration for which images are stored.
Example 2: The privacy issues associated with body scanners can be reduced primarily by focusing on scope and intrusiveness. Operators who conduct the scans should be limited to a small number of professionally trained staff, they should not be within visual sight of the individuals being scanned, and images from scans should not be recorded and/or retained in any way. The coerciveness of body scanners can be minimized by offering an alternative security procedure such as a pat-down search.
Example 3: The higher intrusiveness of pat-downs for specific groups may be minimized by having them conducted by persons of the same gender and/or religious identity.

b) Scenario workshops
The workshops of WP8 approached these issues through the development of freedom scenarios. These freedom scenarios are narratives that depict the types of freedom infringements that may occur in relation to different kinds of SMTs; they illustrate how these infringements could unfold along the four freedom infringement dimensions and how they might be mitigated.
The workshop is conducted with 6-10 people who bring a range of expertise into the discussion, including on civil rights. Diversity of the participants is important, e.g. academia, civil society, industry, SMT expertise. The output of the workshop is one 'worst-case' scenario and one 'best-case' scenario for each SMT or SMT type under consideration.
For every SMT selected there are two main focal issues:
• What are the most pressing infringement problems of the SMT? (worst-case scenarios)
• How can these infringement problems be mitigated /settled? (best-case scenarios)

1. Worst-case scenarios

For the purpose of the workshop, experts should discuss and agree on what the main issues are concerning freedom infringements of each SMT. To structure the workshop discussion the concept freedom infringement table can be used, as well as results from workshops as described above.
In the workshop experts are asked:
(i) whether they agree with the findings from earlier workshops, if conducted, and whether additional information can be provided. In addition, participants are invited to discuss the normativity dimensions of each SMT, using the freedom infringement table.
(ii) whether they can prioritize the infringements: what infringements are the most pressing ones? We found that it can be helpful to ask experts to briefly summarize in layman's terms what the key infringement problem is regarding a particular SMT/SMT type. For some SMTs all seven infringements may be felt to be a pressing problem; for others, specific issues are of more concern than others.
(iii) to combine the infringement information into one coherent story that explains what the infringement problem of the particular SMT is. This results in a 2-page narrative for the worst-case scenario. The reader should be able to understand and grasp the infringement problems that the particular SMT poses.

2. Best-case scenarios

After discussing the freedom infringement issues there should be a discussion about how these problems can be mitigated/settled. What are potential Counter Infringement Technologies (CIT)? CITs are applications or architectures of SMTs that rule out, inhibit or diminish freedom infringements. Such CITs are part of the broader set of Counter Infringement Measures (CIM), which broadens the search for mitigating measures by looking at the implementation and the context in which the SMT functions. CITs and CIMs can be discussed in the workshop in terms of the following sensitizing concepts: regulation and policy, planning, technology, awareness and transparency, accountability, redress, human factor - security officers, infrastructure, …
The last step is to combine the CIT information into one coherent story that explains how the infringements might be mitigated. This results in a two-page narrative for the best-case scenario. The reader should be able to understand the various ways to counter infringements related to an SMT.
3.6 A method for the legal evaluation of security measures in public transportation
3.6.1 Introduction
This section of the report presents a method for the legal evaluation of security measures in public transportation. The basis for this method is the KORA method, from the German ‘Konkretisierung rechtlicher Anforderungen’ (concretisation of legal requirements). Its creators are Roßnagel and the research group ‘provet’ (Projektgruppe verfassungsverträgliche Technikgestaltung – Project Group for Constitutionally Compatible Technology Design). It was developed and first used for the constitutionally compatible design of ISDN communication systems, i.e. in the field of information and communication technologies. Subsequently, it was applied to multimedia documents, the purchasing of goods via the internet, the handling of personal data in the context of individualisation, process management systems in public administration, internet voting and many other fields.
The following chapter will first present the KORA method and then demonstrate how it can be used for the evaluation of security measures in public transportation.

3.6.2 Basic Principles of KORA
Development of new technologies usually takes place without taking into account the legal aspects of the use of the final product, focusing instead on functional efficiency and serviceability. Designing technologies is a process characterised by the selection of individual design choices. Throughout the process of technology genesis and development, decisions have to be made and their impacts, including legal impacts, have to be evaluated. The KORA method, as a rule-based approach for the normatively guided design of technology, supports decision makers by helping them choose those design options that are best suited to fulfil legal requirements. KORA has been conceived to come into play during the design phase of technology development, after a technology has been defined beyond the early stages of conceptual development. This means that there already has to be some idea about the composition and capabilities of a technology, i.e. ideally after an early prototype has been constructed. Its aim and effect is to avoid, or at least minimise, the risks inherent in a technology. Risk in this context means any negative effect that a technology might have. The second aim is achieving or strengthening chances, meaning positive consequences.
KORA is based upon the most permanent legal norms, which – through their fundamental and technology-neutral nature – provide a framework for future societal developments. In the Federal Republic of Germany such norms can be found in the Constitution or “Basic Law” (Grundgesetz). Subconstitutional law derived from it is not suitable as a starting point for the KORA method, as it can only be technology-neutral to a certain degree. Due to the rapid progress of technology it quickly becomes outdated and thus cannot be used for the compilation of long-lasting guidelines. In addition, it is only concerned with a small part of the effects of technology usage. The life expectancy of such subconstitutional laws, especially those concerned with the use of technology, is therefore limited. The constitution, however, and especially its core, the fundamental rights and principles, is long-lasting and offers a much more future-proof basis. In addition, it serves as a guideline for the interpretation of subconstitutional law (rule of constitutionally compatible interpretation). This is true not only in Germany, but in any legal system based on a hierarchy of norms, as the constitutional norms are the consented objectives of a society. It is easy to agree that technology should be socially acceptable; the quarrel begins where it has to be decided what it means to be socially acceptable. But if the definition is based on constitutional norms that society has already agreed upon as its objectives, then consented objectives for a conflict-avoiding technology design are already predetermined. This underlines the logic behind using the constitution as a basis.
However, the constitution does not contain statements that are directly applicable to technical systems. This means that the fundamental rights cannot be the immediate basis for the evaluation and the design of technology; they have to be concretised. This is where the previously described rules of interpretation come into play. One has to keep in mind though, that the aim is not to ascertain the legality of the technology, but its legal compatibility. Ascertaining the legality of a technology means nothing more than saying that the use of a technology would be legal or illegal. In that case, there would be only black and white, which means that this approach is too narrow. In contrast, legal compatibility is a broad approach which allows a grading: a technology can be more legally compatible or less legally compatible. It is thus a qualitative approach that allows for a differentiation within the concept of legality. This means that it is not identical with legality and not the opposite of illegality. Constitutional compatibility, as legal compatibility in relation to the constitution, means the compatibility of the social requirements and the impact of technological changes with the objectives of the constitution. The term is thus mostly synonymous with social compatibility, as social compatibility is defined as the compatibility with the objectives and standards of a society, whereas the law – and particularly the fundamental rights and principles – is the embodiment and formalisation of these objectives.

Figure 9 – The qualitative approach of the KORA method
By using the means of concretisation of constitutional norms, KORA faces the challenge of closing the description gap between broad and unspecific legal requirements – as found for instance in general clauses – and concrete design proposals, because such proposals cannot be found in abstract general clauses. To this end, the general clause, or in this case a basic right, is concretised over several steps. Thereby only the legally relevant part is covered, not the entire functionality of the technology or measure. This requires an interdisciplinary discourse between legal researchers and technicians: a dialogue of disciplines. This cooperation can be seen as one of the core elements of KORA. It is the only way to ensure that the expertise of both sides is channelled into the design process, because technology and law each have an individual terminology of concepts and are thus separated by a seemingly impenetrable language barrier. Overcoming this barrier between professions is both an integral part of KORA and one of its goals.
KORA must not be misunderstood as an automatism for the generation of technological solutions to legal problems. Rather, the outcome of its use can depend on the attitude of the user. This is due to the fact that different interpretations of legal norms exist. This effect can be minimised where the user follows the majority position when faced with a controversial question, especially the rulings of the higher courts, specifically the constitutional court (in Germany the Bundesverfassungsgericht as the guardian and supreme interpreter of the German constitution). This approach is further advocated by the fact that it strengthens the result of the examination. Still, the use of KORA will yield different but congeneric results, varying from user to user. This is a desired effect, because KORA does not strive to be an automatism, but a guideline that allows for different emphases. The structured composition of the method guarantees traceability. Thus, the results of its use are derived in a clear way and become a subject for discussion.
The use of KORA is composed of four steps. The starting point is the relevant constitutional norms, which have to be identified and selected in a preliminary stage. What follows is a step-by-step concretisation of the fundamental legal provisions identified in the preliminary stage: first into legal requirements, then into legal criteria, in the third step into technical objectives and finally into technical design proposals. The abstract legal requirements become more concrete with every step. Between the legal criteria and the technical objectives there is a shift from the terminology of the law to the terminology of technology.
As an exception, subconstitutional law may under certain circumstances also be used as a basis for KORA, where it contains constitutional goals in the form of abstract general clauses. An example for this is § 3a of the German Data Protection Act (Bundesdatenschutzgesetz) which demands data reduction and data economy. This is a concretisation of the general right to the protection of personality via the right to informational self-determination.

The four-step structure of the KORA method:

Figure 10 – Decision making using the KORA method
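As an illustration of the four-step structure, the running example used throughout this section (informational self-determination, concretised down to a display of the data-handling history) can be written out as a simple chain. The mapping below is a sketch for illustration only; KORA itself prescribes no software representation, and the key names are invented here:

```python
# Illustrative sketch: the KORA concretisation chain from the report's running
# example, written as an ordered mapping. The wording of the values follows the
# text; the key names are assumptions made for this sketch.
kora_chain = {
    "fundamental_provision": "right to informational self-determination",
    "legal_requirement": "protection of personality against the risks of electronic data processing",
    "legal_criterion": "transparency of any data collection and data handling",
    "technical_objective": "document any kind of data collection and handling",
    "design_proposal": "display showing the history of data collection and handling",
}

# The shift from legal terminology to technical terminology occurs between
# the legal criteria and the technical objectives:
legal_side = ["fundamental_provision", "legal_requirement", "legal_criterion"]
technical_side = ["technical_objective", "design_proposal"]
assert list(kora_chain) == legal_side + technical_side
```

Each step in the chain is strictly derived from the one before it, which is what makes the final design proposal traceable back to a constitutional norm.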
The individual steps and stages will be presented in the following chapter in more detail.

Legal Requirements
The legal requirements are the result of the first step of concretisation and are derived from the fundamental legal provisions. They are the product of the legal interpretation of social functions that are affected by the technology being evaluated. This makes it necessary to establish a relation between the fundamental legal provisions and the social functions of the technology. The basis for this is a description of the chances and risks, created by the technology being evaluated, for the social functions behind the fundamental rights. Thus, the possible chances and risks have to be identified and examined in a side-step. All in all, the goal of the first step of the use of KORA is to create legal norms that have been specified for the technological environment. The legal requirements are expressed in legal terminology.
One example for the concretisation of a legal requirement from a fundamental legal provision is the right to informational self-determination, derived from Art. 2(1) of the German Basic Law in conjunction with Art. 1(1) in the so-called Census decision (Volkszählungsurteil), as a concretisation of the protection of personality in view of the risks of electronic data processing. Another possible fundamental provision is Art. 8(1) of the ECHR.

Legal Criteria
Legal criteria stem from the concretisation of legal requirements. They are closer to technology than legal requirements, but still on the legal side and formulated in legal terminology. They describe solutions for the problems within the legal requirements, but without a limitation to a certain concrete technological, organisational or legal approach. All technical and non-technical possibilities for solutions still remain possible at this stage.
In this step, the right to informational self-determination could be concretised into the principle of transparency of any data collection and data handling.

Technical Objectives
During the concretisation of legal criteria to technical objectives, the terminology used changes from legal terminology to technical terminology by converting legal terms into technical terms. Therefore, significant cooperation between legal researchers and technicians is necessary. The technical objectives are derived by looking for the most basic functions that the technology has to have in order to fulfil the demands set by the legal criteria. The technical objectives are thus too abstract to be implemented directly. They are nothing more than rough technical target specifications.
In this step, the principle of transparency could be further refined into the technical objective to document any kind of data collection and handling.

Technical Design Proposals
The technical design proposals are derived from the technical objectives. They are a collection of measures for direct implementation into the technology. They are regularly not without alternatives; they should be seen as proposals, as their name indicates. This means that the catalogue of measures created in this last step can contain several alternative solutions for an individual problem. This is due to the fact that the aim of KORA is not to create a coherent system design. In fact this cannot be the case as KORA only looks at those aspects of a technology that are legally relevant. However, the proposals developed should be fit for direct implementation. This means that they have to be concrete enough that they could become part of a technical specifications sheet. Their implementation may not be strictly necessary from a legal point of view, but it should at least be desirable. This is due to the fact that the results of the use of the KORA method have been designed to fulfil fundamental legal provisions in the best way possible which means that they can be above the legally required minimum standard.
During the creation of a technology, the technicians, engineers, etc. involved can work towards the implementation of these measures. But it is also possible to use the results of the application of KORA to compare them with products readily available on the market. This comparison will show any deficits a product might have, i.e. the degree of the product’s compatibility with fundamental legal provisions.
The example that illustrated the individual steps of the KORA method culminates here with the proposal to implement the technical objective to document any kind of data collection and handling by including a display that shows the history of any data collection or data handling performed.

3.7 KORA as an Instrument for the Evaluation and the Design of SMTs
In 2008, the global market for security products and services exceeded the mark of 100 billion euros for the first time and has grown by about five to seven per cent every year since; about 30 per cent of this falls upon the European market. This is an indicator of the high expenses in this sector. An investment in a certain SMT is a long-term investment; no end user can afford misinvestments, given high acquisition and follow-up costs. It is thus very important for any decision maker to choose a security product that can be used in his or her own legal system without coming into conflict with the law. Furthermore, the SMT must be socially accepted; it must not deter potential passengers from travelling. Here legal evaluation and social evaluation mesh with each other: as shown above, constitutional norms are expressions of generally accepted social standards and norms. It is thus beneficial in more than one way to adhere to these constitutional norms when performing an evaluation. The concept of legal compatibility takes this up and tries to achieve a maximum of conformity with fundamental legal provisions using a qualitative approach, instead of just adhering to minimum standards. This leads to a broader social acceptance of a measure.
Finding such a product can be challenging because the market for security products is international. This means that it can be difficult to find a product that is compatible with the legal situation in the end user’s country. This is where KORA comes into play as a method for legal evaluation and legally compatible design. KORA should be of interest not just to end users, but also to manufacturers in the security sector that want to benefit from the continuing boom, enabling them to develop security products that are legally compatible and can thus survive in the marketplace and withstand the critical eyes of the public.
Parameters like those that can be found in Part A of the Annex to Commission Regulation (EC) No 272/2009 in the form of a catalogue containing acceptable methods for the screening of passengers, luggage and freight in civil aviation do not offer much assistance when selecting a concrete product. Such parameters offer nothing more than a list of methods that are acceptable in principle, but they do not offer indications on how a method has to be shaped precisely, both in a technological and an organisational sense. Furthermore, the fact that a measure is accepted on a European level does not mean that it is compatible with the constitutional framework of one of the member states. Unconstitutional SMTs however cannot and must not be authorised and operated.
The following chapters will demonstrate how KORA can be used for the legal evaluation of SMTs, thus going beyond its original purpose, the legally compatible design of information technologies. To achieve this, KORA has been adapted to the characteristics of this goal. At this point, it has to be emphasised once again that the aim of the method is not to attach a seal of approval to a certain product that merely indicates conformity with legal minimum requirements, similar to what the CE logo or the ECB-S certificate stand for in the field of product-specific conformity. Instead, the aim is a qualitative evaluation of the legal compatibility of a measure.
These guidelines will enable a fundamental legal evaluation of existing and future SMTs. In addition to this, KORA can and should be used for the legally compatible design of SMTs. The following paragraphs will give a detailed description of the individual steps a user has to follow to perform a legal evaluation of an SMT.
3.7.1 Pre-stage – Identifying the relevant fundamental legal provisions
First, in a pre-stage, the relevant fundamental legal provisions as the basis for the evaluation have to be identified. In Germany, the catalogue of fundamental rights found in the Basic Law is primarily relevant. At the European level, the Charter of Fundamental Rights of the European Union can form the basis. To be able to reduce such a catalogue of rights to those that are actually relevant for the evaluation, a preliminary evaluation is necessary.
3.7.2 Type and Functions of the SMT
The user of the method will at this point already have decided which type of SMT he or she wants to evaluate. This means that the start of the procedure is the decision in favour of a certain measure, for instance a system for biometric access control or video surveillance. To make this decision in a professional way, the user has to possess basic technological knowledge, as well as knowledge in the fields of security and counter-terrorism. Here the scenarios and the scenario building tool developed in work package 6 of the SIAM project can be helpful, as they give indications of the necessity and suitability of a measure. The second pillar of decision making in this context is the technological and social features of an SMT. Here, the results of work packages 2, 4 and 5 of the SIAM project can lend some assistance to the user. At the end of this step, the type and functions of the SMT to be evaluated will have been identified.
3.7.3 Fundamental legal provisions
After the basic functions of a measure have been isolated and carved out, the fundamental legal provisions can be identified. For this, it is necessary for the user to possess legal knowledge. A fundamental right is relevant, if its protected sphere is affected by the measure being evaluated. Furthermore, a fundamental right can become relevant where it is facilitated by the measure. To determine this, the chances and risks of the use of the SMT have to be examined. They are derived from the functions identified in the previous step. This is in line with the target to extract legal requirements from social principles that are the basis for legal norms. Depending on type and functions of an SMT, different fundamental rights will be affected.
It has to be kept in mind that the goals stated in fundamental rights do not simply stand side by side; they often come into conflict with each other, creating conflicts of goals. Such conflicts can occur in every stage of the KORA method. If possible, they should not be resolved prematurely but carried forward as far as possible, in order not to lose alternative solutions that may result from these conflicts of goals.
The carved-out functions and the fundamental legal provisions should be linked in a table in order to increase the clarity and traceability of the process:
       LP #1   LP #2   LP #3   …
F #1   ✓       ✓       ✓
F #2   -       ✓       -
F #3   ✓       -       ✓

A ✓ shows that a function (F) affects the range of protection of a fundamental legal provision (LP); a - shows that it does not.
Figure 11 – Example of a diagram of the functions of an SMT and the affected fundamental rights
Stage 1 – Deduction of Legal Requirements
What follows is the first step of concretisation in which the fundamental legal provisions are condensed and channelled into legal requirements. Where such concretisations already exist, for example in the shape of a ruling of the constitutional court, they can be resorted to. In any other case, the conventional methods of legal interpretation should be used.
It has to be noted that the principle of proportionality can serve neither as a fundamental legal provision nor as a requirement; rather, it is an implicit part of the KORA method. This results from the fact that the question of the proportionality of a measure – and thus of its material lawfulness – is an aspect of legal compatibility, which aims at a gradation of proportionality. Proportionality is thus not located on the level of legal requirements; instead, it is an overarching concept that pervades the evaluation as a whole and is ultimately absorbed by the concept of legal compatibility.
Stage 2 – Concretization into Legal Criteria
The legal requirements are now concretised into legal criteria by deriving from the legal requirements the basic requisites concerning the use of the SMT. In order to do this, rules have to be identified which determine how to fulfil the legal requirements with regard to the specific features, risks and conditions of the use of the SMT. The criteria thus derived are both connected to the technology as well as to the social and legal aspects. They are the bridge between the law and technology and herald the change in terminology from the legal terminology to the terminology of technology.
Stage 3 – Concretization into Technical Objectives
In the third stage, technical objectives are derived from the legal criteria. Since they can also contain organisational objectives that do not pertain to the concrete design of a technology, but rather to the environment and manner of its use, they could also more accurately be called technical and organisational objectives. The technical objectives are abstractions of concrete technological features. The concretisation from legal criteria is based on considerations about how to transform these legal criteria into basic functions of an SMT. The objectives thus developed are descriptions of functions and requirements in general terms. In this stage, alternative proposals can be developed to have a broader base for the comparison following in the final stage. Such alternative proposals can also facilitate a comparison between several SMTs that try to give different solutions to legal requirements.
If KORA is used in the context of the genesis and design of technology, technical (and organisational) objectives which are not concerned with technology design but rather with the use of technology must not be omitted; they remain relevant, because as early as the design process it has to be ensured that technology is not designed in a way that hinders or precludes certain legally compatible organisational options. On the contrary, the producer should work towards promoting organisational options which benefit basic rights. To that end, it is imperative that producers concern themselves with the organisational aspects and possibilities of later use on the level of technical objectives and account for them in the development process. Basic rights would benefit even more if producers were to pass recommendations for the implementation of their products and their organisational environment on to buyers and users. To realise this, it is again necessary for producers to engage actively with these aspects.
If the methodology is used in the context of the acquisition of an SMT by a decision maker, the organisational aspects of the use of the SMT should still be kept in mind, but only those technical objectives that concern the technological features of the SMT are directly relevant for the decision which SMT to buy. Still, it is important that organisational aspects – compiled together with technological aspects – are on hand as early as the acquisition phase for the comparison of SMTs, because they may, for example, indicate additional costs or spatial requirements and can thus be relevant for the decision. During the implementation and arrangement phase of the chosen SMT, the organisational aspects can finally take full effect.
Stage 4 - Comparison
Where the user evaluates a concrete product (or several products), the use of the method is concluded with a comparison of this product (or these products) with the technical objectives developed in the previous stage. If the user evaluates more than one SMT, he or she is advised to draft a table containing an overview as shown in Figure 12. Alternatively, the technical objectives can be used as a checklist for the selection of a suitable SMT. It has to be kept in mind that it is possible for an SMT to only partially comply with a technical objective. Also, when comparing several SMTs, it can occur that a number of candidates are equally compatible with the technical objectives. In such a case the user should fall back on non-legal factors to decide between these candidates.
        SM #1   SM #2   SM #3   …
TO #1   ✓       ✓       (✓)
TO #2   (✓)     ✓       ✗
TO #3   ✓       ✗       (✓)

A security measure (SM) fulfils a technical objective (TO) completely (✓), partially ((✓)) or not at all (✗).
Figure 12 – Example of a diagram when comparing SMTs
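To illustrate Stage 4, the comparison matrix can be given a simple numeric reading. The sketch below is purely illustrative: the scores (1.0/0.5/0.0), the measure names and the ranking function are assumptions made here, not part of the KORA method, which compares qualitatively:

```python
# Hypothetical sketch of the Stage 4 comparison: each security measure (SM)
# is scored against each technical objective (TO) as fulfilled fully (1.0),
# partially (0.5) or not at all (0.0). All names and scores are invented
# for illustration.
FULL, PARTIAL, NONE = 1.0, 0.5, 0.0

comparison = {
    "SM #1": {"TO #1": FULL,    "TO #2": PARTIAL, "TO #3": FULL},
    "SM #2": {"TO #1": FULL,    "TO #2": FULL,    "TO #3": PARTIAL},
    "SM #3": {"TO #1": PARTIAL, "TO #2": NONE,    "TO #3": PARTIAL},
}

def rank(matrix: dict[str, dict[str, float]]) -> list[tuple[str, float]]:
    """Rank SMs by total fulfilment of technical objectives. Ties remain:
    the text advises falling back on non-legal factors to break them."""
    totals = {sm: sum(tos.values()) for sm, tos in matrix.items()}
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

print(rank(comparison))  # SM #1 and SM #2 tie at 2.5; SM #3 trails at 1.0
```

The deliberate tie between SM #1 and SM #2 shows why the method cannot be an automatism: equally compatible candidates must be separated by non-legal factors.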
In this last stage, the guidelines deviate from the original application of the KORA method: concretising the technical objectives into technical design proposals, which would have to be adhered to during the development of a technology, is replaced by the above-mentioned comparison. This approach, however, is not new; Hammer/Pordesch/Roßnagel, for instance, have already described the possibility of making a comparison with available products.
In summary, the structure is as follows:

Figure 13 – KORA as a method for the legal evaluation of SMTs
Using a method such as the one proposed can entail significant benefits. It increases the social acceptability of a security measure and the durability of the decision made, since the decision is based on the most durable legal norms and principles. Furthermore, the decision becomes traceable. The method also facilitates rational decision-making, as it forces decision-makers to place their decision on a solid foundation, namely fundamental rights and principles, and requires them to concern themselves with the effects of the measure on those rights and principles. Overall, the attractiveness of the mode of transportation or transportation site where the method is used is increased: passengers will be less likely to opt for another mode of transportation or, for example, a different airport, if they have confidence not only that a state of security is maintained, but also that it is maintained while keeping in mind the effects of security measures on passengers and minimising any negative effects.

Potential Impact:
Impact assessments, as means to address the societal impact of research and development in the security domain, are increasingly being perceived as approaches that contribute to, rather than hinder, innovation. It needs to be emphasized, however, that an overly instrumental understanding of impact assessment as increasing the return on investment and increasing acceptance is inadequate. Rather, as is common sense in the social impact assessment literature and as SIAM and other research projects have repeatedly shown, impact assessments have the potential to contribute to innovation by reframing research and innovation activities in the security domain along the whole innovation process. The assessment support tool that has been developed by SIAM can be expected to have its greatest impact in the final two phases of the innovation journey: development and testing, and wider change. In the development and testing phase, security products are made fit for markets and operational use. In the wider change phase, security products are implemented and used in operational contexts. In both phases, decision-making requires the consideration of a large number of divergent perspectives and interests. SIAM’s direct impact in light of this challenge is the provision of a tool and of a number of guidelines structuring and supporting the assessment of the societal dimensions of security measures and technologies.
The assessment support tool allows users to plan, conduct, document, and review assessment procedures that include divergent societal dimensions and a large number of stakeholders. It entails a database filled with assessment criteria and questions that can be edited to fit different institutional contexts and technological specifications. The guidelines provide instructions and methodologies for collecting and analyzing information about societal impact. More specifically, they entail methodologies for threat assessments, security impact assessments, freedom infringement assessments, and legal evaluation. If followed through, these methodologies allow for a sound understanding of whether security measures and technologies can or cannot address risk, increase security, avoid or limit freedom infringements and adhere to legal frameworks. More often than not, this has the potential to reframe the grounds for investment decisions and to stimulate innovative thinking and approaches. Specifically, this toolkit is innovative in the sense that the underlying understanding of impact goes beyond ethics, economics and legal compliance by considering the impact of security measures and technologies on customers, employees, citizens in general, and on organizations. Furthermore, it is not limited to building scenarios but emphasizes the importance of evidence gathering. And finally, the SIAM assessment support tool promotes genuine participation in assessment procedures, especially through a consultation functionality that is built into the tool.
The SIAM assessment support tool has been designed to help end-user organisations better cope with the complexity of societal impact assessments. The socio-economic impact of this is the facilitation of more comprehensive assessment procedures to support decision-making in end-user organizations of security measures and technologies. This can help to find better solutions for both investors and customers, to prevent a loss of investments and to protect fundamental rights. Currently, the SIAM assessment support tool is being tested by one of the case study partners, and a software engineering institute at TUB is looking into ways to further extend and improve the tool with a view towards developing a business case.
The direct academic impact of SIAM is a contribution to the growing debate about responsible research and innovation and about impact assessments in the security domain. SIAM delivered a key publication in this regard in the journal Science and Public Policy and hosted three panels on impact assessments at the recent CPDP conference. With these activities, an effort was made to emphasize that, ultimately, impact assessments are political exercises that tap into fundamental debates about freedom, security, innovation and profit. The tendency to understand security as a field where profit can be made through an increase of technologies not only bears the danger of developing ever more cases with the potential to violate the fundamental rights of those scrutinized by the new technologies. It also bears the risk of promoting the inadequate perception that technologies can provide fixes for what are essentially societal problems. It is here that impact assessments have the potential to point out the limitations of security thinking, to provoke the need to envisage alternatives and, in cases where security solutions are deemed essential, to make them better both in terms of effectiveness and in terms of respect for fundamental rights.
The direct impact on policy debates that can be drawn from SIAM is that self-regulatory impact assessments cannot stand alone: coherent normative guidance is needed from the policy debate about what is deemed acceptable and desirable and what is not. Once technologies that have the potential to create unintended societal impacts are available, it is difficult to ban or limit their use, or to regulate them in a way that contains the unintended consequences. This highlights the need to consider impact assessments not only during the final two phases of the innovation journey, but to extend the concept across the whole innovation chain.

The contact details of the project coordinator are:
Dr. Leon Hempel
Zentrum Technik und Gesellschaft
Technische Universität Berlin
Hardenbergstr. 16-18
10623 Berlin
Phone: +49 (0)39 314 25373