Final Report Summary - LEILA (Law Enforcement Intelligence Learning Application)
The LEILA project aims to provide law enforcement organizations with an innovative learning methodology through the development of engaging learning experiences and gaming solutions designed specifically for the learning needs of the civil security intelligence analysis community.
Improving intelligence analysis requires a holistic view that addresses these various issues together. To reach this ambitious goal, the project was designed to deliver the following research outcomes:
•Analyze and describe the specific skills and competences of intelligence analysts.
•Explore their characteristics and abilities; identify learning needs, traps and biases, and areas for improvement.
•Design an innovative methodology and a set of learning experiences to address the specific needs of intelligence analysts.
•Provide a set of serious games to enable trainees to acquire the skills and competencies required by their role.
Specifically, the LEILA learning framework addresses the enhancement of the knowledge and skills identified as crucial for Intelligence Analysts (IA), such as: awareness of cognitive biases, practice of critical thinking, filtering and analyzing massive amounts of data, the capability to draw relevant conclusions and take appropriate decisions, decision making under social and time pressure, creative intelligence, collaboration capabilities and team-based decision making, and reporting and communication skills.
The LEILA project introduces radical innovations at two levels: (I) the conceptual foundations supporting the project's development, and (II) the learning outcomes stemming from the use of the games for LEA intelligence analysis training.
As far as conceptual foundations are concerned, the innovations are twofold:
a. an in-depth exploration of issues spanning a variety of research fields, such as experimental psychology, Bayesian approaches to uncertainty, models of formal logic and reasoning, preference elicitation, and game-theoretic models of decision making;
b. a synergetic network across the fields mentioned above, which establishes a consistent and efficient conceptual framework for the design of the LEILA serious games.
LEILA provides a fully featured training package that addresses all the training needs in terms of subject matter, learning context variables (e.g. biases, barriers and enablers), deployment and supporting material:
- by developing new serious games that address the cognitive biases (e.g. (1) Confirmation Bias, (2) Fundamental Attribution Error, (3) Bias Blind Spot, (4) Anchoring Bias, (5) Representativeness Bias, and (6) Projection Bias) and gaps (e.g. critical thinking) that may affect intelligence analysts' operational performance;
- by providing variable learning contexts via the use of existing assets, such as serious games on crisis management, collaborative leadership and other application fields, to address all the competence development enablers of intelligence analysis, such as attention, stress and time management, collaboration and leadership under highly stressful conditions, proactivity and reactivity, prioritization, and the avoidance of decision-making traps, so that intelligence analysts can perform at their best;
- by deploying a set of facilitated learning experiences that embed serious game sessions in workshops addressed to intelligence analysts, to their instructors and to those responsible for their training, and which help consolidate learners’ achievements and lay the basis for training curricula;
- by providing supporting material through the deployment package, which includes guidelines for setting up and running the serious games as well as teaching notes for facilitation, briefing and debriefing in workshops.
Project Results:
Main S&T results/foregrounds
User requirements and learning needs specification (WP2)
Main activities
One of the initial steps taken by the LEILA consortium was to investigate and understand the psychological factors and cognitive processes relevant to the intelligence analysis community, the current training approaches, gaps and areas for improvement, as well as the learning needs and user requirements for specific education and training using serious games platforms. This assessment was conducted by means of an extensive review of the available literature and through a series of surveys and workshops organised by the consortium with end users. Through these, the premises for defining the cognitive and decision biases in IA were established; these constitute the input for the conceptual foundations of the LEILA learning methodology.
Our research efforts started with an understanding of the contemporary Law Enforcement Intelligence Community and of the coordinates that define its place in today's society. By means of an extensive literature review, as well as through a series of surveys of quantitative and qualitative intelligence analysis practice in Defence, Policy and National Security organizations, our research emphasizes the correlation between the essential purposes of the law enforcement intelligence function (prevention and protection) and the different types of intelligence (strategic, operational and tactical intelligence). We also highlighted the main challenges for the agencies of the law enforcement community: the ability to develop a culture of information sharing and a culture of awareness.
Figure 1: The fundamental difference between prevention and protection
In defining the psychological factors and cognitive processes relevant to intelligence analysis, we started from the debate that polarizes the development of intelligence analysts today: whether almost anyone can be trained in this field, or only specific individuals with natural qualities and traits can succeed. In our view, great intelligence professionals are made of both strong character and strong cognitive traits. A team of specialists in cognitive psychology evaluated different types of intelligence analysis (i.e. descriptive, explanatory, interpretive and estimative) as a starting point for answering the fundamental question of what quality of mental and cognitive processes an intelligence professional requires in order to perform specific analytical tasks successfully. Based on the resulting ratings, a cluster analysis was performed, and the relevant characteristics, knowledge, skills and abilities required by the IA in order to provide accurate estimations and predictions were selected. An important challenge for our research was to establish a connection between these relevant factors and the problems facing the intelligence analyst.
Drawing on experts in the fields of cognitive and experimental psychology, we examined the extent to which analysts possess an accurate understanding of their own mental processes. Many functions associated with perception, memory, and information processing are conducted prior to, and independently of, any conscious direction. Intelligence analysts do not approach their tasks with empty minds: their understanding of events is greatly influenced by the mind-set or mental model through which they perceive those events. Perception is therefore a factor of particular importance for intelligence analysis. If analysts have good insight into their own mental model, they should be able to identify and describe the variables they have considered most important in making judgments. This is precisely the connection between perception and the problems facing intelligence analysts: the circumstances under which accurate perception is most difficult are exactly the circumstances under which intelligence analysis is generally conducted, dealing with highly ambiguous situations on the basis of information that is processed incrementally, under pressure for early judgment. In such circumstances, the analyst's own preconceptions are likely to exert a greater impact on the analytical product. Once this step was completed, the consortium proposed to introduce a new thinking skill, namely the thinking disposition. In our opinion, the analyst must be willing to make the extra effort to think creatively and critically; consequently, the thinking disposition acts as a trigger for superior cognitive activity.
Figure 2: The LEILA Psychological Insight Approach
We concluded that categorizing psychological factors and cognitive processes, through the development of an ontology framework, will facilitate the work of intelligence analysts by providing clarifications that help analysts become more aware of blocking points and interpretation mistakes.
At the same time, the LEILA consortium analyzed the current intelligence training approaches at the individual (ILN), group (GLN) and organizational (OLN) levels. Our research started from the premise that education and a continuous learning process are crucial for intelligence analysts to attain the level of excellence required by this type of activity. On a theoretical level, an important finding was the difficulty of identifying, within the intelligence community, a common understanding of intelligence management, of training and tools for augmenting the overall analysis process, of the analysts' competencies, and of the analysis of intelligence errors and failures. This conclusion is associated with the fact that the various intelligence organizations tend to structure their work in a way that strongly depends on their specific preferences, on the nature of the challenges they regularly face, and on the context in which they operate. Moreover, too often they expect modern technology to translate automatically imported data into meaningful and actionable information.
Figure 3: Game-Based Approaches in IA Training
Once this step was completed, we performed an in-depth assessment of every phase of the Intelligence Analysis Cycle, which is broadly recognized as the foundation of the intelligence analysis process. Consequently, we proposed a cybernetic model of the Intelligence Analysis Cycle that includes seven phases (Direction/Tasking; Collection; Evaluation; Collation; Analysis; Inference development; Dissemination) with an evaluation loop spanning all phases, allowing a return to the appropriate stage so as to shorten the reaction time. For each phase of this cycle we described the main activities and the knowledge, abilities, skills and characteristics to be captured and translated into the game platform / software application. Also, starting from user requirements, we highlighted the cognitive biases that emerged as candidates to be addressed in the LEILA learning experiences. These were used as the main input for WP4 in order to define the Intelligence Analysts' Competence Development Enablers.
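To make the structure of this cybernetic model easier to picture, the following sketch (in Python) enumerates the seven phases and an evaluation loop that can send the process back to an earlier stage. The phase names come from the text above; the control logic and the example are our own simplifying assumptions, purely for illustration.

    # Illustrative sketch of the seven-phase Intelligence Analysis Cycle with an
    # evaluation loop. Phase names are taken from the report; the control flow is
    # a simplifying assumption for illustration only.
    PHASES = [
        "Direction/Tasking", "Collection", "Evaluation", "Collation",
        "Analysis", "Inference development", "Dissemination",
    ]

    def run_cycle(evaluate, max_steps=20):
        """Walk through the phases; after each phase, 'evaluate' may return the
        index of an earlier phase to go back to (the evaluation loop), or None
        to move on to the next phase."""
        i, steps = 0, 0
        while i < len(PHASES) and steps < max_steps:
            phase = PHASES[i]
            print("Executing phase:", phase)
            back_to = evaluate(phase)
            i = back_to if back_to is not None else i + 1
            steps += 1

    # Example: after Analysis, send the process back to Collection once.
    state = {"returned": False}
    def evaluate(phase):
        if phase == "Analysis" and not state["returned"]:
            state["returned"] = True
            return PHASES.index("Collection")
        return None

    run_cycle(evaluate)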
Figure 4: The criminal intelligence cycle
Through continuous interaction with end users, our research focused on the analysis of the contemporary curricula, resources, tools, tactics, techniques and procedures currently used for training in the IA environment. After presenting a comparative analysis of teaching methods and specific analysis techniques, according to user learning requirements and international trends, we proposed, alongside creative and critical thinking, the introduction of the thinking disposition as a new thinking skill: the analyst must be willing to make the extra effort to think creatively and critically, so the thinking disposition acts as a trigger for superior cognitive activity. The LEILA consortium's research continued with a focus on game-based approaches in IA training. In this respect we analyzed the major outcomes of European research projects in the field, as well as the main existing IT tools and serious games developed in or for the IA environment. The main challenge we faced during this task was to argue that serious games are a promising approach to cognitive bias mitigation. Analyzing the existing game-based approaches addressing cognitive biases, we concluded that, in the context of the LEILA project, we are able to design learning experiences and game-based scenarios that increase IA awareness, help analysts recognize the manifestation of a bias, and provide them with a range of de-biasing techniques as well as with strategies to compensate for the errors resulting from biases. Finally, we developed a diagnosis of training needs at the individual (ILN), group (GLN) and organizational (OLN) levels, and we identified the gaps and areas for improvement (Group Processes in Intelligence Analysis, Social Categorization and Intergroup Dynamics).
Figure 5: The diagnosis of training needs
Based on the final categorization of technological solutions that analysts can use to support the phases of intelligence analysis, the LEILA consortium prepared an operational report which includes a set of user requirements and learning needs for improving intelligence analyst training using serious games platforms. To better inform it, an electronic user requirements survey was conducted with representative end users from NDU, the Department of Intelligence of the ROU Ministry of National Defense, and the ROU Border Police. Starting from current developments in the field of serious games and the technology used for training intelligence analysts, we addressed the specific attributes that represent unique requirements for the IA. Through continuous interaction with end users and based on several recent cases, we addressed the negative consequences of analysis mistakes and integrated the resulting lessons learned. From this perspective, the main challenge faced by the LEILA consortium was the large number of reasons for which intelligence analysis can prove inadequate. We assume that a key reason is that intelligence gathering and intelligence analysis are erroneously considered as separate disciplines, causing a time lag in the intelligence analyst's response, as analysts avoid raising any alarms before it is clear that a situation is dire.
The knowledge derived from the learning experiences was correlated with previous outcomes through a complex competencies matrix. In this matrix we merged the identified user requirements and learning needs [for experts (analysts), learners (students), and trainers/instructors] with the main functionalities of the ICT tools to be developed within the LEILA project framework.
Figure 6: Set of user requirements and needs for intelligence analysis ICT tools
Main scientific results
The LEILA project has been a great opportunity to share thoughts and experiences with dedicated specialists and experts, and to set up a collaborative framework for the integration of innovative learning technologies into the training of intelligence analysts, with the potential to bring added value to the entire LEA, defense, public order and national security education system. The LEILA approach and methodology have made possible the transition from a classical, predominantly theoretical approach to intelligence analyst training to a serious-games solution that leverages ICT technologies, in line with the trends registered at EU and NATO level.
It has also been a great opportunity to improve our educational curricula through the integration of the LEILA learning experiences into the university's set of learning tools and methodologies. The synchronization between the intelligence analysis cycle and the military decision-making process has been improved. The common database of the LEILA platform was used for the generation of valuable intelligence products and military estimates, for the synchronization of effects across all operational domains and, subsequently, for the evaluation of real-time events in various contexts and scenarios, with the purpose of stimulating and developing the students' ability and capacity for analysis and decision making. Moreover, LEILA has the potential to serve as the starting point for the integration of educational programmes pertaining to the area of defense, public order and national security into a common conceptual framework for training, based on collaborative and parallel work.
Through its operational objectives, the LEILA project made a direct contribution to the development of an academic culture of intelligence. Its approach established a new framework for the intelligence analysis paradigm, risk management, early warning, situational awareness and crisis management at national (e.g. MoD) and international (e.g. NATO, EU) institutional level, in the fields of security, defense and counter-terrorism. The variety of learning experiences, the modeling and simulation of specific scenarios, and the operation in an environment characterized by volatility, uncertainty, complexity and ambiguity (the VUCA approach) emphasized the importance of computerized tools in real-time decision-making processes. Such tools are meant to share information, intelligence and assessments in a dynamic collaborative environment based on common platforms (e.g. the LabRint software application) at the institutional level and to enhance strategic partnerships. Knowledge has value, but intelligence has power, thanks to artificial intelligence and expert systems that contribute to refining and enriching knowledge databases through successive iterations of the analysis processes. Collaborative networks enable the expansion of multilateral and multidisciplinary cooperation and a target-oriented approach requiring integrated expert teams, as support for the systematic collection and analysis of data and information for the development of relevant intelligence products.
Some of the knowledge the LEILA team gained during the project timeframe has been disseminated at several academic events and scientific conferences through the publication of six scientific articles, namely: Predictive Analysis in Intelligence Analysis; Intelligence analysts’ professionals training through serious games solutions; Lessons Learned from mistakes in intelligence analysis; Cybersecurity by minimizing attack surfaces; Learning technology in support of intelligence analysis – challenges and lessons identified; and Trends and challenges in intelligence education and training. The LEILA team has also published a book (Pillars and Centers of Gravity for Intelligence Analysis).
The book presents the LEILA project team's findings, obtained through a challenging scientific research and teamwork effort conducted by multidisciplinary, multinational and multicultural teams. Its primary target audience is the intelligence community, together with trainers and students, not only in law enforcement organizations, the military and special services, but also in business intelligence.
Conceptual foundations of LEILA (WP3)
Main activities
LEILA’s scientific research is, to a significant extent, based on the observation that, beyond the mastering of digital technologies:
• building serious games for training intelligence analysts requires a synergy between various conceptual foundations, providing the capacity to collect data, translate them into appropriate information, and enable intelligence analysis beneficiaries to take appropriate decisions;
• increasing the efficiency of automated systems dedicated to information analysis may imply exploring developments of artificial intelligence in directions other than the ones followed until now, while establishing connections with the latter.
1) Decision and cognitive biases
At both levels of information analysis and decision making, accumulated experience has shown the widespread existence of, among others, two categories of biases: decision biases on the one hand and cognitive biases on the other. These categories may sometimes interfere, for instance when the decision to be taken concerns the selection of a particular hypothesis on the basis of the data available, thus making the analysis, and the conclusions to be drawn from it, more confusing. Hence the method proposed by LEILA: first, analyze these two categories of biases separately; then, consider the possible consequences of some of their main interactions. To that end, relying on an abundant literature addressing the two topics as well as on the user requirements determined in WP2, LEILA set up a list of biases specifically relevant to intelligence analysis, with some possible applications to other fields such as economic intelligence, strategy or organization. In particular, LEILA insists on the fact that each category of biases has to be considered from two different perspectives:
• the “perspective of oneself” in which the intelligence analyst uses the results of the research to become aware of his / her own biases (when such biases exist) and hence correct his / her personal interpretation of his / her environment;
• the “perspective of the other”, in which he / she uses the results of the research to better understand the behavior of the individuals or groups that are under scrutiny.
Four different cases thus need to be considered a priori, depending on whether biases are present in each of the two parties mentioned above.
As the intelligence analyst doesn’t know a priori which case corresponds to the present situation, he / she has first to analyze his / her own situation with respect to possible biases.
Now, among the main subjects pertaining to decision biases, is of course the issue of rationality, which has been the subject of a vast literature leading to many different conclusions. On the basis of several examples, some coming from the classical literature, others developed in the project, LEILA points out that:
• behaviors might sometimes be considered irrational, while in reality they are rational with respect to the decision-maker's moral and / or cultural values
• different types of rationality may be considered, possibly related to different types of psychology
• nevertheless, experiments of different natures have shown that, in everyday life, irrational behaviors occur quite often, their source being the influence exerted by some specific environments
• in that respect, motivations can play the role of filtering tools, for selecting a particular behavior
LEILA also proposes a list of competence development enablers, both at the individual and team levels. Then the project matches conceptual foundations with users’ requirements as expressed by Law Enforcement Agencies (LEAs).
2) Inference schemes
Considering that, besides decision and cognitive biases, other factors may also affect decisions, such as incomplete or imperfect information and the mode of reasoning (deductive, inductive, etc.), LEILA draws lessons from business management and finance in terms of data mining and predictive algorithms, developed in particular through the Bayesian Networks currently used to build inference schemes.
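Since Bayesian Networks rest on repeated applications of Bayes' theorem over conditional probabilities, a minimal, generic illustration of the underlying update step may help; the hypothesis, evidence and numbers below are invented and are not taken from any LEILA deliverable.

    # Generic illustration of Bayesian updating of a hypothesis H given evidence E.
    # Real Bayesian Networks chain many such conditional-probability updates over
    # a graph of variables; the figures here are invented for illustration.

    def posterior(prior_h, p_e_given_h, p_e_given_not_h):
        """Return P(H | E) via Bayes' theorem."""
        p_e = p_e_given_h * prior_h + p_e_given_not_h * (1.0 - prior_h)
        return p_e_given_h * prior_h / p_e

    # Hypothesis H: "the observed activity is hostile reconnaissance".
    # Evidence E: "repeated scans from the same address range".
    print(posterior(prior_h=0.10, p_e_given_h=0.70, p_e_given_not_h=0.05))
    # -> roughly 0.61: a weak prior, but the evidence is far more likely under H.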
Besides Bayesian Networks, two other main standard approaches used to develop such schemes are analyzed: Dialog Games on the one hand and Case-Based Reasoning on the other. These approaches are then compared with the current practices of LEAs in terms of inference scheme development. Focusing on probabilistic graphical models, LEILA proposes an alternative approach based on a particular category of qualitative matrix games (in the sense of Game Theory), called Games of Deterrence, which have already been applied to structuring argumentation by building a one-to-one association between these games and graphical models of the games, formed by bipartite graphs called Graphs of Deterrence.
More precisely, in a first stage corresponding to binary logic, given two arguments A and B, there is an edge with origin A and extremity B if and only if A being true implies that B is false. Applying this rule to the data sets collected by intelligence analysts makes it possible to build the corresponding inference scheme, which in turn, through the one-to-one association between graphs and games, makes it possible, by solving the games, to draw conclusions about the truth or falsity of the various hypotheses that might a priori emerge from the dataset.
In this respect, LEILA is in line with the recommendations made by Richards J. Heuer in his book “Psychology of Intelligence Analysis”, in which he recommends trying to disprove rather than to prove. Another core advantage of matrix Games of Deterrence is that, unlike some other graphical models, finding the solutions of these games does not require a process whose complexity increases exponentially with the number of nodes.
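The binary-logic stage just described can be made concrete with a small sketch. We stress that this is only an approximation built on our own assumptions: it treats the rebuttal graph like a standard argumentation framework and propagates truth values from the roots, which mimics, but does not reproduce, the actual Games of Deterrence solution engine used in LEILA.

    # Sketch of the binary-logic stage: an edge A -> B means "A true implies B false"
    # (A rebuts B). Roots (no incoming edges) are taken as true, and labels are
    # propagated iteratively; this is a simplified stand-in for solving the game.

    def label(arguments, rebuts):
        """arguments: iterable of names; rebuts: set of (attacker, target) edges.
        Returns a dict mapping each name to 'true', 'false' or 'undecided'."""
        status = {a: "undecided" for a in arguments}
        attackers = {a: {x for (x, y) in rebuts if y == a} for a in arguments}
        changed = True
        while changed:
            changed = False
            for a in arguments:
                if status[a] != "undecided":
                    continue
                if any(status[x] == "true" for x in attackers[a]):
                    status[a] = "false"      # rebutted by a proposition held true
                    changed = True
                elif all(status[x] == "false" for x in attackers[a]):
                    status[a] = "true"       # no surviving rebuttal (roots included)
                    changed = True
        return status

    # Toy inference scheme: report R rebuts hypothesis H1, and H1 rebuts H2.
    print(label(["R", "H1", "H2"], {("R", "H1"), ("H1", "H2")}))
    # -> {'R': 'true', 'H1': 'false', 'H2': 'true'}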
3) Preference elicitation
As already seen, the relations that may exist between the elements of the data set on which intelligence analysts have to work may also depend on the values of the individuals or groups under scrutiny, as translated into their preferences.
For instance, in terms of the bounded rationality approach used in Games of Deterrence, according to which the outcomes of interactions between the players may be either acceptable or unacceptable, it is important for the intelligence analyst to determine which states are acceptable and which are not for the individuals or groups under scrutiny. If this cannot be determined, the intelligence analyst may find himself / herself in a situation of incomplete information, which makes it significantly more difficult to draw appropriate conclusions.
Therefore, whatever the characteristics of the information available or the reasoning mode, it may be of core importance to elicit the preferences of the parties under scrutiny. To that end, different approaches may be envisaged. After describing the Rubinstein & Salant preference elicitation model, LEILA, in consistency with the inference scheme analysis developed above, proposes a preference elicitation model using a particular multi-criteria decision-making (MCDM) algorithm based on non-fuzzy matrix Games of Deterrence (defined as Games of Deterrence in which each strategy's playability is represented by a binary number), thus making it possible to use the same engine as the one used for inference scheme determination.
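The MCDM algorithm itself is specified in the WP3 deliverables rather than reproduced here, so the following fragment is only a hypothetical illustration of the non-fuzzy idea: each option receives a binary playability (acceptable or unacceptable) on every criterion, and only options playable on all criteria survive the screening.

    # Hypothetical illustration of binary (non-fuzzy) screening across criteria.
    # An option gets playability 1 on a criterion if it meets that criterion's
    # acceptability threshold, 0 otherwise, and is retained only if playable on
    # all criteria. This is not the LEILA engine, only the binary idea behind it.

    def screen(options, thresholds):
        """options: {name: {criterion: score}}; thresholds: {criterion: minimum}."""
        retained = {}
        for name, scores in options.items():
            playability = {c: int(scores[c] >= t) for c, t in thresholds.items()}
            retained[name] = all(playability.values())
        return retained

    hypotheses = {
        "H1": {"consistency": 0.8, "source_reliability": 0.7},
        "H2": {"consistency": 0.9, "source_reliability": 0.4},
    }
    print(screen(hypotheses, {"consistency": 0.6, "source_reliability": 0.6}))
    # -> {'H1': True, 'H2': False}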
Moreover, just like for decision or cognitive biases, there are two perspectives for preference elicitation: indeed the use of the MCDM algorithm proposed by LEILA can determine:
• the “preferences of the other”, the latter being an individual or a group under scrutiny
• the “preferences of oneself”, when the analyst has to examine several hypotheses and decide which one is most in line with the situation under analysis. In that respect, the matrix Games of Deterrence approach is also consistent with the Analysis of Competing Hypotheses (ACH) proposed by Richards J. Heuer as a tool for intelligence analysis.
4) Games of Deterrence and Bayesian Networks
The matrix Games of Deterrence thus offer an alternative to the classical approaches used in intelligence analysis, at the levels of both preference elicitation and inference schemes. With respect to the latter, many of these standard approaches are based on Bayesian Networks, so two different, albeit related, questions arise:
1. if for Intelligence Analysis, Games of Deterrence seem to be an interesting alternative to Bayesian Networks, are there some connections between them?
2. if so, can these connections be exploited in the Games of Deterrence approach to intelligence analysis?
To answer these questions, the first thing to examine is what seems a priori to be a major difference between the two approaches: the logical background. As their name indicates, Bayesian Networks are based on probability laws satisfying Bayes' theorem on conditional probabilities. In other words, the logic supporting Bayesian Networks is not a binary one, unlike that of the non-fuzzy matrix Games of Deterrence introduced above. In fact, this is not a problem, since the development of Games of Deterrence has been extended to fuzzy games, in which the strategies' playability indices can take any value between 0 and 1.
LEILA shows that, beyond the fact that a formalism used to develop one theory may often also be used to develop another, the “fuzzy” extension of Games of Deterrence makes it possible to position Propositional Logic as a potential bridge between the two theories. This bridging role appears clearly, for instance, when one considers that the implications resorted to in Bayesian Networks can easily be translated into rebuttals, as used in the Graphs of Deterrence representation.
One particularly interesting issue pertaining to the possible connections between the two associated graphical models is that of the priors, i.e. the probabilities associated with the roots in the Bayesian Network representation. Indeed, in the Graphs of Deterrence representation, it is assumed that a root, i.e. a node with no ascendant, corresponds to a proposition that is true, and hence its playability is necessarily equal to 1. This seems a priori to indicate a disruption between Bayesian Networks and fuzzy matrix Games of Deterrence. However, it has not been proved that, for a given node, its probability in the Bayesian Network representation should take the same value as its playability in the Graph of Deterrence representation; only that, if a proposition is true, both its probability in the Bayesian Network representation and its playability in the Graph of Deterrence representation equal 1. To deal with this apparent contradiction, LEILA develops a method based on the concept of the “Hidden Part of the Graph”. The idea is that if the prior associated with a root of the Bayesian Network does not equal 1, then its associated node in the Graph of Deterrence representation is not really a root but has antecedents which make its playability different from 1. Moreover, the connection between the two representations established by LEILA shows that the playability value deriving from the value of the prior provides information about the antecedents of these nodes. Furthermore, the various cases addressed by LEILA show that this information may be “structural”, in the sense that it may suggest possible structures for the hidden part of the graph.
5) Playability and probability laws
The results presented above do not solve all the problems pertaining to the connections between the Bayesian Networks approach and the Games of Deterrence approach, but they pave the way for further exploration. This is what LEILA has done, structuring the comparison between the two approaches with respect to the existing probability laws (in particular, the ones known in the literature as the Kolmogorov axioms). Among other things, this has led to resorting to another existing extension of matrix Games of Deterrence, in which the players may select more than a single strategy, whence the name Multi-Strategy Games. A multi-strategy of a player is a subset of that player's strategic set, with which is associated a rule relating its playability to the playability of the strategies composing that subset. More precisely, two categories of multi-strategies have been distinguished:
• conjunctive multi-strategies, which are considered playable if and only if all strategies composing the subset are playable
• disjunctive multi-strategies, which are considered playable as soon as one of the strategies composing the subset is playable
The introduction of these two categories has already enabled LEILA to establish connections, illustrated in the sketch after this list, between:
• products of probabilities and conjunctive multi-strategies
• sums of probabilities and disjunctive multi-strategies
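The correspondence just listed can be pictured with a small numerical sketch. The combination rules below (a product for conjunction, a truncated sum for disjunction) are our own illustrative reading of that correspondence, not the formal results established in the project.

    # Illustrative reading of the correspondence between multi-strategies and
    # probability operations. In the binary case, a conjunctive multi-strategy is
    # playable iff all components are playable, and a disjunctive one as soon as
    # one component is; with fuzzy playabilities in [0, 1] we mirror this with a
    # product and a truncated sum.

    def conjunctive(playabilities):
        p = 1.0
        for x in playabilities:
            p *= x                             # product <-> conjunction
        return p

    def disjunctive(playabilities):
        return min(1.0, sum(playabilities))    # (truncated) sum <-> disjunction

    binary = [1, 0, 1]
    print(conjunctive(binary), disjunctive(binary))   # -> 0.0 1.0 (binary rules recovered)

    fuzzy = [0.8, 0.5]
    print(conjunctive(fuzzy), disjunctive(fuzzy))     # -> 0.4 1.0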
Main scientific results
On the whole, the research developed by the LEILA project on conceptual foundations has made it possible to:
• clarify and structure the set of decision biases and cognitive biases that need to be taken into account in the field of activity pertaining to intelligence analysis
• propose an innovative artificial intelligence approach to information analysis, based on a particular category of games (in the sense of Game Theory), which can address issues pertaining to preference elicitation and the analysis of competing hypotheses, as well as inference scheme development and analysis
• develop an analysis of the relations between the Games of Deterrence approach followed in the project and the more standard Bayesian Networks approach (this resulted in particular in the publication of a chapter entitled “Bayesian Networks and Games of Deterrence” in the volume “Recent Advances in Game Theory and Applications” of the Static & Dynamic Game Theory: Foundations & Applications series, Springer Verlag, 2016)
Design of learning and educational experiences (WP4)
Here our efforts concentrated on providing a sound learning methodology for the LEILA Learning Experiences and on iteratively producing the final design of the Serious Games implemented in WP5, going through two Pilot phases which significantly improved the initial design and helped us discover and validate new deployment opportunities. The Figure below provides an overview of the work completed in WP4, emphasizing the associated deliverables and the progress resulting from the Pilot phases:
Figure 7: WP4 Overview
LEILA Learning Methodology (T4.1 and 4.2)
The conceptual framework underlying the design of the LEILA Learning Experiences was described in the first-year Deliverable D4.2. The key elements of this framework have remained unchanged as the basis driving the design of the LEILA Learning Experiences. Nevertheless, exposing target learners to the first Prototypes reflecting our initial design generated a large number of insights on the Prototypes themselves and on their potential deployment in different target contexts (LEA students and professionals, as well as non-LEA students and professionals for whom high-quality individual and collective intelligence analysis is critical). This led to the redesign of the LEILA Learning Experiences described in D4.7 and to the implementation of a new set of Prototypes described in D5.2 and D5.3. The redesign was grounded in the same conceptual framework, but 3 additional elements emerged as worth focussing on when redesigning the Learning Experiences:
(1) Modularity of Learning Experiences
(2) Focus on Collaborative Learning
(3) Focus on Information Structuring and Inference Schemes
These 3 elements which determined the redesign of the LEILA Learning Experiences are described in D4.6. The implications of adding these 3 elements to our LEILA Conceptual Framework are directly visible in the new Design Deliverable D4.7 as well as directly in the new version of the Prototypes (described in D5.2 and 5.3) validated successfully during the second Pilot Round (see D6.4 and D6.5).
Designing the LEILA Learning Experiences (T4.3 and 4.4)
After the first Design Phase and Pilot Round in Summer 2015, the LEILA Learning Experiences were significantly redesigned to reflect the new priorities and focus described in the Final LEILA Learning Methodology Definition deliverable (D4.6). Major revisions had to be made to the LabRint Learning Experience, but the most important revisions of our initial design assumptions concerned the VUCA and WhataTeam Learning Experiences, as we report in the following section, since it is important to understand the developments in WP4 during the second part of the project.
As described in D4.6 the first round of piloting stimulated a redesign and enhancement of the initial Prototypes into a set of independent but inter-related VUCA Learning Modules addressing the 24 Critical Competences identified and validated during the first pilot round.
In order to cover all the competences targeted, the consolidation of the initial prototypes led to the definition of 7 Modules addressing:
(1) Traditional Competences addressing Individual Performance (see the first column of the Table below).
(2) Emerging VUCA Competences (listed in the lower part of the first column).
(3) Competences addressing Collaborative Performance at the Team level (second column)
(4) Competences addressing Collaborative Performance at the Organizational level and beyond, i.e. when operating in cross-unit or cross-organizational contexts (third column)
Figure 8: Classification of 24 Critical Competences
When it comes to “Traditional” Competences (those which are critical to guarantee high-quality Intelligence Analysis built on correct inferences and, as far as possible, free of cognitive biases), this is an area extensively covered by the LabRint Learning Experience. In the initial design of the VUCA Learning Experience this role was played by the FIFA WorldCup Game, which is based on an interesting but simple crisis scenario (details on this component can be found in D4.3 and D4.4). During the first pilot round the prototype was appreciated, but rather as a “basic exercise” or an “ice-breaker”. We therefore decided not to develop this component much further, and to package it as a stand-alone “exercise” (particularly for students, as professionals might find it too basic), to be deployed as a “warm-up” before challenging the learners with less basic and much more complex Learning Modules (→ VUCA FIFA Learning Module).
When it comes to addressing VUCA Competences (which were confirmed as truly critical by the participants of the first pilot round), we decided to address them through 2 VUCA Learning Modules: a relatively simple one, the VUCA EQ (Estimate Quality) Game Module, and a more complex one, the VUCA WaD (WhataDay) Simulation Module. The first uses a playful approach to address the issue of Quality when producing, using and consolidating Estimates (confirmed to be an extremely important competence for intelligence analysts operating in VUCA contexts characterized by high levels of uncertainty and ambiguity) and the trap of Overconfidence. The VUCA EQ Game Module was therefore developed further based on the insights gained during the first pilot round. A number of special versions (French, Telco, Energy/Oil&Gas) were developed to expose other target users to this Module as well, and it proved to be one of the most successful VUCA Learning Modules during the second pilot round too. The second learning module addressing VUCA competences is the one based on the WaD Simulation, which was already rated extremely highly during the first pilot round (see Figure below). Here we concentrated on developing additional material to support the debriefing of the simulation experience: two videos and an interactive Online Debriefing that players access after completing the Simulation, plus the automatic real-time generation of Reports providing feedback both to the learners (to support self-assessment and self-awareness) and to the Facilitator/Trainer (to support debriefing sessions with a group).
When it comes to the competences addressing Collaborative Performance in Teams (confirmed as a very important trend in how Intelligence Analysts will increasingly work in the future), most of the effort went into enhancing the VUCA WaT (WhataTeam) Simulation Module, also because this is the component tested during the first pilot round that received the highest ratings in terms of learning value (see Figure above), even higher than the VUCA WaD (WhataDay) Simulation Module, which was very highly appreciated by all the LEA participants in our first pilot round. We also decided to keep the VUCA TV (Team Values) Game Module, although without focussing on further development of this prototype beyond full testing, debugging and packaging as a stand-alone “exercise” that teams might want to engage in, particularly after having completed the VUCA WaT (WhataTeam) Simulation Module. This second module addressing collaborative performance in teams provides a basis for future developments, particularly if demand for the competence of operating effectively in diverse and distributed teams increases in the future.
Finally, when it comes to understanding how to increase performance through effective collaboration at the organizational and inter-organizational level, this was recognized as another very highly rated competence that Intelligence Analysts readily admit is not yet well developed in most Law Enforcement Agencies, which tend to operate separately and non-collaboratively, even among units of the same organization, leading to problems like the one we have pointed to in the FBI – 9/11 narrative. We therefore focussed on developing 2 Modules addressing this set of critical competences. The first is the VUCA CB (Collaboration Barriers) Game Module, which was developed further and enhanced by a Collaboration Diagnostics Tool (in English and French) that learners can apply to their own organizations, as well as by the production of video components supporting its online deployment as a stand-alone module, the debriefing part of the Game, and instructions for running the game in a collaborative rather than individual way. The second module we decided to focus on and enhance is the LEILA Playground, given the high potential of this tool to expose learners to the concept and the experience of self-driven Collaborative Learning in a community of peers (as described in D4.6). To facilitate the adoption of this type of more complex (because still relatively unfamiliar) learning and knowledge management environment, we developed the VUCA Web 2.0 Platform Module, which introduces the LEILA Playground to the learners and helps them familiarize themselves with it step by step and generate value from accessing it (to find relevant follow-up content, e.g. after having completed one of the other VUCA Modules or the LabRint Learning Experience) and from engaging in online exchanges with peers.
The Figure below provides an overview of the newly designed VUCA Learning Modules. In Deliverable D4.7 we have described the key characteristics of each Module resulting from the re-design stimulated by the first pilot round by indicating for each one its specific focus, the time required to complete it, other requirements, as well as the structure/flow of each one of the 7 VUCA Learning Modules, which were deployed during the second pilot round documented in D6.4 and D6.5.
Figure 9: Overview of VUCA Learning Modules
Additional remark: the redesign also determined a new list of implementation tasks to be performed in order to update the prototypes to the new design guidelines described in D4.7 and to prepare them for the second pilot round (see the Tables in D5.2 and D5.3 concerning the additional implementation work to be completed). The implementation of this work is documented in D5.2 and D5.3.
Designing Workshops and Curricula (T4.5)
In the Deliverable 4.5 “Interim Workshops and Curriculum Design” we have presented our plans for a first deployment of the LEILA Learning Experiences to be validated in the Pilot Rounds. The insights gained through the first round of pilots stimulated a major redesign of the prototypes (described in D5.2) which were enhanced and re-structured in preparation for the second extensive pilot round documented in D6.4.
The redesign of the prototypes also had a number of implications for the deployment dimension of the 2 LEILA Learning Experiences, and particularly for the VUCA Learning Experience, bringing them to a very high level of deployability in many different contexts: from Serious Games deployed to support learning for individuals operating in a stand-alone mode, to Serious Games involving large groups of learners over a long period of time, run either with traditional, co-located groups or with distributed learners. These developments and enhancements are documented in the final deliverable D4.8, with a chapter dedicated to the deployment of the VUCA Learning Experience (and its different Modules) and one on the deployment of the LabRint Learning Experience.
Figure 10: Flexible Deployment of the LEILA Serious Games
Finally, we have included in D4.8 an overview of training standards for LEA Intelligence Analysts and their implications for the deployment of both LEILA Learning Experiences. The last section of the deliverable reports on the deployment perspectives and plans of one of our partners, NDU.
Overall, throughout the duration of WP4 we have significantly enhanced the potential and opportunities to deploy the LEILA Learning Experiences in different contexts (including non-LEA deployment opportunities), gradually integrating the feedback gathered from the pilot experiences to increase deployment flexibility without compromising the learning value generated.
Serious games implementation (WP5)
Main activities
The objective of WP5 was the implementation of a set of serious games, designed as part of the Intelligence Analysts Learning Experiences, to enable trainees to acquire the skills and competencies required by their role. While the design of the games was performed in WP4, WP5 was devoted to the implementation, integration and deployment of all the different components needed for the games' development.
The implementation process was driven by a co-creation approach, involving the relevant stakeholders (learning experts, IA training experts, game designers, game developers, end users) in all the phases of the development. This approach ensured a multidisciplinary design of the games (and the Learning experiences) together with the iterative evaluation and piloting of the prototypes that have generated feedback, guidelines and additional contents to fine tune the scenarios, dynamics and elements.
As previously mentioned, the LEILA Learning Experiences target 24 critical competences that were selected based on the end-user requirements and the conceptual foundations described in WP3. The games are the main interface for the learning process, and they are fully or partially embedded in the different modules that compose the LEILA Learning Experiences.
In addition, to provide intelligence analysts, trainers and tutors with a scalable set of games, a modular deployment package has been developed.
The games are very flexible and have been designed to allow (see the illustrative sketch after this list):
• the design of new Learning Experiences by changing the scenario and the contents
• the customization of the Modules and the creation of new tools
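As a purely hypothetical illustration of this modularity, the fragment below sketches what a scenario/module configuration could look like; the field names and values are invented, and the actual configuration format of the LEILA games is the one defined in the WP5 deliverables.

    # Entirely hypothetical sketch of a scenario configuration, meant only to show
    # the kind of elements that could be swapped to create a new Learning Experience
    # (scenario, contents, embedded biases, timing). Not the real LEILA format.

    new_scenario = {
        "learning_experience": "LabRint",
        "scenario_name": "Harbour Smuggling Challenge",   # hypothetical new scenario
        "mission": "determine whether the reported shipments support hypothesis XYZ",
        "tasks": [
            {"phase": "Collection", "time_limit_min": 10},
            {"phase": "Analysis", "time_limit_min": 15,
             "embedded_biases": ["confirmation bias", "anchoring bias"]},
        ],
        "debriefing_notes": "teaching-notes-harbour.pdf",
    }

    def validate(config):
        """Check that the minimal fields assumed by this sketch are present."""
        required = {"learning_experience", "scenario_name", "mission", "tasks"}
        return not (required - set(config))

    print(validate(new_scenario))   # -> True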
Main scientific results
There is increasing interest in the research community and in the IA community in exploring game-based approaches for IA training, especially in the USA, Canada and the UK. Some of these approaches are collected in (Lahneman & Arcos, 2014). It appears that serious games have great potential for training analysts in the softer and intangible skills (e.g. cognitive biases, critical thinking, broader reasoning strategies) that are difficult to formalize, rather than in the more operational skills. Progress has been made on addressing cognitive biases through serious games for critical thinking (Flach et al. 2012), and more generally the use of serious games in the “softer” dimensions relevant to the needs of the analyst may also contribute to consolidating this adoption. According to researchers at the College of Information Sciences and Technology, Penn State (Kretz & Granderson, 2013; Kretz, Simpson, & Graham, 2012), game playing may help intelligence analysts identify biases that can cloud decision making and problem solving during life-or-death situations in IA. In addition, through games participants can learn how to mitigate cognitive biases (Mersch et al., 2013). Dunbar and colleagues (Dunbar, Miller, et al., 2013; Dunbar, Wilson, et al., 2013) and (Mersch et al., 2013) demonstrated that game-based approaches can effectively lead to the mitigation of some cognitive biases in IA practice, like the confirmation bias and the fundamental attribution error. Under the SIRIUS programme, several games like MACBETH, HEURISTICA, MISSING and CYCLES have been funded to investigate the effectiveness of game-based approaches to train bias awareness and mitigation.
Initiatives such as the IARPA SIRIUS research programme (IARPA 2011), specifically aimed at investigating the effectiveness of game-based approaches to train bias awareness and mitigation (i.e. the games MACBETH, HEURISTICA, MISSING and CYCLES), or the graduate programme in Intelligence Analysis at Mercyhurst College, promote a wider adoption of serious games to train intelligence analysts. Still, few European research projects address the use of IT tools and Serious Games in intelligence analysis training; examples include the games developed in the context of the L4S - Learning for Security Project (SEC/ICT – 225634).
The LEILA approach makes good use of all of these experiences with serious games by addressing the cognitive needs of intelligence analysts as well as all the enablers that allow them to work better, individually and in a team, so as to move efficiently from good to right solutions of their intelligence analysis problems.
The science and technology supporting the LEILA Games include a variety of components, corresponding to the diversity of expected learning outcomes that contribute to defining how intelligence analysts should think. Thus, cognitive biases are addressed at the individual level by cognitive and experimental psychology and at a more collective level by anthropology. Likewise, critical thinking has been introduced through the above fields but also through principles of logical reasoning (essentially propositional logic), and conclusions are supported by a combination of inference schemes stemming from the application of formal models of logic and decision-making tools, essentially a particular field of Game Theory called Games of Deterrence, which has already been used in modelling and assessing argumentation.
The knowledge produced in WP2 introduced concepts like the thinking disposition (the ability to think creatively and critically) that have been integrated into the scenarios and elements of the game-based experience.
These scientific roots, related to the Inference Scheme resolution engine developed under WP3, operate at the “back end” level, which means that they are fully integrated into the LEILA Learning Experiences and games. At the “front end” level, technologies classically pertaining to the design and development of serious games, such as storytelling, system architecture, programming, immersive reality, NPC development and graphical user interfaces, have been used.
During the LEILA project, ad hoc serious games have been designed and developed to support the training of the competences needed to address intelligence analysis tasks properly, for instance by allocating the right timing and attention to the genuinely relevant issues in typically stressful, “noisy” environments, where even collaboration among team members can be difficult because of time pressure and of different, sometimes hidden, agendas (trust is at stake), and where the prioritisation of tasks and the identification of key issues may confront intelligence analysts with difficult decisions in terms of problem escalation, empowerment, self-confidence and delegation. This is fully in the spirit of Sickels' (2009) findings, which envisage the move of the intelligence analysis community towards a fully collaborative enterprise model, as analysts can no longer work in isolation and need to be provided with the means and tools for developing adequate competences.
In terms of scientific and technological results, two games have been developed as part of the LEILA Learning Experiences:
• LabRint - providing a unique opportunity to improve:
o Rational Thinking
o Thinking disposition
o Creative Attitude and Open Mindedness
o Awareness and Mitigation of the most relevant Cognitive Biases
Under a general mission (typically “analyze and determine whether the presented event is connected with the XYZ hypothesis”), the learner is engaged in a series of subsequent tasks under time pressure. The tasks address specific competences of the intelligence cycle process and embed cognitive biases designed to trap the player into incorrect reasoning and conclusions.
The game play provides tools and spaces to improve the analysis tasks, empowering the player with an innovative environment supporting the acquisition of the three competences identified in WP2 and WP3 as relevant for IA practice, as well as the awareness and mitigation of cognitive biases.
The LabRint game environment is designed in order to support experiential learning and reflection. The game is structured to enable fast design of additional scenarios. During the project, two scenarios have been implemented: The Brossua Challenge and The Cyberint Challenge.
Figure 11: LabRint game
• VUCA - Addressing and gaining new insights into Critical Competence Areas related to Individual & Collective Intelligence Analysis, relevant to professionals operating in LEA and non-LEA contexts:
o Operating Effectively in VUCA Contexts (VUCA: Volatility, Uncertainty, Complexity, Ambiguity)
o Understanding and Developing Critical Competences like:
▪ Quality & Calibration when operating with Estimates (e.g. for Risk Assessments)
▪ Intelligence Analysis and Decision Traps when operating under Time Pressure
▪ Collective Intelligence in Diverse and Distributed Teams
▪ Critical Behaviours when operating in Teams Across Boundaries (organizational, national)
▪ Collaboration Barriers (individual, organizational) and How to Address Them
Figure 12: VUCA game
VUCA consists of 7 independent but interconnected Learning Modules:
A first set of VUCA Learning Games focuses on Individual Performance in Intelligence Analysis:
o Estimate Quality (EQ) Game
o FIFA Soccer World Cup Security Crisis Scenarios
o VUCA WhatADay! Simulation Game
A second set of VUCA Learning Games focuses on Team and Organizational Performance in Intelligence Analysis:
o WhatATeam! Simulation Game
o Team Values & Performance Game
o 9/11 Collaboration Barriers Game
In addition, the LEILA VUCA Web 2.0 Playground provides a state-of-the-art platform for accessing selected resources (people as well as Knowledge Assets) related to Individual & Collective Intelligence Analysis and the 24 Critical Competences addressed by the VUCA Games.
The interest in the serious games developed in the LEILA project, combined with the feedback gathered systematically during the pilot rounds, has provided strong evidence that we have been able to identify genuinely critical competences to address with our Learning Modules.
Evaluation and Piloting (WP6)
Main activities
Piloting activities were designed and executed in order to evaluate, redesign, implement and move towards the validation of the final components created in the LEILA project. More specifically, two pilot phases, Pilot A and Pilot B, were organised, including both multiple traditional workshops and online pilot activities. Based on the adopted evaluation methodology (described in D6.1), we employed the following tools for each pilot phase:
• Pilot A: Two questionnaires of 15 questions each were designed, one for LabRint and one for VUCA. Following the design of these questionnaires, the pilot metrics and KPIs were defined. For each metric and its KPI, a threshold value was determined as the minimum that had to be achieved by the average of the participants’ responses to signify success on that metric; this was done by associating each KPI with one or more questions of each questionnaire. Furthermore, each of the VUCA learning experiences had an additional, embedded questionnaire (more information can be found in D6.2). A simplified illustration of how such KPI checks can be computed is sketched after this list.
• Pilot B: Minor changes were made to the questionnaire used in Pilot A, while the metrics and KPIs remained identical. This questionnaire was used to evaluate LabRint: The Brossua Challenge (first scenario). For the evaluation of the second scenario, The Cyberint Challenge, the Core Elements of the Gaming Experience (CEGE) questionnaire, tailored to our learning experiences, was used. The main factors examined by the CEGE, namely Enjoyment, Frustration, CEGE, Puppetry (consisting of Control, Facilitators and Ownership) and Video-game (consisting of Environment and Game-play), acted as KPI metrics (more information can be found in D6.4 & D6.5).
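The sketch below shows, in simplified form, how a KPI score can be computed from 5-point Likert responses and compared against its threshold. The question-to-KPI mapping, KPI names and threshold values used here are invented for illustration; the actual metrics and thresholds are defined in D6.2.

```python
# Hypothetical sketch of KPI scoring from 5-point Likert questionnaire responses.
# KPI names, question mappings and thresholds are illustrative assumptions only.

from statistics import mean
from typing import Dict, List

# Each participant's answers, indexed by question id (values 1..5).
responses: List[Dict[str, int]] = [
    {"Q1": 4, "Q2": 5, "Q7": 3},
    {"Q1": 5, "Q2": 4, "Q7": 4},
    {"Q1": 3, "Q2": 4, "Q7": 5},
]

# Each KPI aggregates one or more questions and has a minimum threshold.
kpis = {
    "Relevance to IA cycle": {"questions": ["Q1", "Q2"], "threshold": 3.0},
    "Ease of navigation": {"questions": ["Q7"], "threshold": 3.5},
}

for name, spec in kpis.items():
    # Average per participant over the KPI's questions, then across participants.
    per_participant = [mean(r[q] for q in spec["questions"]) for r in responses]
    score = mean(per_participant)
    status = "met" if score >= spec["threshold"] else "not met"
    print(f"{name}: {score:.2f} (threshold {spec['threshold']}) -> {status}")
```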
Throughout the lifecycle of the project, 8 pilots for LabRint and 13 for VUCA were organised, including multiple workshop-like pilots in 4 EU countries and online pilot activities, with more than 400 LEA and non-LEA participants.
Table 1: LEILA - Pilot Activities
Main scientific results
The outcome of this WP is presented in two steps: first, Pilot phase A, in which participants evaluated the prototypes while providing fruitful comments and suggestions, based on which the second version of the Learning Experiences was designed and then evaluated during Pilot phase B.
During Pilot A, 67 pilot users/trainees in total experienced the LEILA Learning Experiences (prototypes) developed in WP5. The qualitative and quantitative analysis was divided into two main groups (different data sets) based on the learning experience pilot in which they participated: LabRint and VUCA. The analysis revealed that the prototypes were well received by both LEA and non-LEA participants; more specifically:
LabRint analysis key takeaways:
• 19 Intelligence Analysts participated in Pilot A.
• Most participants were male (more than 70%).
• The majority of the questions (14/15) achieved more than a 60% agreement rate (easy-to-follow navigation, selected scenario reflects the role and impact of individuals in the IA cycle, the Learning Experience is understandable by a trained IA, etc.).
• After the end of the pilot, participants reported a better understanding of how to combine and analyze information, how to handle various types of data, how to avoid both over-reliance on and neglect of prior information, and how to generate hypotheses, predictions and conclusions.
• All KPI metrics exceeded the baseline values by at least one unit on a 5-point scale (Figure 13).
• Female participants considered LabRint more relevant to the IA cycle and processes and more applicable to everyday IA demands.
• Younger age groups (31-40) tended to consider LabRint efficient in terms of synthesising, merging, evaluating and organising information, compared with older age groups (41-50).
Figure 13: LabRint - Pilot A KPIs
Even though the quantitative analysis was very positive, the qualitative analysis showed that there was room for improvement for both LabRint and VUCA. The feedback received was the cornerstone of the second version of both learning experiences, as described thoroughly in WP4 and WP5.
VUCA analysis key takeaways:
• 47 LEA and non-LEA participants experienced all 5 modules during Pilot A.
• More than 50% were between 31 and 40 years of age.
• More than 70% were male, while 26% were female.
• The majority of the questions (14/15) achieved more than a 60% agreement rate (different learning objectives were successfully divided into manageable learning modules, the reflected situation may occur in IA practice, the Learning Experiences support different levels of experience, etc.).
• All modules were highly rated based on the embedded questionnaires (Figure 14).
In depth information and the complete qualitative and quantitative analysis regarding Pilot A can be found in D6.3.
Figure 14: Pilot A - Evaluation of WaT & WaD
During Pilot B, 376 pilot users/trainees in total experienced the LEILA Learning Experiences developed after the second phase of development. The analysis was divided into two main groups (different data sets) based on the learning experience pilot in which they participated (LabRint and VUCA), in order to better understand, analyse, evaluate and validate the data collected throughout Pilot phase B.
In depth information and the complete qualitative and quantitative analysis regarding Pilot B activities can be found in D6.5.
LabRint - The Brossua Challenge results:
The Brossua scenario was evaluated by LEA and Non-LEA IA in Greece, Italy and France.
The analysis of the questionnaires showed that the second version of the learning experience was well received by the participants; more specifically:
• The majority of the questions (10/15) achieved more than a 60% agreement rate (navigation follows a structured process, it is easier to manage information when big datasets are broken down into structured elements, the selected scenario reflects the role and impact of individuals in the IA cycle, the Learning Experience targets both novice and expert users, etc.).
• All KPI metrics exceeded the baseline values by at least half a unit on a 5-point scale (Figure 15).
• After the end of the pilot, participants reported a better understanding of how to combine and analyze information, how to organise collected data, and how to avoid relying too heavily on past references and information.
• Statistical inference tests showed that most of these results were significantly positive.
Figure 15: LabRint - Pilot B KPIs
Based on the outcome of the statistical tests, the thresholds set for all KPIs were surpassed, meaning that participants have a positive view of the LabRint learning experience, which is also confirmed by the feedback received at the end of the pilots. A correlation analysis of the KPIs against analyst status (whether the participant is a LEA IA or non-LEA IA) showed that non-LEA IA tend to believe more strongly than LEA IA that the LabRint learning experience can be applied to the everyday demands of IA.
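As an illustration of the kind of statistical checks described above, the sketch below tests whether KPI scores exceed a threshold and whether two participant groups differ. The specific tests shown (a one-sample t-test and a Mann-Whitney U test) are plausible choices assumed for illustration; the report does not state which inference tests were actually applied, and the data here are simulated.

```python
# Hypothetical sketch of statistical checks on KPI scores.
# Test choices and all data are illustrative assumptions, not LEILA results.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated KPI scores on a 5-point scale, and a threshold to beat.
kpi_scores = rng.normal(loc=4.0, scale=0.5, size=40).clip(1, 5)
threshold = 3.0

# Is the mean KPI score significantly above the threshold?
t_stat, p_value = stats.ttest_1samp(kpi_scores, popmean=threshold, alternative="greater")
print(f"one-sample t-test: t={t_stat:.2f}, p={p_value:.4f}")

# Do two participant groups (e.g. LEA vs non-LEA) rate applicability differently?
group_a = rng.normal(loc=3.6, scale=0.6, size=20).clip(1, 5)
group_b = rng.normal(loc=4.1, scale=0.6, size=20).clip(1, 5)
u_stat, p_value = stats.mannwhitneyu(group_b, group_a, alternative="greater")
print(f"Mann-Whitney U: U={u_stat:.1f}, p={p_value:.4f}")
```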
LabRint - The Cyberint Challenge results:
The second scenario of the LabRint learning experience was validated by 29 trainees/participants, 25 of whom were Romanian and Hellenic LEA IA. The evaluation questionnaire used was a tailor-made adaptation of the “Core Elements of the Gaming Experience”, adjusted to better address the needs of the project.
The analysis of the questionnaire confirmed that LabRint was highly rated by the participants. More specifically:
• All negatively phrased questions, 7 in total (e.g. “I did not like LabRint’s scenario”), had more than a 70% disagreement rate.
• 18 out of 22 positively phrased questions had more than a 60% agreement rate.
• The analysis of the KPI Metrics further validated LabRint, as the mean of the enjoyment KPI was 6.07 and the overall CEGE questionnaire was 5.74 (out of 7 in both cases) (Figure 16).
• Participants from Romania scored significantly higher than Greek participants in Enjoyment.
• LEA IA scored significantly higher than non-LEA IA in the Environment assessment of LabRint, while there is no statistically significant difference for the rest of the KPIs.
• No statistically significant KPI differences can be identified between genders and between different age groups.
Figure 16: Mean of the overall questionnaire - CEGE
The overall feedback received during the round-table discussion was valuable and validated the effectiveness of LabRint:
“The LabRint game addresses some important skills required for the analysis of large sets of information (e.g. the ability to distinguish relevant from irrelevant information, pay attention to details, evaluate the link between information and hypotheses, etc). As such, I see the game as a potentially valuable tool for training intelligence analysts working in different sectors”
“It was a good experience and it helps for learning to understand and distinguish the important infos from the useless about the case we face”
“I strongly believe that the concept and LabRint itself was worthwhile. It would be interesting if we could do it in groups/teams. It would be beneficial if we could continue training using LabRint in organization or individual level”
“Challenging and stimulating experience. I think its major educational value is the fact that it induces the player to concentrate on relevant details as well as to structure the available information before making the assessment”
VUCA results:
During Pilot B, the VUCA EQ, CB, WaD and WaT modules were piloted and evaluated by LEA and non-LEA participants.
• The EQ Game Module validated the importance of the “traditional competences” related to VUCA, as well as the module’s efficiency in terms of time and its effectiveness in terms of the learning value generated. Furthermore, we were also able to test and confirm that EQ can be modified to address the specific needs of the organisation using it (e.g. LEA IA, telecom) and can be deployed 100% online.
• The CB Game was highly appreciated, based on the quantitative feedback and qualitative comments from both LEA and non-LEA participants.
• The VUCA WaD module was also highly appreciated by both LEA and non-LEA participants (Figure 17 and Figure 18).
Figure 17: VUCA WaD LEA qualitative feedback
Figure 18: VUCA WaD non-LEA quantitative feedback
The VUCA WaT module was highly rated by LEA users during Pilot A, while the main criticism concerned the length of the deployment (ideally 7 weeks). During Pilot phase B, we therefore also tested the effectiveness of the module when deployed over just 3 weeks of online exchanges, followed by a traditional/onsite 1-day debriefing session:
• The WaT module was highly rated by the participants, as Figure 19 shows, even though the length was significantly reduced; the mean value surpassed 4 out of 5.
Figure 19: WaT - 3-week deployment feedback
The evaluation of the deployment of the module over a 7-week period, with 3 integrated webinars of 1.5 hours to debrief the 3 dilemmas of the VUCA WaT Simulation Module with a group of crisis managers responsible for managing teams of distributed collaborators, shows that the module and this kind of training are highly rated, as the qualitative feedback below illustrates:
“Sky is the limit - I mean this is great experience. The scenarios provided are perfect examples. They are open and made me think. Overall I am satisfied with this experience”
“learn how to bond team's idea and communicate as one single answer. experience how to form teams and adhere them into a single objective manage time zones use actual resources provided by company acceptance of hard topics and its final decision”
“The interaction by team members contributing to reaching a concensus to solve a problem. Teamwork is the key!”
Potential Impact:
Potential impact (including the socio-economic impact and the wider societal implications of the project so far) and the main dissemination activities and the exploitation of results
The potential impact
The potential impact of the project on its primary target audience, namely LEA, defense, public order and national security training, as well as the educational system, is achieved through the developed Learning Experiences, the methodologies created for improving educational curricula, and the synchronisation and harmonisation between the requirements of LEA IA, crisis managers and decision makers coming from fields such as the Critical Infrastructure sectors, which constitute the EU’s backbone of security, health and economy. As a consequence, we have identified three channels through which the project’s impact is achieved:
1. Through the adoption of the project’s Learning Experiences for training purposes
2. Through the adoption of the created methodologies
3. Through the awareness of the project results by National and EU stakeholders
Furthermore we have identified the following groups of interest:
Primary group of interest: This group encompasses the LEAs with the mission to protect society and prevent terrorist incidents and the IA community.
Secondary group of interest: Groups identified in the public and private sector (e.g. other first-responder organisations, the Critical Infrastructure sector) that were interested in LEILA’s solutions and can utilise the Learning Experiences and methodologies to enhance their training.
Tertiary group of interest: This group includes organisations whose expertise is complementary to that of the previous groups and can therefore influence their operation. These groups include researchers, IA trainers and course designers, academia, security experts, and organisations currently engaged in related EU or other projects.
Table 2: LEILA's Groups of Interest
Dissemination activities
The goal of the dissemination strategy is to achieve awareness and impact not only in the 4 countries involved in the project (Greece, France, Italy and Romania), but also in additional target countries, by leveraging the partners’ networks and through participation in events, pilots and dissemination meetings.
Figure 20: LEILA's Geographical Impact
Dissemination progress at a glance
LEILA project Website
The website of the project was up and running from June 2014 and has received more than 2,200 visits since its creation. The website disseminated the project’s progress and results: all approved deliverables were accessible, and the News section was updated following project events. Dedicated sections have been created, including not only information on the LEILA Learning Experiences but also access to LabRint and the respective User Manual, as well as sections presenting the project’s publications and public documents, in order to maximise the impact of the LEILA Learning Experiences.
Figure 21: The LEILA website
Leaflets
Leaflets describing the project’s goals and aims and, most importantly, detailed information on the Learning Experiences, namely VUCA and LabRint, as well as the LEILA Playground, have been created and distributed at every event LEILA has either organised or participated in.
Promotional Video
To promote the project, a video was produced summarising the results and the pilots and presenting the LEILA learning experiences. After the final event, a new release of the video was produced, containing the final event video and related contents.
Figure 22: The LEILA video
Pilots
Considering the very specific target beneficiaries of the LEILA research (LEA IA), the best way to promote the project’s outcomes is through live trials where they can test (and co-design) the Learning Experiences. The most relevant key players and stakeholders in the domain have been involved in several countries. The continuous engagement of the users was very productive, not only in terms of co-design, but also as an occasion to raise awareness and promote the adoption of these innovative solutions in their training practice.
Figure 23: LEA and non LEA IA during the LEILA trials
Publications
With regard to scientific publications, 3 peer-reviewed publications have been produced and 4 scientific papers have been accepted at conferences and published as part of the proceedings, as well as 1 book and 4 articles/book chapters (Table 3).
Table 3: LEILA Publications
Other activities
In addition to the aforementioned activities, 36 other dissemination activities have been recorded, including project presentations, participation in conferences, expos and info days, press releases and pilot demonstrations. Furthermore, more than 50 targeted networking activities (e.g. personal meetings, Skype calls and email exchanges) were carried out to raise awareness and disseminate LEILA’s results to selected individuals. To that end, the majority of researchers and stakeholders in the domain have been identified and informed of or involved in the project’s phases.
Figure 24: LEILA results and demos of the learning experiences were presented during several meetings with teachers and training professionals from the UK, Austria, France, Romania, Portugal, Greece, Italy, Norway, Slovakia, Slovenia, Germany, Estonia and Spain
LEILA’s final event
During LEILA’s closing event, the final version of “LabRint: The Brossua Challenge”, the VUCA Estimate Quality (EQ) module and the latest version of LEILA’s Playground were presented. Fifteen participants from different divisions of the Hellenic Police, as well as security experts from the Aegean University, had the opportunity to train with LabRint and VUCA EQ. Furthermore, at the end of each session, participants were engaged in discussions about their experience and thoughts.
Figure 25: LEILA's closing event
List of Websites:
http://leila-project.eu/
Kavallieros Dimitrios (KEMEA): d.kavallieros@kemea-research.gr
George Leventakis (KEMEA): gleventakis@kemea.gr
Albert Angehrn (Alphalabs): Albert.ANGEHRN@insead.edu
Louis Ferrini, Susanna Albertini (FVA): fvaweb@tiscali.it
Michel Rudnianski (ORT): michel.rudnianski@wanadoo.fr
Iulian Martin (NDU): imartinwork@yahoo.com
Alexander Cappos (Globo): acappos@globogr.com
Alessandro Zanasi (Z&P): alessandro.zanasi@zanasi-alessandro.eu