Architecture for the Recognition of thrEats to mobile assets using Networks of multiple Affordable sensors
The objective of ARENA is to develop methods for automatic detection and recognition of threats, based on multisensory data analysis. Research objectives include:
o To robustly and autonomously detect threats to critical mobile assets in large unpredictable environments.
o To reduce the number and impact of false alarms towards optimized decision making.
o To demonstrate automatic threat detection for the land case (truck).
o To demonstrate an integrated, scalable and easy to deploy monitoring system.
o To assess automated threat detection for the land case (train) and the maritime case (vessel, oil rig).
o To evaluate detection performance and contribute to standards.
o To respect and respond to social, legal and ethical issues arising through the design, implementation and deployment.
ARENA will investigate different platforms including trucks, trains, vessels, and oil rigs (with the real demonstration focused on trucks). This will allow assessing the level of similarities between different cases and applications.
ARENA has a stakeholder group which consists of representatives from the land case and the maritime case. The stakeholder group will play a pivotal role in the user requirements, threat analysis, scenario definition, evaluation and demonstration.
Coordinator contact: Åsa Waern (Ms.)
164 90 Stockholm
Activity type: Higher or Secondary Education Establishments
EU contribution: € 768 825
Participants (EU contribution):
• BMT GROUP LTD: € 415 526
• ITTI SP ZOO: € 310 500
• SAFRAN ELECTRONICS & DEFENSE: € 490 459
• IDEMIA IDENTITY & SECURITY FRANCE: € 167 604
• NEDERLANDSE ORGANISATIE VOOR TOEGEPAST NATUURWETENSCHAPPELIJK ONDERZOEK TNO: € 498 878
• THE UNIVERSITY OF READING: € 463 408
• PRO DOMO SAS: € 63 561
Grant agreement ID: 261658
Start date: 16 May 2011
End date: 15 May 2014
Total cost: € 4 861 867,60
EU contribution: € 3 178 761
Surveillance system to automatically detect security threats on land and at sea
Final Report Summary - ARENA (Architecture for the Recognition of thrEats to mobile assets using Networks of multiple Affordable sensors)
The ARENA project, funded within the 7th Framework Programme (FP7), has designed a flexible surveillance system concept for the automatic detection and recognition of threats, intended for deployment on mobile critical assets/platforms such as trucks, trains, vessels and oil rigs.
ARENA has been developed specifically to address the concerns posed by the growing threats of piracy, hijacking and theft on board mobile platforms. ARENA provides a sensor-based surveillance system concept that will provide early identification and evaluation of incoming threats using multi-sensory data analysis from sensors attached to the assets themselves.
The surveillance model robustly and autonomously detects threats to critical mobile assets in large unpredictable environments, including: platforms stationary relative to land, such as a truck or train at a stop; platforms stationary relative to the sea, such as ships in port or oil rigs; platforms mobile relative to land, such as trucks or trains in transit; and finally platforms mobile relative to the sea, such as ships at sea or support vessels around an oil rig.
The ARENA project is presented in a film which can be found on the project website: https://www.ARENAfp7.eu
Project Context and Objectives:
The ARENA project was initiated to design the architecture of a system that will be able to use different methods of data fusion to combine data collected from various sources, providing more reliable results.
The objective of ARENA has been to develop methods for automatic detection and recognition of threats, based on multisensory data analysis. The specific research objectives were:
- robustly and autonomously detect threats to critical mobile assets
- reduce the number and impact of false alarms towards optimized decision making
- demonstrate automatic threat detection for the land case (truck)
- demonstrate an integrated, scalable and easy to deploy monitoring system
- assess automated threat detection for other cases (train, vessel, oil rig)
- evaluate detection performance and contribute to standards
- respect and respond to social, legal and ethical issues associated with the monitoring.
The project defined seven use cases, which were evaluated by the end-users in an end-user workshop.
The evaluation showed that two use cases covered most of the end-user requirements.
Those use cases, a truck at a parking lot and a ship under piracy attack, were further developed into scenarios which formed the basis for the development of the ARENA concept. For practical reasons, the truck scenario was given the highest priority and was the target for the final demo. The goal of WP3 was to create a specification of the system to allow implementation and further development of an integrated, scalable, affordable and easy-to-deploy detection system that can be used for a broad range of scenarios and missions related to the critical assets, according to user needs and using the most suitable technologies.
A generic system architecture has been developed for the ARENA concept. Algorithms for detection, tracking and recognition of threats in the truck scenario have been investigated, further developed and tested. Two data collection campaigns have been performed to collect testing data that have been used during the development of the generic architecture as well as for the development of specific algorithms. A third data collection campaign was performed close to the final demo to make final adjustments and an evaluation of the concept.
The ARENA concept is a generic surveillance system in the sense that the project has developed a generic architecture. The architecture has been implemented in a testbed to which all modules/algorithms have been connected. The final demo showed a setup with visible and infrared sensors connected to the testbed, with models for detection, tracking and threat recognition. The testbed was connected to a Human-Machine Interface (HMI) which showed the ARENA results. Figures 4 and 5 show the final system setup; the HMI can be installed either on a desktop, to be used at the truck headquarters, or on a smartphone, to be used by the driver.
The work was subdivided into nine work packages (WPs) to achieve the overall goal.
2.1. WP2 Threat analysis and User requirements
WP2 investigated the end-users' requirements. This was done through interviews and a requirements workshop with the stakeholders. The WP developed seven use cases, which were prioritized by the stakeholders.
Use case 1: “Cargo theft detection while parked”
A truck is parked at a non-secured parking lot. The ARENA truck system detects people loitering near the vehicle, followed by the actual opening of the cover to look inside the vehicle.
Use case 2: “Truck in motion”
A truck is in motion between locations. The truck’s onboard GPS tracking system reports its location to the ARENA-headquarters system. During the drive, nefarious individuals interfere with the truck’s GPS tracking system, causing it to report a location that is different to the actual location. The ARENA-system detects a discrepancy between the locations.
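The discrepancy check in this use case amounts to comparing the reported GPS position with the actual position against a distance threshold. A minimal sketch (the function names and the 1 km threshold are illustrative, not taken from the project):

```python
import math

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two WGS-84 points."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def gps_discrepancy(reported, actual, threshold_km=1.0):
    """Raise a spoofing alert when reported and actual positions diverge."""
    return haversine_km(*reported, *actual) > threshold_km
```

In a real deployment the threshold would have to absorb normal GPS noise and reporting latency, so it would be tuned per scenario rather than fixed.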
Use case 3: “Cruise ship in the port”
A cruise ship makes an overnight stop in a port situated close to a larger city.
Two persons try to come onboard from the landside to steal the passengers’ belongings and/or to kidnap the crew.
Use case 4: “Piracy attack on ship at sea”
An oil tanker is sailing through the Gulf of Aden (known as “Pirate Alley”).
A group of pirates has captured another large vessel and is using it as a mothership to increase the range of their attacks. The pirates launch 2 skiffs (small, fast vessels) from the mothership, each carrying 3 pirates. Their aims are to board the tanker, take the crew hostage, and demand a ransom for their release.
Use case 5: “Hijacking of trains or service vehicles and hostage taking”
A train is in service. A terrorist succeeds in entering the driver's cabin, forces the driver to let him drive the train and forces him to leave the cabin. The ARENA security camera takes a picture of the new driver and tries to match his face against a database of authorized drivers. The face is not recognized.
Use case 6: “Oil rig terrorist attack”
An Oil rig is drilling in North Sea.
A small vessel approaches the oil rig. No AIS signal is sent by the vessel. The ARENA system alerts the oil rig security officer, who follows the vessel.
Use case 7: “Container”
Container with the cargo is transported by road and/or sea. The possible threats are:
• theft of the cargo and truck,
• theft of the cargo,
• devastation of the cargo,
• breaking into the container and smuggling prohibited goods,
• hold-up of the truck (especially important when transporting goods with a short shelf life),
• spoilage of the goods due to poor transport conditions (transport time, too rapid speed changes),
• sinking of the container.
Use cases 1 and 4 received the highest priority and were further developed into scenarios. Those scenarios were used as the framework for the development of the ARENA concept. Moreover, the truck scenario from use case 1 was refined and used as the scenario for the data collection campaigns and the final demo.
WP2 was completed in 2012 with two deliverables:
• D2.1 Overview Report
• D2.2 Scenario Report
2.2. WP3 Generic architecture
WP3 has been working on the generic architecture of the ARENA system.
The specification of the architecture has been based on the JDL Data Fusion model. This model divides the functionalities of a data fusion system into the following parts: object assessment, situation assessment, threat assessment and some additional technical parts, i.e. HMI (process refinement), databases, sensor pre-processing modules and integration components.
Requirements connected with the architecture were collected from a detailed analysis of the requirements presented in the description of work and collected from end-users. Moreover, the functionality of the system was also described by the scenarios. All this information was taken as input for the derivation of architectural requirements and a set of design principles, and the project team developed a high-level view of the ARENA architecture and exemplary deployment scenarios.
The ARENA Node consists of the following internal modules:
• ARENA Repository (AR) module responsible for storing and sharing data for internal and external modules. The structure of this database will be able to store data that are compliant with the low-level data model.
• Sensor Management (SM) module designed for managing the available sensors and sensor platforms and responsible for transferring gathered data to the ARENA Repository. This part of the system is not obligatory, because each sensor can connect to the Integration Platform itself and register as a data producer. However, using the SM module is preferable to managing connected sensors through the Integration Platform, because SM separates sensor management (mostly configuration and technical issues) from the Integration Platform and core ARENA functionalities.
• Security Management (SEC) module designed for managing security-related aspects such as certificate requests, revocation lists, logs etc. It can be considered the major part of the PKI (Public Key Infrastructure). The functionalities of this part of the system may therefore vary between roles: e.g. SEC needs to provide more functionalities and options for a Certification Authority than for a Certified Node.
• Object Assessment (OA) module responsible for combining location, parametric, and identity information in order to achieve refined representations of individual objects (e.g. trucks, persons, skiffs). The Object Assessment module performs the following functions:
o transforms sensor data into a consistent set of units and coordinates,
o refines and extends in time estimates of an object’s position, kinematics, or attributes,
o assigns data to objects to allow the application of statistical estimation techniques, and
o refines the estimation of an object’s identity or classification.
• Situation Assessment (SA) module focused on developing a description of the current relationships among objects and events in the context of their environment. Distributions of individual objects (defined by Object Assessment) are examined to aggregate them into operationally meaningful units (e.g. groups of intruders) and weapon systems. In addition, situation refinement focuses on relational information (i.e. physical proximity, communications, causal, temporal, and other relations) to determine the meaning of a collection of entities. This analysis is performed in the context of environmental information about terrain, surrounding media, hydrology, weather, and other factors. Situation refinement addresses the interpretation of data, analogous to how a human might interpret the meaning of sensor data. Both formal and heuristic techniques are used to examine, in a conditional sense, the meaning of Level 1 processing results. The main activities performed by ARENA in this module will be:
o Fusion of attributes to the tracks,
o Fusion of tracks that represent the same object,
o Estimation of relations between:
- Different tracks,
- Tracks and stationary objects.
o Modelling of the current environment, i.e. the relation between stationary objects in the scene (e.g. type of objects, vegetation, infrastructure, light conditions and weather conditions). Knowledge about the current environment will be used to understand what we can expect from the algorithms (e.g. risk for occlusion, low sensor performance),
o Relations are classified into specific events (e.g. merge, split, deviation from a predetermined route, approaching quickly or slowly, leaving quickly or slowly),
- The specific events are related to the scenarios that we will use in ARENA,
- The events will be decided in advance,
- Database of events.
o Semantic translation, i.e. a translation between the events in WP6 (Situation Assessment) to suitable semantics for WP7 (Threat Refinement).
• Threat Refinement (TR) module which projects the current situation into the future to draw inferences about enemy threats, friendly and enemy vulnerabilities, and opportunities for operations. Threat assessment is especially difficult because it deals not only with computing possible engagement outcomes, but also assessing an enemy’s intent based on knowledge about enemy doctrine, level of training, political environment, and the current situation. The overall focus is on intent, lethality, and opportunity. Furthermore, Threat refinement module develops alternate hypotheses about an enemy’s strategies and the effect of uncertain knowledge about enemy units, tactics, and the environment. Game theoretic techniques are applicable for this processing.
• Process Refinement (HMI) module responsible for the following functions:
o monitors the data fusion process performance to provide information about real-time control and long-term performance,
o identifies what information is needed to improve the multilevel fusion product (inferences, positions, identities, etc.),
o determines the source specific requirements to collect relevant information (i.e. which sensor type, which specific sensor, which database), and
o allocates and directs the sources to achieve mission goals. This latter function may be outside the domain of specific data fusion functions,
o provides interface for user (HMI) who can supervise the fusion and change system parameters and data stored in ARENA Repository.
• Ontology module responsible for providing optimal system parameters based on stored contextual data, a predefined set of rules and dynamically provided parameters. An ontology is a flexible tool which can be used to store various types of semantic information in a hierarchical and manageable structure. The Ontology module API allows ARENA components to define new classes, class instances and relations between them, and finally to add rules which can be used to run the reasoning engine. Results of the reasoning can be used to set optimal system parameters depending on e.g. environmental factors, type and quality of the data, historical results and more. The data repository component of the Ontology module can be used to store information about the algorithms used, their limitations, capabilities, risks or optimal parameter values under various conditions. The Ontology module communicates with the other modules via the ARENA bus service (an implementation of the ZeroMQ message queue).
These modules provide the following services:
• ARENA Repository Service (ARS) - access to ARENA Repository (get/search, store objects),
• Sensor Management Service (SMS) - defining/changing sensor parameters and the type of data that should be gathered by sensors; this module is also responsible for transferring data from external sources to the ARENA Repository,
• Security Management Service (SES) – responsible for handling events related to security mechanisms such as certificate expiration, setting configurations, security policy details etc.,
• Object Assessment Service (OAS) - defining/changing properties of OA algorithms,
• Situation Assessment Service (SAS) - defining/changing properties of SA algorithms,
• Threat Recognition Service (TRS) - defining/changing properties of TR algorithms,
• Ontology Access Service (OAS) – providing access to context data stored within the ontology module.
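The modules and services above communicate over a topic-based message bus (in ARENA, an implementation of the ZeroMQ message queue). Purely as an illustration of the pattern, a stdlib stand-in for such a bus might look as follows; the class name, topic string and message fields are hypothetical, not part of the ARENA specification:

```python
from collections import defaultdict

class ArenaBus:
    """Minimal in-process stand-in for a topic-based pub/sub bus."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        """Register a callback to receive every message on a topic."""
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        """Deliver a message to all subscribers of the topic."""
        for callback in self._subscribers[topic]:
            callback(message)

# e.g. the Ontology module publishing parameters requested by Object Assessment
bus = ArenaBus()
received = []
bus.subscribe("ontology/params", received.append)
bus.publish("ontology/params", {"algorithm": "tracker", "gate_distance_m": 5.0})
```

A real ZeroMQ deployment would add sockets, serialization and network transport, but the decoupling of producers from consumers is the same.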
The ARENA Node will interchange data with external actors/systems and provide services to them.
ARENA Node can be connected to the following external systems/modules:
• Sensor - devices providing observation data,
• External System - any external system that could provide data useful in ARENA Node or interested in data that could be provided.
ARENA Node will use the following external services:
• Sensor Observation Service (SOS) which provides observation data from sensors,
• Sensor Planning Service (SPS), which is responsible for scheduling the gathering of observation data from sensors and managing their capabilities.
ARENA Node will provide ARS service to the external systems and use OGC sensor services to gather data from them.
WP3 was completed in 2014 with three deliverables:
• D3.1 Architecture Specification - Preliminary version
• D3.2 Architecture Specification - Final version
• D3.3 Gap Analysis and Roadmap
2.3. WP4 Testbed and Integration
The aim of WP4 was to design and develop the integration platform, the “test bed”, to be used to validate the architecture design defined in WP3 and to allow a relevant demonstration for end users as proposed in WP2. The test bed includes a set of sensors and a hardware platform on which the software components from WPs 5, 6 and 7 and the data model from WP3 have been integrated.
The test bed is not a complete prototype of a future monitoring system but an experimentation platform for the processing chain: object assessment, situation assessment and threat recognition.
The test bed is made of:
• Communication network (and required middleware)
• Processing hardware
The final test bed contains the following cameras:
Axis 213 PTZ
The software modules (sensor management, object assessment, situation assessment and threat recognition) exchanged information through XML files.
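As an illustration of this kind of exchange, a track record might be serialized to and parsed from XML as sketched below; the element and attribute names here are invented for the example, since the actual ARENA message schema is not described in this report:

```python
import xml.etree.ElementTree as ET

def track_to_xml(track_id, x, y, label):
    """Serialize one track as a hypothetical ARENA-style XML message."""
    root = ET.Element("track", id=str(track_id), label=label)
    ET.SubElement(root, "position", x=f"{x:.2f}", y=f"{y:.2f}")
    return ET.tostring(root, encoding="unicode")

def xml_to_track(xml_text):
    """Parse the message back into a plain dictionary."""
    root = ET.fromstring(xml_text)
    pos = root.find("position")
    return {"id": int(root.get("id")), "label": root.get("label"),
            "x": float(pos.get("x")), "y": float(pos.get("y"))}

msg = track_to_xml(7, 12.5, -3.25, "person")
```

File-based XML exchange keeps the modules loosely coupled, at the cost of serialization overhead compared to a binary bus.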
WP4 was completed in 2014 with two deliverables:
• D4.1 Sensors for ARENA test bed
• D4.2 Integration Report
2.4. WP5 Object Assessment
WP5 dealt with object assessment, i.e. with selecting and adapting suitable algorithms for detection, tracking and feature extraction for the ARENA objectives.
The objectives of WP5 were:
• To select suitable algorithms for detection, tracking and feature extraction for the ARENA objectives (the focus was on the sensors that were selected for the simulation and demonstration in WP8)
• To use, as much as possible, existing algorithms from the literature as well as from the partners' earlier work and experience
• To identify necessary further development of the selected algorithms, especially where they can be made more robust to changes in the environment (e.g. weather, light conditions) and achieve a low false-alarm rate already at sensor level (reduction of the false-alarm rate is also considered in WP6)
• To perform the further development identified in the earlier steps
The work commenced with a review of the literature for relevant techniques on object detection and tracking for the defined ARENA scenarios. This resulted in the production of a literature review document as a precursor for Deliverables D5.1 and D5.2.
The work package has focused on challenges related to the two primary scenarios. In either case, there are a number of objects that must be detected, described, and ultimately, tracked, such that subsequent processing layers will be in a position to make reasoned decisions regarding the state of the environment and the potential presence of threats.
In the maritime scenario, the detection of large vessels, probably at some significant distance, is of interest in that pirates are known to make use of stolen large ships as their “mother ships”: mobile long-range bases from which they can launch attacks on shipping routes. The ability to detect the presence of large ships would primarily be possible through the traditional tools of radar and AIS data; however, this could be augmented by using visual or thermal sensors, particularly where the images produced by those sensors can be used to classify or identify the detected ship. Detected ships whose visible appearance is not consistent with an identity broadcast through the AIS system, or whose visible appearance hints at an identity listed on a pirate blacklist, would be a significant indication of potential danger.
Normally these mother ships will not themselves perform an attack, rather serve as a staging point. The actual attack is more likely to be performed using small, fast and manoeuvrable “skiffs” or launches. These boats are typically less reliably detected by traditional radar based sensors, but may be detectable with visible light or thermal cameras. An ability to detect and track the activities of small boats would provide the possibility to prepare the crew of the target vessel against possible attack, even if only a few brief minutes were available. As such, detecting these boats and identifying their behaviour as threatening would be of significant value.
Turning to the truck scenario, it is instantly clear that the primary objects of interest are people, and the task is to determine the behaviour of people in the vicinity of the parked truck in order to establish whether they pose any potential threat to the truck, its cargo, or its driver. It may also be the case that suspicious individuals could be associated with other vehicles in the environment, whether the car in which the person arrived or departed, or other trucks present in the scene. To identify the activities of people in the scene, it will be necessary to first detect the presence and location of those people, and also to track them as they move through the scene or recognise any simple actions that they might perform, particularly in close proximity to the truck.
For the land based scenario, it was determined that existing detection techniques involving background subtraction, optical flow and model-based vision are sufficient to identify the presence of the primary objects of interest.
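The core idea of background subtraction can be shown in a few lines: maintain a per-pixel background model and flag pixels that deviate from it. The sketch below uses a simple exponential running average on flattened grayscale frames; it is a minimal illustration only, far simpler than the detection techniques actually used in ARENA, and the alpha/threshold values are illustrative:

```python
class RunningAverageBackground:
    """Per-pixel exponential running average for background subtraction."""

    def __init__(self, first_frame, alpha=0.05, threshold=25):
        self.bg = [float(p) for p in first_frame]  # background model
        self.alpha = alpha          # model adaptation rate
        self.threshold = threshold  # foreground decision threshold

    def apply(self, frame):
        """Return a foreground mask and adapt the background model."""
        mask = [abs(p - b) > self.threshold for p, b in zip(frame, self.bg)]
        # update the model only where the pixel is judged background,
        # so foreground objects do not bleed into the model
        self.bg = [b if m else (1 - self.alpha) * b + self.alpha * p
                   for p, b, m in zip(frame, self.bg, mask)]
        return mask
```

The selective update is the key design choice: without it, a person standing still would gradually be absorbed into the background.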
In the piracy case, standard background subtraction approaches are less viable, but a number of techniques have been investigated that show promise. Generally, it seems likely that approaches that exploit motion saliency (such as temporally stable features) should be combined with an approach that exploits spatial saliency (such as the FFT saliency approach, or the TNO single-frame background subtraction approach) to achieve the best reliability. In this work, the University of Reading has developed a novel saliency-based approach with promising results demonstrated on two different datasets. Further work is required to quantitatively evaluate such an approach against competing algorithms and a wider range of representative data.
As work has continued on this project, the balance of activity for this work package has shifted from detection towards tracking and specific tasks including face recognition and mosaicing from a panning IR camera.
The final tracking approach adopted by the University of Reading takes input from a combination of change and motion detection and performs novel reasoning regarding the current state of the scene, how existing tracking targets should be associated with current detections, and how to update the location of current tracking targets. The complementary tracking approach developed by FOI first fuses the output from background modelling and person detection, and then tracks the resulting detections using Global Nearest Neighbor (GNN) for data association.
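GNN data association chooses the one-to-one pairing of tracks and detections that minimises the total association cost, rather than pairing each track greedily. The brute-force sketch below is exact for the small object counts of a single camera view; production trackers would use the Hungarian algorithm instead, and the gate value here is illustrative:

```python
import math
from itertools import permutations

def gnn_associate(tracks, detections, gate=10.0):
    """Global Nearest Neighbour: return (track_index, detection_index)
    pairs minimising the summed Euclidean distance, then drop pairs
    whose distance exceeds the gate. Brute force over assignments."""
    if not tracks or not detections:
        return []

    def dist(t, d):
        return math.hypot(t[0] - d[0], t[1] - d[1])

    n = min(len(tracks), len(detections))
    best, best_cost = None, math.inf
    # enumerate all one-to-one pairings of tracks with distinct detections
    for perm in permutations(range(len(detections)), n):
        cost = sum(dist(tracks[i], detections[j]) for i, j in enumerate(perm))
        if cost < best_cost:
            best, best_cost = perm, cost
    return [(i, j) for i, j in enumerate(best)
            if dist(tracks[i], detections[j]) <= gate]
```

The global criterion matters when detections lie between two tracks: a greedy nearest-neighbour pass can steal the wrong detection, while GNN resolves the conflict by total cost.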
To ensure a reliable processing speed it may be prudent to combine the fast detection of the background subtraction with the slower model-based techniques, particularly where their strengths and weaknesses are complimentary. Ultimately, determining the best way to combine these paradigms would be a task for future work.
WP5 was completed in 2014 with two deliverables:
• D5.1 Detection and Feature Extraction Methods
• D5.2 Tracking
2.5. WP6 Situation Assessment
WP6 dealt with algorithms for multiple sensor fusion for obtaining a common and consistent operational picture, based on the data given from WP5.
The purposes of situation assessment for ARENA were to:
• Produce a consistent operational picture around the platform (i.e. truck, vessel, train or oil rig) by using data association and fusion algorithms.
• Detect and recognize events described by interactions/relations between objects, i.e. platforms and people. Events can often be divided into several sub events.
In the maritime domain events can be for example the following:
1. Vessels/small boats approach (or leave) each other.
2. Vessels/small boats move together in the same direction.
3. Vessels/small boats increase or decrease speed.
4. Vessels/small boats suddenly change direction.
5. Vessels/small boats approach an area not often visited.
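Events of this kind can be expressed as simple predicates over synchronized object tracks. As a sketch for the first event, "approach", the rule below flags two objects whose separation shrinks markedly over the observation window; the function names and the shrink ratio are illustrative, not values from the project:

```python
import math

def pairwise_distance(track_a, track_b):
    """Separation between two synchronized tracks at each time step."""
    return [math.hypot(ax - bx, ay - by)
            for (ax, ay), (bx, by) in zip(track_a, track_b)]

def approaching(track_a, track_b, min_shrink=0.5):
    """Event rule: the objects are 'approaching' if their separation
    has shrunk below min_shrink of its initial value over the window."""
    d = pairwise_distance(track_a, track_b)
    return d[-1] < min_shrink * d[0]
```

The remaining maritime events (moving together, speed changes, sudden direction changes) can be written as analogous predicates over per-track velocity and heading estimates.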
For the truck case the ARENA partners have specified a number of events that are of special interest. These are:
1. Person enters/exits a building or a vehicle.
2. Events involving movements (e.g. a person walks from one specific area to another).
3. Person loiters around a stationary object.
4. Events related to attacks (e.g. a person attacks another person).
The literature survey shows that there are a number of algorithms that can be used to process data at the situation assessment level. Several of them belong to the category of supervised techniques, which use training data to obtain the model parameters. A few belong to the category of unsupervised techniques, which do not need training data to obtain the model parameters. For the ARENA applications, where the platform constantly moves and experiences new environments and surroundings, unsupervised techniques would be preferable. However, it is likely that supervised techniques will also be needed to recognise the different types of events. In that case it will be important to design the supervised algorithms so that they are based on data that are as general as possible, i.e. not too strongly affected by new environments and surroundings.
Uncertainty information is important for the fusion process. For track-to-track fusion it is very important to have access to the covariance matrix for each ingoing track (which consists of uncertainty information) so that the fused track will be of higher quality compared to the ingoing tracks.
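For the scalar case with independent errors, covariance-weighted track-to-track fusion reduces to inverse-variance weighting: the more certain track dominates, and the fused variance is never larger than either input variance. A minimal sketch (the function name is illustrative; the full ARENA fusion operates on covariance matrices, not scalars):

```python
def fuse_tracks(x1, p1, x2, p2):
    """Inverse-variance fusion of two track estimates (scalar case,
    independent errors assumed). Returns the fused state and variance."""
    p = 1.0 / (1.0 / p1 + 1.0 / p2)   # fused variance
    x = p * (x1 / p1 + x2 / p2)       # variance-weighted state
    return x, p
```

For example, fusing two equally uncertain position estimates halves the variance, which is exactly why the fused track is of higher quality than either ingoing track.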
One of the important aspects to be addressed in ARENA is the level of false alarms. It is necessary to have a level that is as low as possible so that the system can become useful to the operator. By using a sensor network and fusion algorithms we will have a great opportunity to reduce the number of false alarms. The reason is that each sensor will have a unique view and common characteristics (derived with the different algorithms) from the different sensors will strengthen the idea of a specific object or event of interest.
WP6 has shown in several ways that the algorithms can be used for different environments.
For example, the action recognition algorithm was trained on the Reading dataset and can then be used on the Sagem datasets, i.e. the algorithm can be used in different environments. The algorithms that are based on object segmentation, i.e. the group detection and zone-based activity recognition algorithms, can both be used for the maritime case (vessel and oil rig cases) to segment tracks/positions and analyze motion patterns of vessels. The track-to-track fusion algorithm can be used for different sensor setups; that is to say, the algorithm is independent of whether the sensors have overlapping fields of view or not. We have developed a situation assessment ontology whose purpose is to store data and information from different environments so that the algorithms can easily switch between the different environments and applications.
Some major results from WP6 include:
• Track-to-track fusion for the land case: We can see that the number of false tracks is reduced to a large extent. We can also see that the camera calibration is very important. It is important to have accurate positions of the single tracks; otherwise the fusion of the single tracks (representing the same object) will not take place.
• Track-to-track fusion for the maritime case: We have developed and evaluated the algorithm on simulated data.
• Group detection: We can see that event fusion reduces false group detections and improves the certainty of accurate group detections. Event fusion also reduces the need for very accurate camera calibration when it is enough to know that groups are present. For the majority of the tested cases, the accuracy of group detection exceeds 70 %.
• Zone-based activity recognition: We can see that the approach is generic, as it has been successfully tested on different domains with indoor and outdoor scenarios. According to the ground truth, the proposed approach achieves high values of precision and recall. For the majority of the cases, precision and recall exceed 70 %.
• Action recognition: We can see that action recognition is a very good supporting, 'weak' feature set for the threat recognition algorithm (even if, by itself, it has a lower accuracy of 38 %). Threat recognition is the ultimate goal of the ARENA project, and it shows good results.
• Face recognition: We have developed a face recognition algorithm that can be used for recognizing the driver.
• Ontology: We have developed a situation assessment ontology that makes it easy for the algorithms to switch between different environments. The ontology is used for storing information about parameter values for the algorithms and the appearance of different areas (e.g. what different parking lots look like).
Another important result is the successful integration of the majority of the algorithms into the ARENA testbed and also the demonstration of the ARENA system on the demonstration day 16 April 2014. The following algorithms were integrated: track-to-track fusion (1 and 2), group detection, zone-based activity recognition, individual action recognition and face recognition.
WP6 was completed in 2014 with two deliverables:
• D6.1 Situation Assessment- Multisensory and situation assessment tools (First version)
• D6.2 Situation Assessment - Multisensory and situation assessment tools (Final results)
2.6. WP7 Threat Recognition
WP7 dealt with threat recognition, using a layered approach in which recognition is performed through the assessment of objects (WP5), the situation, i.e. the relation and interaction between objects, the environment and our platform (WP6), and the estimated impact. In addition, a multi-layer Human-Machine Interaction interface has been implemented, using user requirements as input.
Threats appear in many variations. For some threats, the key characteristic is the walking pattern, e.g. loitering. For other threats, the cue is the presence in a particular zone, which is considered suspicious, e.g. being present in a place where other people usually are not present. Another category of threats is characterized by the current activity by the person posing the threat, e.g. trying to open a door. In WP7, the objective has been to recognize a wide range of threats, by representing a variety of aspects of human behaviour. The novelty is our intermediate-level representation including a person’s trajectory, presence in particular zones, activities, and generic states of potential threats.
Complex threats are a high-level semantic concept. A threat is an interaction between, on the one hand, the person or group of persons posing the threat and, on the other hand, the threatened person(s) or object. The person posing the threat will try to limit exposure to a minimum. This leads to a complex interaction, and the differences in behaviour compared to other people, who pose no threat, may be very subtle. Together with the variations in which threats may occur, a thorough interpretation of the observed cues is required, beyond simple rules on simple cues. Yet, the popular approach in computer vision for recognizing human behaviour is to start with low-level entities, the most common being trajectories resulting from tracking and hand-crafted features, e.g. STIP. Such low-level features are very useful, because they capture essential details about trajectories, local shape and motion, and they are localized in space and/or time. However, they are not directly associated with persons, zones in the scene, a person’s activities, or what happens during a person’s trajectory. Such associations are not trivial: many of the well-performing methods consider the low-level features in the whole video, the whole scene, or in sub-volumes without making explicit associations. Recent attempts at complex behaviours in complex scenes have not been successful yet, although reasonable performance has been reported for simple activities. For threat detection, this is not sufficient: our aim has been to identify who is posing the threat and when that happens. Clearly, there is a huge semantic gap between threat detection and low-level features. Our contribution is that we exploit the advantages of low-level features and bridge the semantic gap to threat detection via an intermediate-level representation of the person’s trajectory and activities.
A challenge is to recognize the threat as soon as possible, while in the midst of many other people who pose no threat. The cue for the threat will be more explicit and distinctive at a later stage, while the early cues may be less distinctive. At the beginning of a threat, the behaviour may look very similar to the behaviour of other people, e.g. just loitering is not really suspicious. Our objective has been to distinguish between threats and normal behaviour as soon as possible while the threat is building up, ideally from the moment that the person posing the threat starts to show the first cues, with an acceptable false alarm rate.
WP7 has proposed a threat detection system based on an intermediate-level representation that captures semantic descriptions including a person’s trajectory, activities and presence in particular zones. This representation, as described in D7.3, can be constructed from simple, low-level features, such as automated, imperfect tracks and common, localized motion features. The activities and trajectories are very important elements of the representation. The representation consists of descriptions that have a meaning, like ‘a person was present in the car park area’ and then ‘loiters around the truck area’. Such descriptions may help the operator to assess the threat. Further, they help to gain insight into the system’s decisions and errors. The latter is important for fine-tuning the system for optimal performance. The proposed system reliably detects threats. In 23 challenging videos, the average accuracy is 85.1% with just one-second track intervals. This is very reasonable performance, given the small temporal extent of the analysis. When the total track is available for analysis, the performance is very good: the threat detection accuracy increases to 96.6%.
The applicability of the proposed framework for threat detection is very wide. We argue that the approach can be generalized to various types of threats posed by people to various types of objects and people. A wide set of human behaviors can be recognized by this approach, as generic motion features are used to learn the patterns of the actions, given labelled examples of the actions of interest, which have to be provided by an expert. For simple behaviors this is sufficient. For more complex behaviors, which involve items (a left bag, a gun, etc.), the feature set needs to be extended with item detection and interaction with items. The threat detection is currently based on persons moving through a scene. To recognize threats posed by vehicles, other threat features need to be selected, such as object properties about size, weight, type, antecedents, etc. This requires different object features, as human actions are no longer applicable. The strategy for threat detection in the ARENA project can be easily adapted to use different features, combined with features that were already included in the ARENA approach, i.e. kinematics and zones.
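A minimal sketch of how such an intermediate-level representation can feed threat detection is given below. The descriptors, the zone and action labels, the speed threshold and the weights are purely illustrative stand-ins for the trained classifier used in the project:

```python
def interval_descriptor(zone, action, speed):
    """Semantic description of a one-second track interval: which zone the
    person is in, which action is recognized, and a simple kinematic cue."""
    return {"zone": zone, "action": action, "moving": speed > 0.5}

def threat_score(descriptors, weights):
    """Accumulate evidence over a track: a stand-in for a trained
    classifier, using hand-picked illustrative weights."""
    score = sum(weights.get((d["zone"], d["action"]), 0.0) for d in descriptors)
    return score / max(len(descriptors), 1)

# Hypothetical weights: loitering or opening a door in the truck area counts
# as threat evidence, ordinary walking in the car park slightly against.
weights = {
    ("truck area", "loiter"): 1.0,
    ("truck area", "open door"): 2.0,
    ("car park", "walk"): -0.2,
}

track = [interval_descriptor("car park", "walk", 1.2),
         interval_descriptor("truck area", "loiter", 0.1),
         interval_descriptor("truck area", "loiter", 0.1)]
score = threat_score(track, weights)  # roughly 0.6: loitering dominates
```

The point of the intermediate level is visible even in this toy version: the score is computed over meaningful descriptions (zones, actions), so an operator can trace why a track was flagged, and the kinematic cue can be added as a further feature when needed.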
WP7 has also developed the ARENA Human Machine Interface (HMI). Two versions of the HMI have been developed, one to be installed on a desktop and one to be installed on a smartphone.
The desktop HMI is built according to a modular design. This allows the switching on and off of components to cater for different users’ needs.
The HMI for the truck scenario consists of the following modules:
• An Object Browser for displaying properties of objects.
• A Situation Browser for displaying objects as part of a situation and relationships between them.
• A Threat Browser for displaying information about detected threats.
• A Playback Engine which will simulate the real-time behaviour of the system for testing and demonstration purposes.
• A Database Manager which allows users to connect to and disconnect from the databases, create or delete them, and perform a variety of operations on them, such as queries.
• A Truck Status Monitor which notifies the ARENA system operator about any newly detected threats, providing information such as confidence levels and the date and time at which they occurred; it also allows dismissal of threats.
• A Multi-camera window which displays the streams of all the cameras attached to the truck, accompanied by related annotations. In addition, it shows the driver’s latest detected position as well as a history of the latest alarms.
• A Single Camera window which displays the stream of one camera, with the ability to switch between cameras, accompanied by annotations.
• A Map View screen which displays the area around the truck from a bird’s eye view. It displays object detections (colour coded depending on the status normal/suspicious/criminal) and their tracks.
• A Threat Thresholds Manager which allows the configuration of thresholds for the incoming threats. This includes picking ranges for warnings (suspicious behaviour) and alarms (high probability of criminal behaviour), as well as a cut-off limit to ignore threats with low confidence levels.
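The threshold logic of the Threat Thresholds Manager can be sketched as a simple mapping from confidence level to HMI status; the default values below are hypothetical, since in the HMI they are user-configurable:

```python
def classify_threat(confidence, cutoff=0.3, alarm=0.8):
    """Map an incoming threat's confidence level to an HMI status.
    The default thresholds are illustrative; in the ARENA HMI they are
    configured by the user via the Threat Thresholds Manager."""
    if confidence < cutoff:
        return "ignored"   # below the cut-off limit
    if confidence < alarm:
        return "warning"   # suspicious behaviour
    return "alarm"         # high probability of criminal behaviour
```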
The mobile HMI is developed to be installed on a smartphone or tablet used by the truck driver. Its main purpose is to alert the driver to potential threats while he/she is away from the vehicle. Therefore, as a minimum, the on-board system is able to wirelessly send notifications to the handheld device, which then displays details of the detected threat.
Two levels of severity are used:
• Warnings are broadcast if a suspicious behaviour (e.g. loitering near the truck) has been detected;
• Alarms are sent if ongoing criminal behaviour (e.g. breaking into truck) is detected.
On receipt of a warning, the driver has two options:
• Dismiss the warning after establishing that there is no threat. In this case, the warning is logged at the headquarters, but not flagged up as a warning or alarm.
• Escalate to the headquarters if assistance is required. This triggers an alarm at the headquarters.
Finally, the HMI also provides a kind of “panic button” that allows the driver to request assistance from the headquarters.
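The warning-handling protocol described above can be sketched as follows; `hq_log` and `hq_alarms` are hypothetical stand-ins for the headquarters back end:

```python
def handle_driver_response(warning, response, hq_log, hq_alarms):
    """Process the driver's response to a warning on the mobile HMI.
    hq_log and hq_alarms stand in for the headquarters back end."""
    if response == "dismiss":
        # Logged at headquarters, but not flagged as a warning or alarm.
        hq_log.append(("dismissed", warning))
    elif response == "escalate":
        # Assistance required: triggers an alarm at the headquarters.
        hq_alarms.append(("escalated", warning))
    else:
        raise ValueError(f"unknown response: {response}")

def panic_button(hq_alarms):
    """Driver-initiated request for assistance from the headquarters."""
    hq_alarms.append(("panic", None))
```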
WP7 was completed in 2014 with three deliverables
• D7.1 Threat reasoning engine
• D7.2 Human Machine Interface
• D7.3 Threat reasoning engine (Final results)
2.7. WP8 Evaluation and Demonstration
Based on the WP2 use cases, this work package has defined several test cases to demonstrate the capabilities required by end users. The testbed developed in WP4 has been used as the demonstration platform, and a live demo was presented at the end of the project.
The objectives of the final demonstration were:
• To prove the ARENA concept on the truck case in a real life demonstration
• To demonstrate the proper functioning of the integrated test bed
• To illustrate the ARENA concept on the maritime case
The demonstration was based on the scenario “truck parked in a parking lot”.
The demonstration was planned in two phases:
• First, real scenarios were played out around an “ARENA-equipped” real truck. Three scenarios were played, representative of typical situations: a normal situation (normal movements of the users of a parking lot), a potential criminal situation (a person comes and stays near the truck), and a criminal situation (someone attacks the truck driver). All data were recorded.
• Then, deferred data analysis was performed on the testbed and the results were presented on the HMI.
A truck was equipped with the ARENA setup:
• Cameras were fixed externally on the truck structure on small steerable supports.
• The “ARENA Node” computer was mounted in a rack inside the truck.
The following cameras were used at the final demo:
• Basler BIP2: visible and near-infrared
• Sagem micro camera: uncooled infrared
To demonstrate the performance, three scenarios were selected, representing three typical situations of the truck use case.
Normal situation
The truck driver exits the truck and goes to the restaurant. He waits there.
One person exits from the parked car and goes to the smoking area.
A second person exits the parked car and goes to the smoking area.
The two persons return to the car one following the other.
The truck driver returns to the truck.
The expected behaviour of the ARENA system: persons are detected and tracked; no threat alert is generated.
Potential criminal situation
The truck driver exits the truck and goes to the restaurant. He waits there.
One person walks from the smoking area to the truck, stops at the truck and loiters at the truck.
He looks through the truck window and then moves on, exiting the parking lot.
The truck driver returns to the truck.
The expected behaviour of the ARENA system: persons are detected and tracked. An alert is generated when the event “loitering around the truck” is detected.
Criminal situation
The truck driver exits the truck and goes to the restaurant. He waits there.
One person exits the parked car and walks to the smoking area and waits there.
Another person exits the car and stands at the car waiting.
The truck driver returns to his truck. Halfway, the person standing at the car comes to meet the driver and stops him. From behind, the person at the smoking area comes running and attacks the driver. The driver is hit by the two persons and brought to the floor. The two attackers leave the scene; the driver remains on the floor.
The expected behaviour of ARENA system: persons are detected and tracked.
Alerts could be generated on recognition of the actions:
• Attackers waiting
• Attacker running towards driver
• Group forming
Driver face recognition
In order to allow attendee involvement, the demonstration was conducted in the showroom. People were sitting in front of the camera, and the software was trained to detect the truck driver’s face.
A member of the audience bearing some resemblance to the driver came forward to test the software. He was correctly recognized as not being the driver.
Results presented on the HMI.
The ARENA HMI presented the results continuously to give the audience a picture of what can be presented by an ARENA system.
When someone tries to steal the truck, the real-time face recognition detects it and raises an internal alert; a picture with an orange border appears. After 2 minutes in the ‘suspected thief’ status without any authorized driver being reported, the real-time face recognition sends an alarm to the ARENA central server; a picture with a red border appears.
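The escalation logic of this face-recognition demo can be sketched as a small timer-based state function; the event format and thresholds below are our own illustrative reconstruction, not the actual implementation:

```python
SUSPECT_TIMEOUT = 120.0  # 2 minutes of 'suspected thief' before an alarm

def face_alert_state(events, now):
    """events: chronological (timestamp, status) pairs where status is
    'authorized' (known driver recognized) or 'unknown' (unrecognized face).
    Returns the current alert level shown on the HMI."""
    level, suspect_since = "none", None
    for t, status in sorted(events):
        if status == "authorized":
            level, suspect_since = "none", None         # authorized driver resets state
        elif suspect_since is None:
            level, suspect_since = "internal_alert", t  # orange-bordered picture
    if suspect_since is not None and now - suspect_since >= SUSPECT_TIMEOUT:
        level = "alarm"  # red-bordered picture, sent to the ARENA central server
    return level
```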
WP8 was completed in 2014 with two deliverables
• D8.1 Experiment and Demonstration plan
• D8.2 Report on final demonstration
2.8. WP9 Dissemination and standardization
The ARENA project has put a lot of effort into dissemination. Several conference papers have been produced.
The project subcontracted the PR agency Emmett & Smith in February 2014 with the task of writing and disseminating news releases and sending invitations to the final demo. The project also participated with an exhibition booth and a poster at the Transport Research Arena (TRA) conference, 14 - 17 April 2014 in Paris.
During the final demonstration, the project was the subject of a film shoot. The film is uploaded on the ARENA web page.
Besides dissemination, the work package also considered privacy and legal aspects. The ARENA system is intended to operate in public areas in different countries, which places demands on the functionality and the system solution. To avoid doubts, all work packages have considered the privacy aspect. The report D9.4 Privacy and legal aspects summarizes some legal implications and gives guidance on how to deal with the privacy implications.
The ARENA web page can be found at: https://www.arena-fp7.eu
• “Recognition of Long-Term Behaviors by Parsing Sequences of Short-Term Actions with a Stochastic Regular Grammar”, Gerard Sanromà, Gertjan Burghouts, and Klamer Schutte, SSPR, Japan, 2012
• Andersson, M., ”Sensor architecture for the recognition of threats to mobile assets “, Presentation at the National Symposium on Technology and Methodology for Security and Crisis Management (TAMSEC), 14 November, 2012.
• Andersson, M., Patino, L., Burghouts, G., Flizikowski, A., Evans, M., Gustafsson, D., Petersson, H., Schutte, K., Ferryman, J., "Activity recognition and localization on a truck parking lot", The 10th IEEE International Conference on Advanced Video and Signal-based Surveillance (AVSS 2013), Krakow, 27-30 August, pp. 263-269, 2013.
• Johansson, R., Andersson, M., “Activity and threat recognition for commercial transport”, National Symposium on Technology and Methodology for Security and Crisis Management, TAMSEC 2013, Stockholm, 13 – 14 November, 2013.
• Burghouts G.J. Schutte, K., “Spatio-Temporal Layout of Human Actions for Improved Bag-of-Words Action Detection”, Pattern Recognition Letters (2013).
• Burghouts, G.J. Schutte, K., Bouma, H., den Hollander, R.J.M. “Selection of Negative Samples and Two-Stage Combination of Multiple Features for Action Detection in Thousands of Videos”, Machine Vision and Applications (2013).
• Bouma, H., Burghouts, G.J. de Penning, L., et al., “Recognition and localization of relevant human behavior in videos”, Proc. SPIE 8711 (2013).
• Ferryman, J., Ellis, A-L., Performance Evaluation of Crowd Image Analysis using the PETS2009 Dataset. Pattern Recognition Letters, 2014 doi: 10.1016/j.patrec.2014.01.005
• Ellis, A-L., Ferryman, J., “Biologically-Inspired Robust Motion Segmentation using Mutual Information”, Computer Vision and Image Understanding, 2014, doi: 10.1016/j.cviu.2014.01.009
• Burghouts, G.J., Schutte, K., ten Hove, R.J-M., van den Broek, S.P., Baan, J., Rajadell, O., van Huis, J., van Rest, P., Hanckmann, H., Bouma, H., Sanromà, G., Evans, M., Ferryman, J., “Instantaneous Threat Detection based on a Semantic Representation of Activities, Zones and Trajectories”, Signal, Image and Video Processing, May 2014
• Sanromà, G., Patino, L., Burghouts, G., Schutte, K., Ferryman, J., “A unified approach to the recognition of complex actions from sequences of zone-crossings”, Image and Vision Computing, vol 32, p 363 - 378, May 2014
News Releases were published between 3 and 11 March 2014 in the following publications:
• Ship Management International
• Materials Handling World
• Transport News
• Professional Security Magazine
• Engineering News
• Marine Link
• Maritime Executive
• HGV Ireland
• Maritime Security News
• Marine Electronics
• Maritime Global News
• Marine Technology News
Project Newsletters have been distributed to ARENA Stakeholders and on the external web in
February 2013, October 2013 and August 2014.
WP9 was completed in 2014 with four deliverables
• D9.1 Web Portal
• D9.2 Exploitation Plan
• D9.3 Standardization report
• D9.4 Privacy and Legal aspects
In recent years there have been a number of incidents where terror organisations have caused disruption to mass transportation networks and other areas of critical infrastructure. A very real threat is that the same or other terror organisations will seek to disrupt the transit of (or to destroy or capture vehicles containing) hazardous or dangerous materials (e.g. chemical liquids or gas), including radioactive (nuclear) material, or simply vehicles of huge economic value. Considering the latter situation, while piracy has existed as a threat to international shipping since the beginning of the Somali Civil War in the early 1990s, there has recently been an upsurge in the number of occurrences posing a very real threat to critical maritime infrastructure. Hijacking causes large problems for the shipping companies as well as directly affecting the perception of safety for the personnel onboard vessels. It can even be a threat to the economy of individual EU member states.
Land transportation by truck is exposed to criminal activity to a large extent across the whole of Europe. This is a problem especially at night, when trucks stop along the road for a rest; there is then a high risk that the truck and the truck driver will be attacked and robbed.
The authorities responsible for security of critical mobile assets in the EU have expressed a keen interest in novel technologies that will conduct automated or semi-automated sensor data processing leading to threat detection. These new technologies will support a wide range of security personnel in the surveillance of these assets both land- and maritime based.
The ARENA project has developed a surveillance concept for mobile platforms that meets the above requirements. The system can be deployed directly onto a mobile asset in a wide range of large, unpredictable environments (both land- and sea-based). The flexible and adaptable architecture enables early-warning capability and autonomous situational awareness.
The ARENA concept has been demonstrated in a final demonstration. For this demonstration, three scenarios were implemented around a truck equipped with video sensors. The scenarios showed increasing threat behavior from normal behavior to potential criminal behavior to criminal behavior:
• First scenario: normal behavior, with people walking around the truck;
• Second scenario: suspicious behavior, with people moving towards and ending up near the truck;
• Third scenario: aggressive behavior, with people talking to the driver in order to steal the keys.
For each scenario, the ARENA concept demonstrator was capable of detecting people moving around the truck. After processing, the ARENA demonstrator was able to monitor and analyze all their behaviors correctly.
An additional demonstration was conducted with Morpho in the meeting room, based on facial recognition of the driver, simulating that the truck can be started only by authorized personnel.
The results have also been disseminated at the Transport Research Arena (TRA) exhibition, which took place in Paris 14 - 17 April 2014 and was organized by the French Institute of Science and Technology for Transport, Development and Networks (IFSTTAR). The aim of TRA 2014 was to bring together research communities across all surface transport modes. The theme of this fifth edition was “Transport solutions: from Research to Deployment”. www.traconference.eu
About 2500 people visited the conference and exhibition during the four days.
The project had a booth in the exhibition hall where the ARENA concept was presented. Demos of truck scenarios and sea scenarios were shown on screens, and partners were present to discuss the ARENA concept.
The audience was informed about the possibility of getting more information on the project by leaving a business card in a box. By the end of the conference, 22 persons had left their business cards in the box, representing industry, academia and public organizations. In addition to the exhibition booth, the project results were disseminated through a poster.
The PR agency Emmett & Smith Ltd was subcontracted to disseminate the project results via press releases. They were also responsible for inviting journalists to the final demo.
The project and the final results are presented in a film production. The main purpose of the film is to increase awareness and highlight results of the ARENA project, specifically achievements for protection of mobile critical assets (trucks, ships) as critical infrastructure. The film can be found on the ARENA web site: https://www.arena-fp7.eu
Conference Papers and Newsletters
In addition to the above dissemination activities, the project has produced several conference papers and newsletters. The conference papers are listed in chapter 2.8, and the newsletters can be found on the ARENA web site: https://www.arena-fp7.eu
The project public website
The ARENA public website is located at www.arena-fp7.eu and provides a description of the project, the film and a brief presentation of the consortium. Furthermore, the main results/foreground are presented.
To create a strong, easily recognizable image of the project, a logo was designed early on. This logo was used for all ARENA reports and presentations.
Contact information to the Coordinator:
FOI (Swedish Defence Research Agency)
58111 Linköping, Sweden
Tel: +46 13 378 084
Fax: +46 8 555 031 00