
Smart Cabin System for cabin readiness COVID Amendment

Periodic Reporting for period 2 - SmaCS (Smart Cabin System for cabin readiness COVID Amendment)

Reporting period: 2021-01-01 to 2022-07-31

The global ambition of SmaCS was to conceive a camera-based prototype solution (TRL5), validated in a relevant environment in the CleanSky2 Integrated Cabin Demonstrator, for digitalized on-demand verification of TTL requirements for cabin luggage. It was designed to be highly reliable, cost-effective, and easily upgradable with additional camera-based TTL cabin requirement verification functionalities. To fulfill this ambition, the consortium proposed a disruptive approach based on a machine-learning algorithm.

This main ambition was broken down into six objectives related to the proposed work plan:
Obj. 1 To define an innovative architecture for the solution to be reliable, lightweight, cost-efficient and regulatory compliant
Obj. 2 To select the triptych “data treatment platform, camera platform, AI-based data treatment software”, compliant with the specifications and delivering the best trade-off between reliability, service quality, cost efficiency, and low weight
Obj. 3 To preliminarily design the complete system according to the specifications
Obj. 4 To develop and validate in laboratory a first complete prototype meeting the functionalities and performances defined in the specifications
Obj. 5 To develop a final prototype and test it in a relevant environment, the CleanSky Cabin Demonstrator
Obj. 6 To disseminate the project results to both the scientific community and the end-users and confirm the conditions for a successful business model

The system consisted of both hardware and software. The integration objectives focused on demonstrating the potential integration of the technology and its development line into a real aircraft, by assessing its potential compliance with aviation standards or, at least, by studying the path, with the actions and costs required, to achieve that objective.
The successful development of the project required synergy between the capabilities of the industrial and research communities, seeking a smart balance between new cutting-edge technological solutions and the know-how of the industrial players.
The main results are the following:

Result 1: A TRL5 smart camera-based solution for cabin crew efficiency and flight safety
The created TRL5 system contains two types of models for the AI component’s “brain”: (1) a DNN model that is used for the extraction of descriptive feature vectors from the mentioned image regions (cropped and pre-processed), and (2) a set of manifold-learning-based discriminant space models, used for further processing those feature vectors in the TTL-condition classification stage. The former is trained “out-of-the-box”, i.e. it is deployed already trained for every installation and is not expected to be modified or readjusted on-site. The latter can be modified more easily (i.e. on-site), as it can be quickly retrained with the computational resources available in the AI processor. The system is patent pending.
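As a purely illustrative sketch of this two-stage design (the concrete SmaCS models and their parameters are not public), the following Python snippet pairs a fixed, pre-trained DNN backbone (torchvision's ResNet-18 as a stand-in) with a lightweight discriminant classifier (scikit-learn's LDA as a stand-in for the manifold-learning models) that can be cheaply retrained on-site:

```python
# Minimal sketch of the two-stage design described above: a fixed DNN backbone
# extracts feature vectors from cropped image regions, and a lightweight
# discriminant model classifies the TTL condition. The actual SmaCS models are
# not public; ResNet-18 and LDA are illustrative stand-ins.
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Stage 1: fixed feature extractor (deployed pre-trained, not retrained on-site).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()          # keep 512-d feature vectors
backbone.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def extract_features(crops):
    """Map a list of PIL image crops to descriptive feature vectors."""
    batch = torch.stack([preprocess(c) for c in crops])
    with torch.no_grad():
        return backbone(batch).numpy()

# Stage 2: lightweight discriminant model, cheap to retrain on-site
# with the resources of the embedded AI processor.
ttl_classifier = LinearDiscriminantAnalysis()

def retrain_on_site(crops, labels):
    # Hypothetical labels, e.g. "stowed" / "not_stowed".
    ttl_classifier.fit(extract_features(crops), labels)

def classify(crops):
    return ttl_classifier.predict(extract_features(crops))
```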

Result 2: Synthetic 3D simulations for camera integration and photorealistic data generation
We built a tool for visualizing the 3D models within the cabin as if they were observed from the cameras to be installed, simulating the kind of images that would be captured given the cameras' characteristics and the selected position, orientation, illumination conditions, etc. The methodology for creating this kind of tool was published (https://zenodo.org/record/4548650#.Y77UcBXMJD8).
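The geometric core of such a camera-placement tool can be sketched with a simple pinhole projection; the snippet below is an illustrative assumption only (the real tool produces photorealistic renderings, and all names and values here are hypothetical):

```python
# Minimal sketch of the geometric part of a camera-placement tool: project 3D
# cabin points into the image plane of a candidate camera given its position,
# orientation and intrinsics. All values below are illustrative assumptions.
import numpy as np

def look_at(cam_pos, target, up=(0.0, 0.0, 1.0)):
    """Build a world-to-camera rotation for a camera at cam_pos looking at target."""
    fwd = np.asarray(target, float) - np.asarray(cam_pos, float)
    fwd /= np.linalg.norm(fwd)
    right = np.cross(fwd, up); right /= np.linalg.norm(right)
    down = np.cross(fwd, right)
    return np.stack([right, down, fwd])        # rows: camera x, y, z axes

def project(points_world, cam_pos, R, fx=800, fy=800, cx=640, cy=360):
    """Project Nx3 world points to pixel coordinates of a pinhole camera."""
    pts_cam = (np.asarray(points_world, float) - cam_pos) @ R.T
    z = pts_cam[:, 2:3]
    uv = pts_cam[:, :2] / z
    return np.column_stack([fx * uv[:, 0] + cx, fy * uv[:, 1] + cy]), z.ravel()

# Example: project an overhead-bin corner into a candidate ceiling camera
# angled toward the seat rows (cabin-frame coordinates in metres, assumed).
cam_pos = np.array([0.0, 2.0, 2.1])
R = look_at(cam_pos, target=[0.0, 3.5, 1.0])
pixels, depth = project([[0.3, 2.2, 1.6]], cam_pos, R)
print(pixels, depth)
```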

Result 3: Large-scale dataset for training image content descriptors in the context of airplane cabins
To collect the necessary data, a cabin mockup was built, and synthetic data was generated using 3D graphics. The mockup was illuminated from three possible light sources: natural light from the room's windows, artificial light on the ceiling, and a spotlight beside the cabin window to mimic directional sunlight. We published part of the dataset (https://zenodo.org/record/7524808#.Y77M7xXMJD8) to support research on object detection and scene understanding, specifically related to identifying the proper positioning of cabin luggage during taxi, take-off, and landing (TTL) operations.

Result 4: Synthetic-to-real domain adaptation algorithm for augmented dataset generation
We developed a methodology to exploit the gathered synthetic and real data to train the AI component more effectively. The methodology was published (https://zenodo.org/record/7282478#.Y77UNRXMJD8).
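As an illustration only (the published paper describes the actual SmaCS methodology), the sketch below shows one generic way to combine labelled synthetic data with unlabelled real data during training, using a CORAL-style feature-alignment loss; all names and hyper-parameters are assumptions:

```python
# Generic synthetic-to-real adaptation sketch: supervised loss on labelled
# synthetic images plus a covariance-alignment penalty on unlabelled real
# images. This is a stand-in, not the published SmaCS formulation.
import torch

def coral_loss(feat_synth, feat_real):
    """Penalise the gap between synthetic and real feature covariances."""
    def cov(f):
        f = f - f.mean(dim=0, keepdim=True)
        return (f.T @ f) / (f.shape[0] - 1)
    d = feat_synth.shape[1]
    return ((cov(feat_synth) - cov(feat_real)) ** 2).sum() / (4 * d * d)

def training_step(model, classifier, synth_x, synth_y, real_x, optimizer, lam=1.0):
    """One mixed-batch step: classification on synthetic data + alignment on real data."""
    optimizer.zero_grad()
    f_s, f_r = model(synth_x), model(real_x)
    loss = torch.nn.functional.cross_entropy(classifier(f_s), synth_y) \
           + lam * coral_loss(f_s, f_r)
    loss.backward()
    optimizer.step()
    return loss.item()
```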

Result 5: Image content descriptor algorithm for optimal detection of cabin components, luggage, subjects, and their visual relations
We developed a methodology to analyze images and describe their content by examining the relationships between various factors, including the presence or absence of specific elements, such as people or objects, and the positioning of these elements within the image. This methodology is currently under review for publication.
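Purely as an illustration of this relation-based idea (the class names, thresholds, and relations below are assumptions, not the SmaCS ontology or the method under review), such a descriptor could turn per-object detections into spatial-relation triples:

```python
# Illustrative sketch: given object detections (element class + bounding box),
# derive simple spatial relations such as "stowed in" and summarise the
# TTL-relevant content of the image.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str                 # e.g. "bag", "overhead_bin", "seat", "person"
    box: tuple                 # (x_min, y_min, x_max, y_max) in pixels

def inside(a, b, tol=0.9):
    """True if at least `tol` of box a's area lies within box b."""
    ax0, ay0, ax1, ay1 = a.box; bx0, by0, bx1, by1 = b.box
    ix = max(0, min(ax1, bx1) - max(ax0, bx0))
    iy = max(0, min(ay1, by1) - max(ay0, by0))
    area_a = (ax1 - ax0) * (ay1 - ay0)
    return area_a > 0 and (ix * iy) / area_a >= tol

def describe(detections):
    """Produce (subject, relation, object) triples describing the image content."""
    triples = []
    bags = [d for d in detections if d.label == "bag"]
    bins_ = [d for d in detections if d.label == "overhead_bin"]
    for bag in bags:
        if any(inside(bag, b) for b in bins_):
            triples.append((bag, "stowed_in", "overhead_bin"))
        else:
            triples.append((bag, "not_stowed", None))
    return triples
```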

Result 6: Optimization techniques for efficient inference of deep neural network models
We developed a methodology to optimally deploy the image content descriptor algorithm onboard, processing the image streams captured by the required number of cameras to cover all the cabin areas. This methodology is currently under review for publication.
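As a hedged illustration of two generic efficiency levers such an onboard deployment might use (the actual SmaCS optimization pipeline is the subject of the paper under review), the sketch below applies post-training dynamic quantization and batches the latest frame from each cabin camera into a single forward pass:

```python
# Generic efficiency sketch: quantize the descriptor network after training and
# process all camera streams in one batched inference. Illustrative only; the
# network, camera count and frame size are assumptions.
import torch
import torchvision.models as models

# Post-training dynamic quantization of the linear layers (generic technique).
descriptor = models.resnet18(weights=None)
descriptor.eval()
quantized = torch.quantization.quantize_dynamic(
    descriptor, {torch.nn.Linear}, dtype=torch.qint8)

def process_cameras(frames):
    """Run one batched inference over the latest frame from each cabin camera.

    `frames` is a list of CxHxW tensors, one per camera (the number of cameras
    being whatever is required to cover all cabin areas).
    """
    batch = torch.stack(frames)            # (num_cameras, 3, H, W)
    with torch.no_grad():
        return quantized(batch)            # one output per camera

# Example with 6 hypothetical cameras producing 224x224 RGB frames.
outputs = process_cameras([torch.rand(3, 224, 224) for _ in range(6)])
print(outputs.shape)
```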

To clarify, the tool referred to under Result 2 is a visualization tool based on 3D models of the cabin, whereas Result 6 refers to a methodology for optimally deploying the image content descriptor algorithm onboard.

The dissemination activities were conducted during the COVID-19 pandemic, so we were not able to achieve our main dissemination KPI.
We achieved and validated all the TRL evaluations planned during the project, reaching the final TRL5 goal. TRL3 was validated in November 2020, TRL4 in May 2021, and TRL5 in June 2022.

The impacts of the project are wide. On one side, artificial intelligence (AI) has strongly gained momentum due to the considerable advances achieved by machine learning techniques, in particular Deep Neural Networks (DNNs); DNNs presently constitute the basis of the most advanced computer vision and machine learning methodologies. On the other side, in-cabin cameras have restrained video and image analysis capabilities and are not conceived for specific purposes such as taxi, take-off and landing (TTL) cabin readiness verification, so the captured images are not fully exploited. AI in the aviation industry, as in industry in general, is changing the way companies approach and use their data, for example to improve operational efficiency and reduce costs.

Most of the biggest aviation companies are already capitalizing on their data to detect anomalies, support decisions, and enable autonomous flight for the next generation of aerial vehicles. SmaCS contributes to this next generation of aerial vehicles, not only through its application in verifying TTL conditions but also by opening up the use of the developed cameras and the processing unit for many other applications. Some examples of companies trying to enter the market of onboard AI-capable computers are Curtiss-Wright's defense solutions, Kontron, and the CTI Sentry.

Nonetheless, only SmaCS integrates a complete solution: processor, software, and cameras designed specifically for aircraft, and hence developed to be compliant with the most stringent certification and qualification rules.
Camera position: captured scene from cameras over the seats and over the corridor, respectively.
Overall procedure of the proposed method for automated image description in SmaCS
Camera installation in the cabin mock-up of Vicomtech and Otonomy Aviation, respectively.