
Deep-Learning for Multimodal Sensor Fusion

Periodic Reporting for period 1 - DeeperSense (Deep-Learning for Multimodal Sensor Fusion)

Reporting period: 2021-01-01 to 2022-06-30

DeeperSense addresses key capabilities for cognitive robotic systems, i.e. perception and interpretation of the robot's environment. The main objective of the project is to significantly improve these capabilities by using state-of-the-art methods from artificial intelligence.
The new environment perception capabilities will enhance the performance and reliability of robotic systems, enable new functionality, and thus open up new application areas.
In technical terms, DeeperSense uses Artificial Intelligence (AI) and data-driven Machine Learning (ML) / Deep Learning (DL) to combine the capabilities of multiple sensor modalities for a better and more precise perception of the robot's environment. Artificial Neural Networks (ANNs) connect sensors that use completely different physical principles to probe the space around a robot. When fed with sufficient training data, the ANNs can "learn" how to match and combine the outputs from the different sensors. This way, a blurry sonar image can be interpreted as a sharp camera image, the sighting of an obstacle barely visible in the distance can be confirmed by a sonar reading, or structures and plants on the bottom of the ocean can be reliably classified, based on low-resolution sonar scans that are calibrated using high-definition camera images.
In principle, the DeeperSense concept can be applied to any environment or medium. The underwater domain was chosen to demonstrate and verify the new algorithms, because this is one of the most challenging environments for robots to operate in.
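As a purely illustrative sketch of this "inter-sensoric learning" idea (the actual DeeperSense network topologies are not reproduced in this summary), a small convolutional encoder-decoder in PyTorch could learn to translate a single-channel sonar image into a camera-like RGB image when trained on co-registered sonar/camera pairs:

    # Minimal sketch (assumption: PyTorch, single-channel sonar in, RGB-like image out).
    # This is NOT the DeeperSense architecture; it only illustrates the idea of
    # translating one sensor modality into another with an encoder-decoder ANN.
    import torch
    import torch.nn as nn

    class SonarToCameraNet(nn.Module):
        def __init__(self):
            super().__init__()
            # Encoder: compress the sonar image into a feature representation.
            self.encoder = nn.Sequential(
                nn.Conv2d(1, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            )
            # Decoder: expand the features back into a camera-like RGB image.
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, 3, kernel_size=4, stride=2, padding=1), nn.Sigmoid(),
            )

        def forward(self, sonar):
            return self.decoder(self.encoder(sonar))

    model = SonarToCameraNet()
    dummy_sonar = torch.randn(1, 1, 128, 128)   # one fake 128x128 sonar frame
    camera_like = model(dummy_sonar)            # -> (1, 3, 128, 128) image estimate

An encoder-decoder is used here only because it is the simplest architecture for image-to-image translation; in the project, the choice of topology depends on the sensor pairing and the use case.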

Furthermore, three use cases with a significant societal and environmental relevance, driven by concrete end-user and market needs, were selected:
- monitoring and securing professional divers during the inspection and maintenance of critical infrastructures,
- enabling autonomous underwater vehicles to operate in complex underwater structures such as coral reefs, and
- supporting the high-resolution mapping of the marine sea-floor, including the precise classification of sediments and life forms.

The DeeperSense project team is organized in three groups in Germany, Israel, and Spain. Each group comprises technology providers and end-users, and each group tackles one of
the use-cases described above. However, the groups share know-how and, even more importantly, training data gathered in numerous lab and field campaigns. A technical infrastructure
for data and knowledge sharing has been implemented, which will, at least in part, be opened to the public in the final months of the project.

The DeeperSense algorithms will be tested and verified in real-world environments in the three participating member states. This includes tests in lakes in Germany, in both the eastern and the western Mediterranean, and in the Red Sea.
A final demonstration is planned to be organized in Lake Starnberg in Germany.
As a first step, the technology-provider / end-user sub-teams in the 3 participating countries (Germany, Spain, Israel) gathered detailed user-requirements for the 3 application use-cases in DeeperSense, i.e. “Diver Monitoring”, “AUV Navigation in Coral Reefs”, and “Seabed Mapping & Classification”.
These requirements were then used to select the tools needed to implement the DeeperSense concept, i.e. meaningful pairings of sensor modalities and artificial neural network (ANN) topologies with the potential to enable the envisioned ML and DL solutions.
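By way of illustration only (the concrete pairings selected in the project are not listed in this summary), such a selection could be captured in a simple configuration that maps each use case to candidate sensor modalities and a hypothetical network family:

    # Hypothetical illustration; the sensor and topology names below are assumptions,
    # not the pairings actually chosen in DeeperSense.
    USE_CASE_PAIRINGS = {
        "Diver Monitoring": {"sensors": ("sonar", "camera"), "ann_topology": "encoder-decoder"},
        "AUV Navigation in Coral Reefs": {"sensors": ("camera", "sonar"), "ann_topology": "convolutional classifier"},
        "Seabed Mapping & Classification": {"sensors": ("sonar", "camera"), "ann_topology": "segmentation network"},
    }

    for use_case, pairing in USE_CASE_PAIRINGS.items():
        print(f"{use_case}: {' + '.join(pairing['sensors'])} -> {pairing['ann_topology']}")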

Three algorithms, dubbed “SONAVision”, “EagleEye”, and “SmartSeabottomScan”, were identified to address the 3 use-cases, and the respective ANNs were implemented and trained.
The training data were derived in part from legacy data sources and simulation tools, but mainly from extended data-collection campaigns in the lab and at various field locations.
In support of the data acquisition campaigns, several tools for data collection had to be modified and new tools had to be conceptualized and built.
By the end of the reporting period, preliminary versions of the 3 algorithms had been implemented and trained, with promising first results.
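To make the training step concrete, the following minimal sketch (assuming PyTorch; the stand-in data, model, and pixel-wise loss are illustrative assumptions, not the project's actual pipeline) shows how an ANN could be fitted to co-registered sonar/camera pairs such as those collected in the campaigns:

    # Minimal training sketch; the real DeeperSense data pipeline, losses, and
    # models are not reproduced here.
    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader, TensorDataset

    # Stand-in for co-registered field data: 16 paired sonar (1-channel) and
    # camera (3-channel) frames of size 64x64.
    sonar = torch.randn(16, 1, 64, 64)
    camera = torch.rand(16, 3, 64, 64)
    loader = DataLoader(TensorDataset(sonar, camera), batch_size=4, shuffle=True)

    # Tiny stand-in network mapping a sonar frame to a camera-like image.
    model = nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid(),
    )
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()   # simple pixel-wise reconstruction loss

    for epoch in range(5):
        for sonar_batch, camera_batch in loader:
            prediction = model(sonar_batch)
            loss = loss_fn(prediction, camera_batch)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

In practice, the random tensors would be replaced by the co-registered sonar and camera frames gathered in the lab and field campaigns.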
DeeperSense is expected to go well beyond the current state of the art in robotic perception in sub-sea applications by using AI and ML methods to enable “inter-sensoric learning” and a smart combination of data from different (mainly visual and acoustic) sensor modalities.
Expected results are a set of trained and verified algorithms that can be applied to the DeeperSense use-cases. The DeeperSense application partners, i.e. THW in Germany, the Israel National Park Agency in Israel, and Tecnoambiente S.A. in Spain, will be able to
use the algorithms to directly improve their operations. This will also include the option to use the DeeperSense concept to empower low-cost sensors with AI-based software solutions.

DeeperSense is focussed on applications in the maritime and underwater domains. However, the DeeperSense concept for enhanced environment perception can easily be generalized to other application domains, as the algorithms can be adapted and re-trained for other sensor modalities and application use cases.
Thus, the DeeperSense methodology is expected to become the basis for further R&D projects and industry-driven product development in maritime as well as terrestrial and space applications.

DeeperSense is committed to an open-data and open-code policy. Thus selections of both the algorithms and the training data collected in the project will be made publicly available.
For this purpose, we will set up online repositories that are linked to and embedded in the respective European research infrastructures.

This will support the communication and dissemination strategy of DeeperSense, which has the objective of supporting the European robotics and AI communities and thus strengthening European science and technology.