Periodic Reporting for period 1 - iSEAu (Intelligent Scene Sensing and Analysis in Underwater Environments)
Reporting period: 2022-03-01 to 2024-02-29
The main objective of iSEAu is to advance scene sensing and perception capabilities in underwater environments by employing state-of-the-art machine learning methods, and by combining conventional (RGB) cameras with non-conventional ones, such as multispectral and single-photon cameras (SPCs).
iSEAu aims to bring together the fields of computer vision, machine learning and remote sensing to optimally address the challenges of underwater visual sensing. The project objectives address this challenge at two levels. The first concerns the development of learning-based methods for reducing the geometric and radiometric distortions introduced by the water, and of methods based on transient imaging for perception in challenging visibility conditions. The second level concerns the adaptation and enhancement of state-of-the-art methods for image-based extraction of structural and semantic information to the underwater domain, and their field-testing in representative application scenarios.
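As an illustration of the radiometric correction problem described above, a common starting point in the literature (an assumption here, not a method stated in this report) is the simplified underwater image formation model I = J·t + B·(1 − t), where J is the scene radiance, t the per-channel transmission, and B the backscattered veiling light. A minimal NumPy sketch inverting this model with known t and B:

```python
import numpy as np

def restore(observed, transmission, backscatter, t_min=0.1):
    """Invert the simplified underwater image formation model
    I = J * t + B * (1 - t) to recover the scene radiance J.
    observed:     HxWx3 image in [0, 1]
    transmission: per-channel transmission t in (0, 1], shape (3,)
    backscatter:  per-channel veiling light B, shape (3,)
    """
    t = np.maximum(transmission, t_min)           # avoid division blow-up
    J = (observed - backscatter * (1.0 - t)) / t  # invert the model
    return np.clip(J, 0.0, 1.0)

# Synthetic check: degrade a known scene, then restore it.
rng = np.random.default_rng(0)
J_true = rng.random((4, 4, 3))
t = np.array([0.4, 0.7, 0.9])   # red attenuates fastest underwater
B = np.array([0.05, 0.3, 0.5])  # blue-green veiling light
I_obs = J_true * t + B * (1.0 - t)
J_rec = restore(I_obs, t, B)
```

Learning-based approaches, as pursued in the project, essentially estimate quantities like t and B (or the restored image directly) from data rather than assuming them known.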
Leveraging these advancements, iSEAu focuses on underwater scene analysis. It develops deep feature learning and 3D vision adaptations for robust geometric reconstruction, and adapts semantic segmentation methods, with a focus on few-shot, weakly supervised, and self-supervised learning, to extract detailed semantic information such as seabed composition and to support marine life monitoring. Finally, the project explores neural rendering for immersive 3D scene representation, incorporating multispectral data and addressing data scarcity, thereby advancing computer vision applications in underwater environments.
Overall, the results of the iSEAu project confirm the feasibility and effectiveness of AI-driven underwater sensing technologies. The project has demonstrated significant improvements in underwater imaging quality and automatic underwater scene analysis, offering a robust framework for future advances in oceanographic research, while opening the way for potential business exploitation of its results. Ultimately, iSEAu delivered advances in underwater technology towards improved marine research, industrial applications, and environmental monitoring.
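The few-shot segmentation direction mentioned above is often built on prototype matching (an illustrative choice here, not necessarily the project's exact method): class prototypes are averaged from labelled support-pixel features, and each query pixel is assigned the most similar prototype by cosine similarity. A minimal sketch:

```python
import numpy as np

def prototype_segment(support_feats, support_mask, query_feats):
    """Few-shot segmentation by prototype matching (illustrative sketch).
    support_feats: HxWxC features of the labelled support image
    support_mask:  HxW integer class labels for the support image
    query_feats:   HxWxC features of the unlabelled query image
    Returns an HxW label map for the query image.
    """
    C = support_feats.shape[-1]
    classes = np.unique(support_mask)
    # One prototype per class: mean feature over that class's support pixels.
    protos = np.stack([support_feats[support_mask == c].mean(axis=0)
                       for c in classes])                       # (K, C)
    protos /= np.linalg.norm(protos, axis=1, keepdims=True) + 1e-8
    q = query_feats.reshape(-1, C)
    q = q / (np.linalg.norm(q, axis=1, keepdims=True) + 1e-8)
    sim = q @ protos.T                                          # cosine similarity
    return classes[sim.argmax(axis=1)].reshape(query_feats.shape[:2])

# Toy example: two well-separated "feature" clusters.
support_feats = np.zeros((2, 2, 3))
support_feats[0] = [1.0, 0.0, 0.0]   # class 0 pixels
support_feats[1] = [0.0, 1.0, 0.0]   # class 1 pixels
support_mask = np.array([[0, 0], [1, 1]])
query_feats = np.array([[[0.9, 0.1, 0.0], [0.1, 0.9, 0.0]]])
labels = prototype_segment(support_feats, support_mask, query_feats)
```

The appeal for underwater work is that only a handful of annotated support pixels per class is needed, which matches the scarcity of labelled underwater imagery noted in the report.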