CORDIS - EU research results

Dataset and dehazing methods for non-homogeneous and dense hazy scenes

Periodic Reporting for period 1 - NH-DEHAZE (Dataset and dehazing methods for non-homogeneous and dense hazy scenes)

Reporting period: 2020-10-01 to 2022-09-30

The main objective of this project is to design an image dehazing framework, and also image interpretation frameworks, that are robust to haze, including the challenging cases where the sources of light and impairment are non-uniformly distributed over the scene.

In the presence of haze, small floating particles absorb light and scatter it away from its propagation direction. The problem worsens under poor illumination conditions, as encountered at night, where artificial lighting becomes non-uniform and biased in its spectral distribution. Besides, most computer vision and image processing algorithms (e.g. from feature extraction to object/scene detection and recognition) usually assume that the input image is the scene radiance (the haze-free image), and therefore suffer strongly from the colour shift and low contrast induced by hazy conditions.
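The degradation described above is commonly modelled by the atmospheric scattering equation, I(x) = J(x)·t(x) + A·(1 − t(x)), where J is the scene radiance, t the transmission and A the atmospheric light. A minimal sketch of the model and its inversion (function names and values are illustrative, not the project's code):

```python
import numpy as np

def apply_haze(J, t, A):
    """Atmospheric scattering model: I = J*t + A*(1 - t).
    J: clean radiance (H,W,3) in [0,1]; t: transmission (H,W,1); A: airlight (3,)."""
    return J * t + A * (1.0 - t)

def dehaze(I, t, A, t_min=0.1):
    """Invert the model: J = (I - A) / max(t, t_min) + A.
    The t_min floor avoids amplifying noise where the haze is densest."""
    return (I - A) / np.maximum(t, t_min) + A

# Example with non-uniform transmission (denser haze towards the right)
rng = np.random.default_rng(0)
J = rng.random((4, 4, 3))
t = np.linspace(0.9, 0.3, 4).reshape(1, 4, 1) * np.ones((4, 1, 1))
A = np.array([0.8, 0.8, 0.85])

I = apply_haze(J, t, A)
J_rec = dehaze(I, t, A)   # exact recovery here, since t stays above t_min
```

In real non-homogeneous scenes neither t(x) nor A is known, which is precisely what makes the learned approaches below necessary.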

The overall objectives of the project are:
O1. Real hazy/dehazed image data-base, and optical model design
O2. Deep-learning image dehazing
O3. Deep-learning image interpretation
For the first objective we introduced two image dehazing datasets: NH-Haze2 and DNH-HAZE. NH-Haze2 (Non-Homogeneous Dehazing Dataset) consists of 35 hazy images and their corresponding ground-truth (haze-free) images of the same scenes. NH-Haze2 contains real outdoor scenes with non-homogeneous haze generated using a professional haze setup. For recording the images we used remotely controlled Sony A7 III cameras. The dataset allows us to investigate the contribution of the haze to scene visibility by analysing the radiance of scene objects from the camera proximity up to a maximum distance of 20-30 m. DNH-HAZE (Dense and Non-Homogeneous Dehazing Dataset) is an extension of the previous dataset and comprises 50 hazy images along with their corresponding ground-truth (haze-free) images depicting the same scenes.
We also introduced an image prior to improve visibility in images affected by haze (image dehazing).

For the second objective, inspired by the recent results published in our challenge reports, as well as the methods analysed in the previous phase, we developed a novel CNN-based deep learning method composed of several modules detailed in the final report. Moreover, in order to assess the adequacy of deriving a dehazing loss function from some of those metrics, two dehazing network architectures were selected: AOD-Net, an important milestone in the use of the dark channel prior (DCP) that successfully simplified the atmospheric scattering equation, and UVM-Net, which utilizes the Selective State Space Models implemented in the popular Mamba module, with performance similar to, and in some cases exceeding, that of transformers in speed and memory across a wide variety of tasks.
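AOD-Net's simplification mentioned above folds the transmission and atmospheric light into a single unified variable K(x), so that J(x) = K(x)·I(x) − K(x) + b. A short sketch of the algebra (in the actual network K is predicted by a lightweight CNN; the analytic K below only serves to verify that the reformulation is equivalent to the classic inversion):

```python
import numpy as np

def classic_dehaze(I, t, A):
    # Standard inversion of the atmospheric scattering model: J = (I - A)/t + A
    return (I - A) / t + A

def aod_K(I, t, A, b=1.0):
    # AOD-Net's unified variable: K(x) = ((I - A)/t + (A - b)) / (I - 1)
    return ((I - A) / t + (A - b)) / (I - 1.0)

def aod_dehaze(I, K, b=1.0):
    # AOD-Net output formulation: J(x) = K(x)*I(x) - K(x) + b
    return K * I - K + b

# Scalar per-pixel example (values chosen so that I != 1, keeping K finite)
I = np.array([0.6, 0.4, 0.7])
t = np.array([0.5, 0.3, 0.8])
A = 0.9
K = aod_K(I, t, A)
J = aod_dehaze(I, K)   # identical to classic_dehaze(I, t, A)
```

Substituting K back in, K·(I − 1) + b = (I − A)/t + A, so learning K alone jointly estimates t and A, which is what makes AOD-Net so compact.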

For the last objective we first analysed the impact of image dehazing on local image feature matching. Image dehazing enhances the reliability of local feature matching, which is critical for vision tasks like detection and recognition. Haze reduces contrast and detail, impairing algorithms such as SIFT. In our study, we used SIFT to evaluate dehazing performance on the NH-HAZE2 dataset by measuring the number of correct feature matches between dehazed and ground-truth images. This metric reflects structural similarity and dehazing quality. Across five test image pairs, our deep learning-based method consistently ranked second among the evaluated techniques.

Moreover, we conducted a comprehensive analysis and interpretation of hazy satellite images, introducing a novel deep learning model. Our method addresses atmospheric degradation in satellite imagery by treating image dehazing as a modality translation task. Instead of modifying interpretation models to handle hazy inputs, we use an image-to-image CNN to convert degraded images into clean-like representations compatible with existing models. This task-aware approach leverages interpretation accuracy as a supervision signal, aligning restoration with semantic understanding. The result is a robust framework that effectively bridges the hazy and clean domains, improving both visual quality and downstream model compatibility.
We introduced two novel datasets for image dehazing. The NH-Haze2 and DNH-HAZE datasets have been used in the NTIRE CVPR image dehazing challenges to gauge the state of the art in image dehazing.
Regarding our new image dehazing prior, the qualitative and quantitative evaluations demonstrate that our approach yields better results than previous physically based image dehazing techniques, and compares favourably with deep learning dehazing approaches.


Moreover, building on the recent results presented in our NTIRE challenge reports and the methods explored in the previous phase, we designed a novel deep learning approach based on Convolutional Neural Networks (CNNs) for image dehazing. We conducted a comprehensive study to evaluate the effectiveness of various loss functions for image dehazing. We selected two representative models with distinct architectures and design philosophies: AOD-Net and UVM-Net.
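As an illustration of what such a loss-function study compares, a composite dehazing objective often balances pixel fidelity against structural similarity. The sketch below uses a simplified single-window SSIM for brevity and is a hypothetical example, not either network's actual training objective:

```python
import numpy as np

def l1_loss(pred, gt):
    # Mean absolute error: rewards per-pixel fidelity
    return np.abs(pred - gt).mean()

def global_ssim(pred, gt, c1=0.01**2, c2=0.03**2):
    # Simplified SSIM computed over the whole image as one window
    # (practical SSIM averages local Gaussian-windowed statistics).
    mu_p, mu_g = pred.mean(), gt.mean()
    var_p, var_g = pred.var(), gt.var()
    cov = ((pred - mu_p) * (gt - mu_g)).mean()
    return ((2 * mu_p * mu_g + c1) * (2 * cov + c2)) / \
           ((mu_p**2 + mu_g**2 + c1) * (var_p + var_g + c2))

def dehazing_loss(pred, gt, alpha=0.5):
    # Hypothetical composite objective: pixel fidelity + structural term
    return alpha * l1_loss(pred, gt) + (1 - alpha) * (1 - global_ssim(pred, gt))

gt = np.random.default_rng(1).random((32, 32))
```

The weighting `alpha` trades off sharp per-pixel accuracy against perceptually motivated structure preservation, which is exactly the kind of choice such a comparison is designed to settle.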

We present a study on evaluating image dehazing methods through feature-level analysis using the SIFT operator, motivated by the premise that dehazing enhances local feature visibility and consistency—key for improving CNN robustness in real-world hazy conditions. Using the NH-HAZE2 dataset, we assessed several dehazing techniques, including our CNN-based approach, by measuring the number of correct SIFT feature matches between dehazed outputs and corresponding ground truth images.
We propose a multimodal learning approach to image dehazing, framing the task as a modality translation problem that transforms hazy inputs into representations interpretable by models trained on clean data. Rather than adapting interpretation models to degraded inputs, our method uses an image-to-image CNN supervised by interpretation accuracy to align semantic understanding with visual restoration.


This project will have a significant long-term impact on both the researcher’s career and the academic ecosystem. By engaging with cutting-edge research in deep learning and inverse problems at a leading host institution, the researcher has gained advanced scientific knowledge and interdisciplinary expertise that perfectly complement his academic profile. Exposure to ongoing high-level projects not only deepens his technical competence but also enhances his understanding of how different research domains interconnect. Upon returning to UPT for a full-time tenure-track position, the researcher will be well-positioned to introduce novel research directions, foster international collaborations, and enrich the curriculum with state-of-the-art content, thereby contributing to the strategic development of his home institution in the field of AI and image processing.

The innovation capacity of the host institution was significantly enhanced through the multifaceted achievements of this project, which advanced both foundational research and applied methodologies in image dehazing. For instance, the development and public release of two high-quality image dehazing datasets have positioned the host as a key contributor to global benchmarking efforts, with both datasets now integrated into the NTIRE CVPR challenges. These resources not only support the broader research community but also strengthen the institution's visibility in the computer vision community.
An example of image pairs from the DNH-HAZE dataset