Periodic Reporting for period 1 - SAMPDE (Sample complexity for inverse problems in PDE)
Reporting period: 2022-11-01 to 2025-04-30
Current mathematical theories often assume an infinite number of measurements, which contrasts sharply with real-world scenarios where only a finite—and often small—set of measurements is available. This gap is significant, as the limited number of measurements directly affects decisions about how data is collected, what assumptions (priors) are made about the unknown quantity, and the reconstruction methods used. Many promising imaging techniques remain underutilized because their reconstructions suffer from poor quality.
To address this, the project will integrate methods from PDE theory, numerical analysis, signal processing, compressed sensing, and machine learning. By combining these disciplines, it aims to construct a new theory of sample complexity tailored to PDE-based inverse problems. This work will provide a mathematical foundation for inverse problems that better reflects practical constraints, guiding the selection of measurements, priors, and reconstruction algorithms. Ultimately, this research will enhance the feasibility and effectiveness of emerging imaging technologies, bringing them closer to practical application.
In addition, the project is expected to generate novel results in compressed sensing, extending its applicability to a wider range of problems, including nonlinear and ill-posed scenarios.
A key aspect of the investigation is the interplay between the intrinsic infinite dimensionality of the models under consideration, and of the signals to be reconstructed, and the fact that only finitely many measurements are available. The resulting truncation is especially delicate when the problem is ill-posed and has to be handled with care.
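Schematically, the setting can be written as follows; the notation is ours and serves only as an illustration, not as the precise formulation used in the project. The unknown x belongs to an infinite-dimensional function space, F denotes the (possibly nonlinear, ill-posed) PDE forward map, and only m scalar measurements are collected:
\[
  y_i \;=\; \langle \varphi_i,\, F(x) \rangle + \varepsilon_i, \qquad i = 1, \dots, m .
\]
Sample complexity then asks how large m must be, and how the measurement functionals \(\varphi_i\) should be chosen, so that x can be stably recovered under a given prior.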
While compressed sensing techniques are based on deterministic prior knowledge about the signals to be reconstructed, machine learning methods are data-driven and have become very popular in recent years. In this project, we have obtained theoretical results on learning optimal regularizers for inverse problems, on designing generative models in function spaces by constructing neural networks as continuous convolution operators, and on manifold learning for manifolds with non-trivial topologies.
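To fix ideas, the following minimal sketch shows a variational reconstruction with a learned regularizer on a discretized problem; the linear forward operator A, the quadratic form of the regularizer, and all variable names are illustrative assumptions, not the project's actual construction.

# Illustrative sketch: model-based data fidelity plus a learned quadratic
# regularizer R_theta(x) = ||L_theta x||^2 (a trained operator would replace
# the identity used below).
import numpy as np

def reconstruct(A, y, L_theta, lam=1e-2):
    """Solve min_x ||A x - y||^2 + lam ||L_theta x||^2 via the normal equations."""
    lhs = A.T @ A + lam * L_theta.T @ L_theta
    rhs = A.T @ y
    return np.linalg.solve(lhs, rhs)

# Toy usage with random data (purely illustrative):
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))         # underdetermined forward operator
x_true = rng.standard_normal(50)
y = A @ x_true + 0.01 * rng.standard_normal(20)
L_theta = np.eye(50)                      # placeholder for a learned operator
x_hat = reconstruct(A, y, L_theta)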
These theoretical insights are complemented by several numerical results. Their main common outcome is that methods combining a model-based approach with machine learning are superior, since they make it possible to use priors that are well adapted to the class of signals under consideration.
The work on continuous generative neural networks provides the first architecture for a generative model in function spaces. The current construction is based on a multiresolution analysis and is not ideal for capturing and generating discontinuous signals. We are now investigating an alternative approach based on pseudo-differential operators in order to overcome this issue.
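The limitation can be illustrated with a toy example: the sketch below maps a low-dimensional latent vector to a signal through a wavelet (multiresolution) synthesis, mimicking the idea of a generator built on an MRA. The use of PyWavelets, the function names, and the parameter choices are all assumptions made for illustration; they are not the project's architecture. Because only coarse-scale coefficients are driven by the latent variables, the output is smooth by construction, which is why sharp discontinuities are hard to generate.

# Toy MRA-style generator (illustrative only, not the project's architecture).
import numpy as np
import pywt

def mra_generator(z, n=256, wavelet="db2", levels=4):
    """Map a latent vector z to a signal by filling coarse wavelet coefficients."""
    # Obtain the coefficient layout for a signal of length n.
    template = pywt.wavedec(np.zeros(n), wavelet, level=levels)
    coeffs = [np.zeros_like(c) for c in template]
    k = min(len(z), coeffs[0].size)
    coeffs[0][:k] = z[:k]                 # latent variables drive the coarsest scale
    return pywt.waverec(coeffs, wavelet)  # smooth output: jumps are hard to produce

z = np.random.default_rng(0).standard_normal(8)
signal = mra_generator(z)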