## Final Report Summary - GESIDICS (Generalized Sampling and Infinite-Dimensional Compressed Sensing)

Applied harmonic analysis is a rapidly growing area of mathematics, owing to its vast applications in engineering and physics. Arguably one of the most important applications is medical imaging, such as Magnetic Resonance Imaging (MRI) or X-ray Computed Tomography (CT) (although these techniques are also widely used in archaeology, biology, geophysics, oceanography, materials science, astrophysics, chemical engineering, etc.). The mathematical core of MRI is sampling theory, which can be described as follows. An object of interest is hidden behind some physical obstacle; it could, for example, be a human brain, hidden inside a person's skull. To be able to 'see' the brain without opening the skull, one uses a physical device (the MRI machine) that takes certain measurements, or samples, of the object. These samples are then decoded mathematically to eventually yield an image of the brain. How the object is sampled and decoded into an image is part of what one refers to as sampling theory.

The way an MRI machine works is surprisingly mathematical. The object of interest (the image), say the brain, can be modeled as a mathematical function: at every point of the image there is a number describing the color density (this is exactly a mathematical function), and if one knows this function, one knows the image completely. The difficulty is that an MRI machine does not measure or sample the function itself, but rather a transformation of it, namely its Fourier transform (this is due to the physics behind the machine). The task is therefore to reconstruct the function from this transformation. By fundamental results in sampling theory, it is known that if one were able to take an infinite number of measurements, one would obtain a perfect reconstruction of the function. This is of course impossible in practice, so the task is the following: given a finite number of samples, find the best possible reconstruction. This task is the very core of sampling theory.
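The principle described above — perfect recovery from infinitely many samples, approximation from finitely many — can be sketched in a few lines. The following Python snippet is an illustrative toy, not code from the project: a smooth periodic function stands in for the 'image', its Fourier samples are approximated by a Riemann sum, and the truncated Fourier series plays the role of the reconstruction. The error shrinks rapidly as the number of samples grows.

```python
import numpy as np

def fourier_samples(f, n_max, grid=4096):
    """Approximate the Fourier samples c_n = ∫_0^1 f(x) e^{-2πinx} dx,
    |n| <= n_max, by a simple Riemann sum (the 'measurements')."""
    x = np.linspace(0, 1, grid, endpoint=False)
    fx = f(x)
    ns = np.arange(-n_max, n_max + 1)
    return ns, np.array([np.mean(fx * np.exp(-2j * np.pi * n * x)) for n in ns])

def reconstruct(ns, cs, x):
    """Truncated Fourier series built from the finitely many samples."""
    return sum(c * np.exp(2j * np.pi * n * x) for n, c in zip(ns, cs)).real

f = lambda x: np.exp(np.sin(2 * np.pi * x))   # a smooth, periodic test 'image'
x = np.linspace(0, 1, 500, endpoint=False)
err = {}
for n_max in (4, 16):
    ns, cs = fourier_samples(f, n_max)
    err[n_max] = np.max(np.abs(reconstruct(ns, cs, x) - f(x)))
print(err[4], err[16])   # more samples -> (much) better reconstruction
```

Because the test function is smooth and periodic, the reconstruction error decays extremely fast in the number of samples; for rough or discontinuous images the decay is far slower, which is where more sophisticated reconstruction schemes become essential.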

This problem is not specifically limited to MRI (nor is this project). MRI is just one application of sampling theory; however, it is well suited to explaining the core idea. Sampling theory is used in countless other applications, including sound recording, radar surveillance, and telecommunications.

The purpose of this project is to use new ideas from functional analysis in applied harmonic analysis and, in particular, sampling theory.

In a recent paper, a long-standing open question in spectral theory (a branch of functional analysis) was finally solved. This was a fundamental problem that had been open for decades.

The purpose of this project is to use the techniques from this paper to develop a new type of sampling theory that extends the current methodology. We refer to this approach as generalised sampling. The new theory suggests that one may obtain dramatically better reconstructions of objects from their sampled values than those currently in use. In particular, generalised sampling is a theory of signal reconstruction without restrictions on the sampling space or the reconstruction space. An important approach to this problem has been the consistent sampling framework introduced by Aldroubi and Unser and later extended by Eldar. However, there are important cases in which this framework is known to be numerically unstable and non-convergent. The work in this project solves these problems: by introducing the notion of a stable sampling rate, we demonstrate how to construct a numerically stable and convergent scheme. Furthermore, the first results of this project show that, under certain assumptions, this framework is an optimal stable reconstruction scheme.
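The generalised sampling idea — sampling in one space, reconstructing in another — admits a minimal numerical sketch. The code below is illustrative only; the function names and the choice of a piecewise-constant reconstruction space are assumptions for the demonstration, not taken from the project. Given finitely many Fourier samples of a step function, a least-squares fit in a piecewise-constant reconstruction space recovers the coefficients essentially exactly, while the naive truncated Fourier series from the very same samples suffers an O(1) Gibbs artefact near the jump.

```python
import numpy as np

N, m = 16, 32   # 16 reconstruction vectors, 2m+1 = 65 Fourier samples

def gram_row(k, N):
    """Inner products <phi_j, e_k> of the normalised indicators
    phi_j = sqrt(N)*1_[j/N,(j+1)/N) with e_k(x) = exp(2*pi*1j*k*x) on [0,1)."""
    j = np.arange(N)
    a, b = j / N, (j + 1) / N
    if k == 0:
        return (np.sqrt(N) * (b - a)).astype(complex)
    return np.sqrt(N) * (np.exp(-2j*np.pi*k*b) - np.exp(-2j*np.pi*k*a)) / (-2j*np.pi*k)

ks = np.arange(-m, m + 1)
A = np.array([gram_row(k, N) for k in ks])       # (2m+1) x N section matrix

# A step function lying in the reconstruction space: f = 1 on [0, 1/2).
c_true = np.zeros(N); c_true[:N // 2] = 1 / np.sqrt(N)
y = A @ c_true                                    # its exact Fourier samples

# Generalised sampling: least-squares fit in the reconstruction space.
c_gs, *_ = np.linalg.lstsq(A, y, rcond=None)

# Naive alternative: truncated Fourier series from the same samples.
x = np.linspace(0, 1, 1000, endpoint=False)
S = sum(yk * np.exp(2j * np.pi * k * x) for k, yk in zip(ks, y)).real
f_x = (x < 0.5).astype(float)

print(np.max(np.abs(c_gs - c_true)))   # essentially machine precision
print(np.max(np.abs(S - f_x)))         # O(1) Gibbs error near the jump
```

The point of the sketch is that both reconstructions use exactly the same measurements; only the reconstruction space differs, and choosing it well (with sufficiently many samples relative to its dimension) is what the stable sampling rate quantifies.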

A key focus of this project is the recovery of wavelet coefficients of compactly supported functions from their Fourier samples. Via an analysis of the stable sampling rate, we show that the number of wavelet coefficients that may be accurately and stably recovered grows linearly with the number of Fourier samples. Consequently, up to a constant, acquiring Fourier samples has the same effect as acquiring wavelet samples. Furthermore, the scheme presented is in some sense optimal: any attempt to improve the ratio between the number of samples and the number of reconstruction vectors results in exponential instability.
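The linear-growth claim can be probed numerically. In this hedged sketch (illustrative code, not from the project), the reconstruction space is spanned by N Haar scaling functions and the number of Fourier samples is taken proportional to N; the condition number of the resulting section matrix stays uniformly bounded as N grows, which is the hallmark of a linear stable sampling rate.

```python
import numpy as np

def section_matrix(m, N):
    """A[k, j] = <phi_j, e_k>: Fourier samples (|k| <= m) of the N
    normalised indicators phi_j = sqrt(N)*1_[j/N,(j+1)/N), i.e. the
    Haar scaling functions at scale N on [0,1)."""
    ks = np.arange(-m, m + 1)
    j = np.arange(N)
    a, b = j / N, (j + 1) / N
    A = np.empty((len(ks), N), dtype=complex)
    for r, k in enumerate(ks):
        if k == 0:
            A[r] = np.sqrt(N) / N
        else:
            A[r] = np.sqrt(N) * (np.exp(-2j*np.pi*k*b)
                                 - np.exp(-2j*np.pi*k*a)) / (-2j*np.pi*k)
    return A

# Number of Fourier samples grows linearly with the number of
# reconstruction vectors (here m = N): conditioning stays bounded.
conds = {N: np.linalg.cond(section_matrix(N, N)) for N in (8, 16, 32, 64)}
print(conds)
```

A bounded condition number across all N is exactly what stable, accurate recovery requires; for reconstruction spaces with a worse stable sampling rate (e.g. polynomial spaces), the same experiment with linear oversampling would show the condition numbers blowing up.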

REMARK

The project started as scheduled on April 1st, 2012, but soon afterwards (April 26th) the candidate was awarded a Royal Society University Research Fellowship, which required him to return to Cambridge on October 1st (according to the rules of that fellowship, the candidate has to start within a relatively short time of the award). The candidate (A. C. Hansen) therefore tried to make the best use of the remaining time, i.e. May to September in Vienna. In this period he worked on one to two of the core topics of the project (as described above), but also began preparing for the subsequent Royal Society Fellowship in Cambridge, which required a couple of trips to Cambridge in April and June.

During his time at NuHAG he interacted with several members of the host group, having discussions on problems in sampling theory, wavelet theory and time-frequency analysis. On May 5th he delivered a seminar talk at NuHAG entitled 'Can everything be computed?'.
