CORDIS - EU research results

Majoration-Minimization algorithms for Image Processing

Periodic Reporting for period 2 - MAJORIS (Majoration-Minimization algorithms for Image Processing)

Reporting period: 2021-07-01 to 2022-12-31

Constantly improving signal acquisition devices, from spectrometers to medical imaging machines, require working with increasingly large and complex data. From a mathematical perspective, this implies the need to solve large-scale optimization problems involving many variables. The MAJORIS project aims to propose a new generation of optimization algorithms, relying on the key concept of majorization-minimization (MM), that remain efficient in the context of “big data” processing, by tackling several challenging questions regarding algorithm design (acceleration strategies), convergence analysis (non-convex costs, inexact schemes), and practical implementation (use of massively parallel and/or distributed architectures). The methodological outcomes of MAJORIS are expected to yield a set of novel methods for the efficient processing of large amounts of data and variables.
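To fix ideas, the MM principle replaces a difficult cost with a sequence of easier majorizing surrogates, each touching the cost at the current iterate. The following minimal Python sketch (illustrative only, not project code) applies MM to the one-dimensional penalized least-squares cost f(x) = 0.5*(x - a)^2 + lam*|x|, using the classical quadratic majorant of the absolute value:

```python
def mm_lasso_1d(a, lam, x0=1.0, iters=200):
    """Majorization-Minimization for f(x) = 0.5*(x - a)**2 + lam*|x|.

    At an iterate x_k != 0, |x| is majorized by the quadratic
    x**2 / (2*|x_k|) + |x_k| / 2, which touches |x| at x_k.
    Minimizing the resulting quadratic surrogate has a closed form,
    so each MM step is cheap and monotonically decreases f.
    (The majorant breaks down at x_k = 0; with x0 != 0 and a != 0
    the iterates stay away from 0 in this toy example.)
    """
    x = x0
    for _ in range(iters):
        # surrogate minimizer: argmin_x 0.5*(x - a)**2 + lam*x**2/(2*|x_k|)
        x = a * abs(x) / (abs(x) + lam)
    return x

# The MM iterates approach the known soft-thresholding minimizer
# sign(a) * max(|a| - lam, 0) of f.
print(mm_lasso_1d(a=3.0, lam=1.0))  # close to 2.0
```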
Work-Package 1: Accelerated MM approaches
Modern 3D image recovery problems require powerful optimization frameworks to handle high dimensionality while providing reliable numerical solutions in a reasonable time. In this context, asynchronous parallel optimization algorithms have received increasing attention, as they overcome memory limitations and communication bottlenecks. In [R1], we proposed a block distributed MM algorithm for solving large-scale non-convex differentiable optimization problems. Assuming a distributed memory environment, the algorithm splits the efficient MM memory gradient scheme into lower-dimensional subproblems, where blocks of variables are addressed in an asynchronous manner. Convergence results are established under mild assumptions. Numerical experiments demonstrate the strong scalability potential of our method.
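The block splitting idea can be conveyed by a simple synchronous stand-in (the scheme of [R1] is asynchronous and distributed; this toy Python sketch only illustrates the per-block majorant updates on a small least-squares problem):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 40))
y = rng.standard_normal(60)

def block_mm_least_squares(A, y, n_blocks=4, iters=1000):
    """Block-alternating MM sketch for min_x 0.5*||A x - y||^2.

    Each block j is updated by minimizing a quadratic majorant of f
    restricted to that block: a gradient step with step size 1/L_j,
    where L_j = ||A_j||^2 bounds the block Lipschitz constant.
    """
    n = A.shape[1]
    blocks = np.array_split(np.arange(n), n_blocks)
    L = [np.linalg.norm(A[:, b], 2) ** 2 for b in blocks]
    x = np.zeros(n)
    for _ in range(iters):
        for b, Lj in zip(blocks, L):
            grad_b = A[:, b].T @ (A @ x - y)   # block gradient
            x[b] -= grad_b / Lj                # block majorant minimizer
    return x

x = block_mm_least_squares(A, y)
x_star, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.linalg.norm(x - x_star))  # small residual
```

In the actual distributed setting, each node would own one block and apply such updates without waiting for the others.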

Work-Package 2: Robust MM approaches

The proximal gradient algorithm is a popular iterative algorithm for penalized least-squares minimization problems that falls within the MM framework. Its simplicity and versatility allow one to embed nonsmooth penalties efficiently. In the context of inverse problems arising in image processing, a major concern lies in the computational burden of the optimization solvers. For instance, in tomographic image reconstruction, a bottleneck is the cost of applying the forward linear operator and its adjoint. Consequently, these operators are often approximated numerically, so that the adjoint property is no longer fulfilled. In [R2,R3], we focused on the stability properties of the proximal gradient algorithm under such an adjoint mismatch. Using tools from convex analysis and fixed point theory, we established conditions under which the algorithm can still converge to a fixed point, and we provided bounds on the error between this point and the solution to the minimization problem. We illustrated the applicability of our theoretical results on practical computed tomography problems.
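The effect studied in [R2,R3] can be illustrated on a toy problem (this Python sketch is not the papers' experiments; the perturbation size and problem dimensions are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
H = rng.standard_normal((50, 30))
y = rng.standard_normal(50)
lam = 0.5
B = H.T + 1e-3 * rng.standard_normal((30, 50))  # mismatched "adjoint"

def soft(v, t):
    """Proximity operator of t*||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_grad(H, back, y, lam, iters=2000):
    """Proximal gradient for 0.5*||H x - y||^2 + lam*||x||_1,
    where `back` stands in for the adjoint H.T, possibly mismatched."""
    step = 1.0 / np.linalg.norm(H, 2) ** 2  # 1/Lipschitz constant
    x = np.zeros(H.shape[1])
    for _ in range(iters):
        x = soft(x - step * (back @ (H @ x - y)), step * lam)
    return x

x_exact = prox_grad(H, H.T, y, lam)  # matched adjoint: true minimizer
x_mism = prox_grad(H, B, y, lam)     # mismatched adjoint: nearby fixed point
print(np.linalg.norm(x_mism - x_exact))  # small drift
```

For a small mismatch, the iteration still converges, but to a fixed point that drifts away from the true minimizer; the bounds in [R2] quantify this drift.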

Work-Package 3: Flexible MM approaches

Many data science problems can be efficiently addressed by minimizing a cost function subject to various constraints. In [R4], we proposed a new MM method for solving large-scale constrained differentiable optimization problems. To account efficiently for a wide range of constraints, our approach embeds a subspace algorithm into an exterior penalty framework. The subspace strategy, combined with a local MM step search, takes full advantage of the smoothness of the penalized cost function. The convergence of our algorithm is established. Numerical experiments carried out on large-scale image restoration applications show that the proposed method outperforms state-of-the-art algorithms.
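The exterior penalty mechanism can be sketched on a tiny projection problem (a Python illustration only; [R4] uses MM subspace iterations rather than the plain gradient steps below, and the penalty schedule here is arbitrary):

```python
import numpy as np

a = np.array([2.0, -1.0, 0.5, -3.0])

def exterior_penalty(a, rhos=(1.0, 10.0, 100.0), inner=500):
    """Exterior-penalty sketch for min 0.5*||x - a||^2 s.t. x >= 0.

    The constraint is replaced by the smooth penalty
    (rho/2)*||min(x, 0)||^2, which assigns a nonnegative cost to
    infeasible points; rho is increased gradually. Each penalized
    problem is solved by gradient steps with step 1/(1 + rho),
    the Lipschitz constant of the penalized gradient.
    """
    x = a.copy()
    for rho in rhos:
        step = 1.0 / (1.0 + rho)
        for _ in range(inner):
            grad = (x - a) + rho * np.minimum(x, 0.0)
            x = x - step * grad
    return x

# As rho grows, the iterates approach the projection max(a, 0).
print(exterior_penalty(a))
```

The growing penalty parameter makes each subproblem increasingly ill-conditioned, which is precisely the difficulty the local majorization strategy of [R4] is designed to mitigate.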

[R1] M. Chalvidal and E. Chouzenoux. Block Distributed 3MG Algorithm and its Application to 3D Image Restoration. In Proceedings of the 27th IEEE International Conference on Image Processing (ICIP 2020), Virtual Conference, October 25-28, 2020.
[R2] E. Chouzenoux, J.C. Pesquet, C. Riddell, M. Savanier and Y. Trousset. Convergence of Proximal Gradient Algorithm in the Presence of Adjoint Mismatch. Inverse Problems, vol. 37, no. 6, pp. 065009, 2021.
[R3] M. Savanier, E. Chouzenoux, J.-C. Pesquet and C. Riddell. Unmatched Preconditioning of the Proximal Gradient Algorithm. IEEE Signal Processing Letters, vol. 29, pp. 1122-1126, 2022.
[R4] E. Chouzenoux, S. Martin and J.-C. Pesquet. A Local MM Subspace Method for Solving Constrained Variational Problems in Image Recovery. To appear in Journal of Mathematical Imaging and Vision, 2022.
Progress beyond the state of the art:

Work-Package 1: Accelerated MM approaches

When the problem size becomes very large, running an MM algorithm becomes difficult due to memory limitations. Parallel implementations of MM schemes have been devised, where block updates are performed simultaneously, allowing computations to be distributed over different nodes (or machines). Implementation on a parallel architecture requires attention to communication costs, which can be reduced by resorting to an asynchronous parallel implementation. In the context of MM algorithms, although the need for distributed implementation strategies is crucial, few theoretical convergence guarantees were available so far. Our work [R1] is the first to derive convergence guarantees for a distributed version of the MM memory gradient method in the challenging non-convex setting.


Work-Package 2: Robust MM approaches

It is well established in the literature that approximations of the adjoint operator occur frequently in large-scale tomographic imaging, as practiced in industrial non-destructive testing and diagnostic medical imaging. Recently, several authors have investigated convergence conditions for specific forms of the proximal gradient algorithm in the presence of such an adjoint mismatch, and its impact on the asymptotic solution. These works focus on the case where the cost function is a least-squares term without any regularization. To the best of our knowledge, our works [R2,R3] were the first to study the effect of an adjoint mismatch on the convergence of the proximal gradient algorithm, in the case of a more generic prior associated with a nonlinear proximity operator, in infinite dimension.

Work-Package 3: Flexible MM approaches

Many challenges in image processing can be addressed by solving constrained optimization problems. These problems may be difficult to solve numerically in a reasonable time because of their high dimension and of the involved constraints. These constraints may play diverse and crucial roles, as they enforce prior knowledge about the solution. Among the methods available for large-scale settings, exterior penalty methods assign a nonnegative cost to every point that does not satisfy the constraints. Their main drawback is ill-conditioning for large values of the penalty parameter. To tackle this challenge, we consider local surrogate functions that majorize the cost function within a small domain around the current iterate, leading to a trust-region-like technique. To the best of our knowledge, [R4] is the first work that combines an MM strategy with the exterior penalty framework, and our convergence analysis for an MM algorithm involving only local majorizing surrogates is novel.


Expected results until the end of the project:

Through the MAJORIS project, we plan to explore three new paradigms that push the frontiers of traditional MM algorithms:
Objective 1: Our first objective is the development of MM algorithms capable of handling a very large number of variables (e.g. 3D images with more than 10^9 voxels) in a reasonable time (e.g. at most a few minutes for reconstructing a volume from computed tomography data), with the ability to leverage the intrinsic acceleration and flexibility provided by recent computing architectures equipped with multiple cores or graphics processing units (we expect an acceleration rate linearly proportional to the number of units).
Objective 2: Our second objective is to improve the robustness of MM algorithms. This requires theoretically establishing their reliability in real-world contexts, characterized by modeling errors, limited access to the whole dataset, and possible communication delays or failures when implemented in a distributed manner.
Objective 3: Our third objective is to extend the applicability of the MM framework to a more general class of problems than those considered so far. We will consider the minimization of more general cost functions (e.g. involving constraints, heterogeneous variable spaces, or non-convex/non-linear coupling terms). We also plan to extend the MM philosophy beyond standard optimization problems, to provide fast solutions for saddle point problems and Bayesian approximation/simulation methods.
Illustration of the result of publication [R2].