CORDIS - EU research results

Training Data-driven Experts in OPTimization

Periodic Reporting for period 1 - TraDE-OPT (Training Data-driven Experts in OPTimization)

Reporting period: 2020-06-01 to 2022-05-31

The main goal of TraDE-OPT is the education of 15 experts in optimization for data science, with a solid multidisciplinary background, able to advance the state of the art. This field is developing fast, and its reach into our lives is growing in both pervasiveness and impact. The central task in data science is to extract meaningful information from huge amounts of collected observations. Optimization is the cornerstone of most of the theoretical and algorithmic methods employed in this area. Recent results in optimization, and in related areas such as functional analysis, machine learning, and signal processing, already provide powerful tools for exploring the mathematical properties of the proposed models and for devising effective algorithms. Despite these advances, the nature of the data to be analyzed, which are "big", heterogeneous, uncertain, or partially observed, still poses challenges and opportunities for modern optimization.
The key aspect of the TraDE-OPT research is the exploitation of structure, in the data, in the model, or in the computational platform, to derive new and more efficient algorithms with guarantees on their computational performance, based on decomposition and incremental/stochastic strategies that allow parallel and distributed implementations.
Advances in these directions will yield substantial scalability benefits for the considered class of optimization methods, enabling the solution of real-world problems. To achieve this goal, we are offering an innovative training program that combines a solid technical background with employability skills: entrepreneurship, communication, and career planning. Integrated training of the fellows takes place at the host institutes and through secondments, workshops, and schools. As a result, TraDE-OPT fellows will be prepared for outstanding careers in academia or industry.
TraDE-OPT recruited 15 ESRs at 7 universities and 1 SME across 7 countries. We organized two schools and one workshop, which offered both scientific and soft-skills training. The scientific part covered convex optimization, inverse problems, machine learning, industry presentations, and an algorithmic bootcamp. The soft-skills training included entrepreneurship, writing and communication skills, CV writing, and job market preparation. Our ESRs worked and achieved results in several directions aimed at deriving new and more efficient algorithms with theoretical convergence guarantees. The results achieved so far have been collected in ~20 preprints, some of which have already been accepted or published in top journals and international conferences. TraDE-OPT fellows presented project results at ~15 conferences, workshops, and invited seminars. They have also presented their research to the wider public via public engagement activities, and have worked directly with industry (especially through secondments) to share their techniques with end users.
We worked to design, and to prove convergence of, new fast first-order algorithms that exploit problem structure, for large-scale convex and nonconvex problems. In this direction, we are exploiting the multiscale structure of images to build a multiscale approach to image denoising. We devised a new variational approach for joint image recovery and segmentation, and designed an alternating proximal-based optimization algorithm that efficiently exploits the structure of the proposed objective. In the nonconvex setting, we worked on a new CNC approach for image denoising and devised higher-order algorithms to minimize a nonsmooth aggregation of a collection of functions. We proposed solutions for nonconvex QCQP problems. To evaluate the performance of the proposed algorithms, we provided explicit interpolation formulas for smooth and strongly convex functions.
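To illustrate the kind of structure-exploiting first-order method discussed above, here is a minimal sketch of proximal gradient descent (ISTA) for an l1-regularized least-squares problem. The problem instance, step size, and stopping rule are illustrative assumptions only, not the project's actual algorithms; the point is that the nonsmooth l1 term is handled through its cheap, separable proximal operator.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1 (separable, hence very cheap)
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(A, b, lam, n_iter=500):
    """Proximal gradient for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)           # gradient of the smooth part
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Illustrative sparse recovery instance (synthetic data)
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
x_true = np.zeros(100)
x_true[:5] = 1.0
b = A @ x_true
x_hat = ista(A, b, lam=0.1)
```

Replacing the l1 norm with another regularizer only requires swapping the proximal operator, which is what makes this template so flexible.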
In the context of block-coordinate, parallel, asynchronous, and new structure-adapted splitting methods, we designed new unfolded MM approaches and implemented them on GPUs. We worked on new splitting strategies and on coordinate descent methods and their accelerated and asynchronous variants, also in the nonseparable case. We studied primal-dual schemes for composite optimization problems within the framework of abstract convexity. We proposed new Douglas-Rachford splitting algorithms with a prescribed communication structure, and derived a more general framework for proximal point algorithms.
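The appeal of coordinate descent methods mentioned above is their per-iteration cost: each step touches a single coordinate. A minimal sketch, on an assumed strongly convex quadratic (not one of the project's problems), of randomized coordinate descent with exact coordinate-wise minimization:

```python
import numpy as np

def coordinate_descent(Q, c, n_epochs=200, seed=0):
    """Randomized coordinate descent for min_x 0.5*x^T Q x - c^T x, Q SPD.

    Each step exactly minimizes over one randomly chosen coordinate,
    touching only one row of Q; this low per-iteration cost is what
    makes such methods attractive at scale.
    """
    rng = np.random.default_rng(seed)
    n = len(c)
    x = np.zeros(n)
    for _ in range(n_epochs * n):
        i = rng.integers(n)
        # Exact 1-D minimization along coordinate i
        x[i] += (c[i] - Q[i] @ x) / Q[i, i]
    return x

# Illustrative well-conditioned SPD instance
rng = np.random.default_rng(1)
M = rng.standard_normal((30, 30))
Q = M @ M.T + 30 * np.eye(30)
c = rng.standard_normal(30)
x_cd = coordinate_descent(Q, c)
```

The accelerated and asynchronous variants studied in the project refine exactly this template: acceleration changes how iterates are combined, and asynchrony lets many workers apply such single-coordinate updates concurrently.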
In the context of stochastic and incremental algorithms, we proved the reconstruction capabilities of primal-dual first-order methods with additional arbitrary (incremental) projections on some reference imaging and machine learning problems. We also worked on extending the Kaczmarz method to sparse least squares and impulsive noise.
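For context, the classical randomized Kaczmarz method that the extension above builds on can be sketched in a few lines: each iteration samples one row of the system (here with Strohmer-Vershynin row-norm sampling, an assumed choice) and projects the iterate onto the corresponding hyperplane. The instance below is synthetic and purely illustrative.

```python
import numpy as np

def randomized_kaczmarz(A, b, n_iter=3000, seed=0):
    """Randomized Kaczmarz for a consistent linear system Ax = b.

    Rows are sampled with probability proportional to their squared
    norm; each step touches a single row of A, which is what makes
    the method attractive for very large systems.
    """
    rng = np.random.default_rng(seed)
    probs = np.linalg.norm(A, axis=1) ** 2
    probs /= probs.sum()
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        i = rng.choice(len(b), p=probs)
        a = A[i]
        # Orthogonal projection onto the hyperplane {x : a^T x = b_i}
        x += (b[i] - a @ x) / (a @ a) * a
    return x

# Illustrative consistent overdetermined system
rng = np.random.default_rng(2)
A = rng.standard_normal((200, 20))
x_true = rng.standard_normal(20)
b = A @ x_true
x_hat = randomized_kaczmarz(A, b)
```

The project's extensions modify this scheme to promote sparsity of the solution and to remain robust when b is corrupted by impulsive noise.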
To build a bridge between theory and practice, we worked on the development and empirical evaluation of an accelerated primal method for SVMs, and on data-driven techniques for hyperparameter tuning of regularization methods for inverse problems.
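To make the primal SVM formulation concrete, here is a minimal sketch of a classical stochastic subgradient method (Pegasos-style) on the regularized hinge-loss objective. This is a standard baseline shown only to illustrate the primal problem being solved; it is not the project's accelerated method, and the data and parameters are assumptions.

```python
import numpy as np

def pegasos_svm(X, y, lam=0.01, n_iter=5000, seed=0):
    """Stochastic subgradient descent on the primal SVM objective
    lam/2 * ||w||^2 + (1/n) * sum_i max(0, 1 - y_i * w @ x_i)."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for t in range(1, n_iter + 1):
        i = rng.integers(len(y))
        eta = 1.0 / (lam * t)            # decreasing step size
        margin = y[i] * (w @ X[i])
        w *= 1 - eta * lam               # gradient of the quadratic term
        if margin < 1:                   # subgradient of the hinge loss
            w += eta * y[i] * X[i]
    return w

# Two well-separated synthetic classes (illustrative data)
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(2, 1, (50, 2)), rng.normal(-2, 1, (50, 2))])
y = np.array([1] * 50 + [-1] * 50)
w = pegasos_svm(X, y)
```

An accelerated primal method targets the same objective but with a faster convergence rate than this simple subgradient scheme.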
TraDE-OPT ESRs derived more efficient methods for joint image recovery and segmentation and for image denoising. Our work on higher-order algorithms is the first complete treatment of general composite problems in the nonconvex and nonsmooth setting. The solutions for nonconvex QCQP problems will be used to construct efficient solution methods based on successive outer approximations. The new interpolation formulas, together with new techniques to formally compute the derivative of an algorithm's performance with respect to some of its parameters, will allow the numerical optimization of the parameters of first-order algorithms. The unfolding of two MM algorithms enabled GPU-friendly tools for accelerated implementation, as well as a supervised learning strategy for automatically tuning the regularization parameter, and will allow the restoration of large datasets of mass spectrometry data. We contributed to a better theoretical understanding of coordinate descent methods and their accelerated and asynchronous variants, which will allow more efficient implementations. The generalized framework for proximal point algorithms provided new, simplified proofs of convergence and highlights the links between existing schemes. This framework makes it possible to devise new flexible schemes and provides new ways to generalize existing splitting schemes to sums of many terms, which will be exploited in the future. The exploitation of abstract convexity allows for a duality theory in this framework; the next step is to bring that theory into the existing splitting algorithms.
Understanding the self-regularization properties of primal-dual methods allows us to derive more efficient regularization techniques, where computations are tailored to the information in the data rather than to their raw amount. The theoretical advances obtained for the Kaczmarz method allow highly efficient parallelization, and its extension to sparse least squares and impulsive noise widens its applicability. We expect that our preliminary results on a statistical learning analysis of data-driven parameter selection in inverse problems will lead to new regularization methods in the context of image processing.
Project logo
Results of the session on communication skills (School 2)
First in-presence workshop at UCLouvain
Online school organized by CentraleSupelec