Periodic Reporting for period 1 - TraDE-OPT (Training Data-driven Experts in OPTimization)
Reporting period: 2020-06-01 to 2022-05-31
The key aspect of the TraDE-OPT research is the exploitation of structure, whether in the data, in the model, or in the computational platform, to derive new and more efficient algorithms with guarantees on their computational performance. These algorithms are based on decomposition and incremental/stochastic strategies, and allow parallel and distributed implementations.
Advances in these directions will bring substantial scalability benefits to the considered class of optimization methods, enabling the solution of real-world problems. To achieve this goal, we offer an innovative training programme that combines a solid technical background with employability skills: entrepreneurship, communication, and career planning. Integrated training of the fellows takes place at the host institutions and through secondments, workshops, and schools. As a result, TraDE-OPT fellows will be prepared for outstanding careers in academia or industry.
We worked on designing and proving convergence of new fast first-order algorithms that exploit problem structure, for large-scale convex and nonconvex problems. In this direction, we are exploiting the multiscale structure of images to build a multiscale approach to image denoising. We devised a new variational approach for joint image recovery and segmentation, and designed an alternating proximal-based optimization algorithm that efficiently exploits the structure of the proposed objective. In the nonconvex setting, we worked on a new CNC approach for image denoising and devised higher-order algorithms to minimize a nonsmooth aggregation of a collection of functions. We also proposed solutions for nonconvex QCQP problems. To evaluate the performance of the proposed algorithms, we provided explicit interpolation formulas for smooth and strongly convex functions.
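To illustrate the kind of structure-exploiting first-order method discussed above, the following sketch implements the classical proximal-gradient (ISTA) iteration for l1-regularized least squares; the separable l1 term gives a closed-form proximal step. This is a textbook scheme, not the project's own algorithms, and all sizes and parameters are illustrative.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1; separable, so it acts componentwise."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, n_iter=2000):
    """Proximal-gradient (ISTA) for min_x 0.5 * ||Ax - b||^2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)           # gradient of the smooth data-fit term
        x = soft_threshold(x - grad / L, lam / L)
    return x
```

The split between a gradient step on the smooth term and a cheap proximal step on the nonsmooth term is the basic pattern that the accelerated and structure-adapted variants refine.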
In the context of block-coordinate, parallel, asynchronous, and new structure-adapted splitting methods, we designed new unfolded MM approaches and implemented them on GPUs. We worked on new splitting strategies and on coordinate descent methods and their accelerated and asynchronous variants, including the nonseparable case. We studied primal-dual schemes for composite optimization problems within the framework of abstract convexity. We proposed new Douglas-Rachford splitting algorithms with a prescribed communication structure, and derived a more general framework for proximal point algorithms.
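The prescribed-communication variants mentioned above build on the classical Douglas-Rachford splitting iteration. A minimal sketch, applied to a simple two-set convex feasibility problem (unit ball intersected with a hyperplane), where both proximal operators reduce to projections; the example problem is illustrative only.

```python
import numpy as np

def proj_ball(x, r=1.0):
    """Projection onto the Euclidean ball of radius r."""
    n = np.linalg.norm(x)
    return x if n <= r else x * (r / n)

def proj_hyperplane(x, a, c):
    """Projection onto the hyperplane {x : <a, x> = c}."""
    return x - (a @ x - c) / (a @ a) * a

def douglas_rachford(z, a, c, n_iter=500):
    """Classical Douglas-Rachford iteration for: find x in ball ∩ hyperplane."""
    for _ in range(n_iter):
        y = proj_ball(z)                               # first resolvent step
        z = z + proj_hyperplane(2 * y - z, a, c) - y   # reflect, project, average
    return proj_ball(z)                                # shadow sequence limit
```

Each iteration only evaluates the two operators separately, which is what makes such splittings amenable to distributed implementations with a chosen communication pattern.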
In the context of stochastic and incremental algorithms, we proved the reconstruction capabilities of primal-dual first-order methods with additional arbitrary (incremental) projections on reference imaging and machine learning problems. We also worked on extending the Kaczmarz method to sparse least-squares problems and to data corrupted by impulsive noise.
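The baseline that these extensions start from is the classical randomized Kaczmarz iteration for consistent linear systems, sketched below; each step projects the iterate onto a single row's hyperplane, which is what makes the method incremental. This is the textbook scheme, not the sparse or impulsive-noise variants developed in the project.

```python
import numpy as np

def randomized_kaczmarz(A, b, n_iter=2000, seed=0):
    """Randomized Kaczmarz for a consistent system Ax = b.
    Each step projects onto one hyperplane {x : <a_i, x> = b_i},
    with row i sampled with probability proportional to ||a_i||^2."""
    rng = np.random.default_rng(seed)
    row_norms = np.sum(A ** 2, axis=1)
    probs = row_norms / row_norms.sum()
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        i = rng.choice(A.shape[0], p=probs)
        x += (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x
```

Because each update touches a single row, the iteration has a very low per-step cost and lends itself naturally to incremental and parallel schemes.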
To build a bridge between the methods developed for our theoretical goals and practical applications, we worked on the development and empirical evaluation of an accelerated primal method for SVMs, and on data-driven techniques for tuning the hyperparameters of regularization methods for inverse problems.
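As a minimal illustration of data-driven hyperparameter tuning for a regularization method, the sketch below selects a Tikhonov regularization parameter by minimizing held-out validation error over a grid. The actual techniques studied in the project are more refined; the function names, grid, and data here are purely illustrative.

```python
import numpy as np

def tikhonov(A, b, lam):
    """Tikhonov-regularized solution: argmin ||Ax - b||^2 + lam * ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

def select_lambda(A_train, b_train, A_val, b_val, grid):
    """Data-driven choice: pick the lambda minimizing error on held-out data."""
    errors = [np.linalg.norm(A_val @ tikhonov(A_train, b_train, lam) - b_val)
              for lam in grid]
    return grid[int(np.argmin(errors))]
```

The point of such data-driven rules is that the regularization strength is chosen from the data at hand rather than fixed a priori.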
Understanding the self-regularization properties of primal-dual methods allows us to derive more efficient regularization techniques, in which computations are tailored to the information in the data rather than to their raw amount. The theoretical advances obtained for the Kaczmarz method allow for highly efficient parallelization, and its extension to sparse least squares and impulsive noise widens its applicability. We expect that our preliminary results on a statistical learning analysis of data-driven parameter selection in inverse problems will lead to new regularization methods in the context of image processing.