Robust algorithms for learning from modern data

Periodic Reporting for period 3 - SEQUOIA (Robust algorithms for learning from modern data)

Reporting period: 2020-09-01 to 2022-02-28

Machine learning is needed and used everywhere, from science to industry, with a growing impact on many disciplines. While its first successes were due at least in part to simple supervised learning algorithms, used primarily as black boxes on medium-scale problems, modern data pose new challenges. Scalability is of course an important issue: with large amounts of data, many current problems far exceed the capabilities of existing algorithms despite sophisticated computing architectures. But beyond this, the core classical model of supervised machine learning, with its usual assumptions of independent and identically distributed data and well-defined features, outputs and loss functions, has reached its theoretical and practical limits.

Given this new setting, existing optimization-based algorithms are not well suited. The main objective of this proposal is to push the frontiers of supervised machine learning, in terms of (a) scalability to data with massive numbers of observations, features, and tasks, (b) adaptability to modern computing environments, in particular for parallel and distributed processing, (c) provable adaptivity and robustness to problem and hardware specifications, and (d) robustness to the non-convexities inherent in machine learning problems.

To achieve the expected breakthroughs, we will design a novel generation of learning algorithms amenable to a tight convergence analysis under realistic assumptions and with efficient implementations. They will help machine learning algorithms transition towards the same widespread, robust use as numerical linear algebra libraries. Outcomes of the research described in this proposal will include algorithms that come with strong convergence guarantees and are well tested on real-life benchmarks from computer vision, bioinformatics, audio processing and natural language processing. For both distributed and non-distributed settings, we will release open-source software adapted to widely available computing platforms.
During the first part of the project, we have focused primarily on three topics:
- Distributed algorithms:
We have proposed a general framework for the analysis of distributed optimization algorithms that allows us both to (a) derive new algorithms together with their convergence bounds, and (b) prove complexity lower bounds, stating that no algorithm can ever achieve a better complexity. This was done for both centralized and decentralized approaches, and for problems within and beyond machine learning. A minimal sketch of a decentralized method of this kind is given after this list.
- Stochastic gradient algorithms:
In a series of papers, we provided a refined analysis of stochastic gradient techniques for positive definite kernel methods, showing in particular that (1) they can converge exponentially fast in some common scenarios, and (2) multiple passes over the data can be beneficial (the first time this has been proved mathematically). A toy multi-pass sketch follows the list below.
- Analysis of neural network training:
In a series of papers, we analyzed how gradient descent can lead to global convergence guarantees, which is particularly difficult because neural network training is a non-convex optimization problem. An illustrative experiment is sketched after this list.
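
To make the distributed setting concrete, here is a minimal sketch of decentralized gradient descent with gossip averaging, one representative of the family of methods such a framework covers. The ring topology, the local least-squares objectives, and all numerical values are illustrative assumptions, not the project's exact algorithms.

```python
import numpy as np

# Minimal sketch of decentralized gradient descent with gossip averaging.
# Illustrative setup (assumed, not the project's exact method): n nodes on a
# ring, node i holds a local objective f_i(x) = 0.5 * ||A_i x - b_i||^2.

rng = np.random.default_rng(0)
n_nodes, dim, m = 8, 5, 20

A = [rng.standard_normal((m, dim)) for _ in range(n_nodes)]
x_star = rng.standard_normal(dim)
b = [Ai @ x_star + 0.1 * rng.standard_normal(m) for Ai in A]

# Doubly stochastic gossip matrix for a ring graph: each node mixes
# equally with itself and its two neighbors.
W = np.zeros((n_nodes, n_nodes))
for i in range(n_nodes):
    W[i, i] = W[i, (i - 1) % n_nodes] = W[i, (i + 1) % n_nodes] = 1 / 3

X = np.zeros((n_nodes, dim))  # one local iterate per node
step = 1e-3

for t in range(2000):
    grads = np.stack([A[i].T @ (A[i] @ X[i] - b[i]) for i in range(n_nodes)])
    X = W @ X - step * grads  # gossip averaging + local gradient step

print("consensus error:", np.linalg.norm(X - X.mean(axis=0)))
print("distance to x* :", np.linalg.norm(X.mean(axis=0) - x_star))
```

With a small enough step size, the local iterates reach consensus and approach a neighborhood of the minimizer of the average objective.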
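For the kernel results, the following toy sketch runs averaged stochastic gradient descent for least-squares regression with a positive definite kernel, making several passes over the data. The Gaussian kernel, its bandwidth, the step size, and the 1D regression task are all illustrative choices.

```python
import numpy as np

# Minimal sketch of (averaged) SGD for least-squares regression in the
# reproducing kernel Hilbert space of a Gaussian kernel, with multiple
# passes over the data. The candidate function is f = sum_i alpha_i k(x_i, .).

rng = np.random.default_rng(0)
n = 200
x = rng.uniform(-1, 1, n)
y = np.sin(3 * x) + 0.1 * rng.standard_normal(n)

K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / 0.2 ** 2)  # Gram matrix

alpha = np.zeros(n)      # current iterate
alpha_avg = np.zeros(n)  # Polyak-Ruppert average of the iterates
step, n_passes, t = 0.5, 5, 0

for _ in range(n_passes):
    for i in rng.permutation(n):
        # Stochastic gradient of 0.5 * (f(x_i) - y_i)^2 in the RKHS is
        # (f(x_i) - y_i) * k(x_i, .), so only the coefficient alpha_i moves.
        residual = K[i] @ alpha - y[i]
        alpha[i] -= step * residual
        t += 1
        alpha_avg += (alpha - alpha_avg) / t  # running average

print("training MSE after", n_passes, "passes:",
      np.mean((K @ alpha_avg - y) ** 2))
```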
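Finally, the non-convexity of neural network training, and the global convergence behavior that such analyses explain, can be illustrated by full-batch gradient descent on an overparameterized two-layer ReLU network: despite the non-convex loss, the training error is reliably driven down when the hidden layer is wide enough. The architecture, scalings, and step size below are an assumed toy setup.

```python
import numpy as np

# Minimal sketch: full-batch gradient descent on a two-layer ReLU network
# with a fixed output layer. Overparameterized regime: width >> n samples.

rng = np.random.default_rng(0)
n, d, width = 100, 10, 1000

X = rng.standard_normal((n, d))
y = np.sign(rng.standard_normal(n))

W = rng.standard_normal((d, width)) / np.sqrt(d)     # trained input layer
a = rng.choice([-1.0, 1.0], width) / np.sqrt(width)  # fixed output layer

step = 0.2
for it in range(1000):
    H = np.maximum(X @ W, 0.0)  # hidden activations (ReLU)
    residual = H @ a - y        # prediction error
    # Gradient of the loss 0.5 * mean((pred - y)^2) with respect to W.
    grad_W = X.T @ (np.outer(residual, a) * (H > 0)) / n
    W -= step * grad_W

print("final training loss:",
      0.5 * np.mean((np.maximum(X @ W, 0.0) @ a - y) ** 2))
```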
In the three areas outlined above, we plan to continue making progress by providing algorithms with both good empirical behavior and convergence guarantees. Our new effort on structured prediction problems should lead to generic algorithms for a wide range of machine learning applications.