Periodic Reporting for period 3 - SEQUOIA (Robust algorithms for learning from modern data)
Reporting period: 2020-09-01 to 2022-02-28
Given this new setting, existing optimization-based algorithms are not well adapted. The main objective of this proposal is to push the frontiers of supervised machine learning, in terms of (a) scalability to data with massive numbers of observations, features, and tasks, (b) adaptability to modern computing environments, in particular for parallel and distributed processing, (c) provable adaptivity and robustness to problem and hardware specifications, and (d) robustness to the non-convexities inherent in machine learning problems.
To achieve the expected breakthroughs, we will design a novel generation of learning algorithms amenable to a tight convergence analysis under realistic assumptions and to efficient implementations. They will help machine learning algorithms transition towards the same widespread, robust use as numerical linear algebra libraries. Outcomes of the research described in this proposal will include algorithms that come with strong convergence guarantees and are well tested on real-life benchmarks from computer vision, bioinformatics, audio processing and natural language processing. For both distributed and non-distributed settings, we will release open-source software adapted to widely available computing platforms.
- Distributed algorithms:
We have proposed a general framework for the analysis of distributed optimization algorithms that allows us both to (a) derive new algorithms together with their convergence bounds, and (b) prove complexity lower bounds, stating that no algorithm can ever achieve a better complexity. This was done for both centralized and decentralized approaches, and for machine learning problems as well as problems beyond machine learning.
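To make the decentralized setting concrete, the sketch below shows one standard template, gossip-based decentralized gradient descent on a synthetic least-squares problem split across workers. It is an illustration only, not the framework or the new algorithms developed in the project; the ring topology, gossip matrix and step size are arbitrary assumptions.

```python
# Illustrative sketch only: decentralized gradient descent with gossip
# averaging on a ring of workers, for a least-squares objective split
# across local datasets. Topology, gossip matrix and step size are
# arbitrary choices, not those of the project.
import numpy as np

rng = np.random.default_rng(0)
n_workers, n_local, dim = 8, 50, 10

# Each worker i holds a local dataset (A_i, b_i); the global objective is
# the average of the local losses ||A_i x - b_i||^2 / n_local.
A = [rng.normal(size=(n_local, dim)) for _ in range(n_workers)]
x_true = rng.normal(size=dim)
b = [Ai @ x_true + 0.01 * rng.normal(size=n_local) for Ai in A]

# Doubly stochastic gossip matrix for a ring: each worker averages its
# iterate with its two neighbours.
W = np.zeros((n_workers, n_workers))
for i in range(n_workers):
    W[i, i] = 0.5
    W[i, (i - 1) % n_workers] = 0.25
    W[i, (i + 1) % n_workers] = 0.25

x = np.zeros((n_workers, dim))  # one local iterate per worker
step = 0.05
for _ in range(300):
    grads = np.stack([2 * A[i].T @ (A[i] @ x[i] - b[i]) / n_local
                      for i in range(n_workers)])
    x = W @ (x - step * grads)  # local gradient step, then gossip averaging

print("consensus error:", np.linalg.norm(x - x.mean(axis=0)))
print("distance to solution:", np.linalg.norm(x.mean(axis=0) - x_true))
```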
- Stochastic gradient algorithms:
In a series of papers, we provided a refined analysis of stochastic gradient techniques for positive-definite kernel methods, showing in particular that (1) they can converge exponentially fast in some common scenarios, and (2) multiple passes over the data can be beneficial (the first time this has been proved mathematically).
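As an illustration of the type of method analyzed, here is a minimal sketch of multi-pass stochastic gradient descent for least-squares regression with a positive-definite (Gaussian) kernel, where the predictor is maintained through coefficients on the training points. The kernel, bandwidth, step size and number of passes are illustrative assumptions, and the sketch does not reproduce the theoretical results of the papers.

```python
# Illustrative sketch only: multi-pass stochastic gradient descent for
# kernel least-squares regression. The predictor is kept in the form
# f = sum_j alpha_j K(x_j, .), and each stochastic step updates the
# coefficient of the sampled training point.
import numpy as np

rng = np.random.default_rng(0)
n = 200
X = rng.uniform(-1, 1, size=(n, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=n)

def gaussian_kernel(A, B, bandwidth=0.3):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bandwidth ** 2))

K = gaussian_kernel(X, X)        # precomputed Gram matrix
alpha = np.zeros(n)
step, n_passes = 0.5, 10         # several passes over the data

for _ in range(n_passes):
    for i in rng.permutation(n):
        residual = K[i] @ alpha - y[i]   # f(x_i) - y_i for the sampled point
        alpha[i] -= step * residual      # gradient step along K(x_i, .)

print("training mean squared error:", np.mean((K @ alpha - y) ** 2))
```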
- Analysis of neural network training:
In a series of papers, we analyzed how gradient descent can achieve global convergence guarantees for neural network training, which is particularly difficult because the underlying optimization problem is non-convex.
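For illustration, the sketch below runs plain gradient descent on a wide two-layer ReLU network for a small synthetic regression task; the loss is non-convex in the hidden-layer weights, yet the training error decreases steadily when the network is sufficiently wide. The architecture, width, step size and target function are illustrative assumptions and do not correspond to the precise setting of the papers.

```python
# Illustrative sketch only: gradient descent on a wide two-layer ReLU
# network (output weights kept fixed for simplicity). The training loss
# is non-convex in the hidden-layer weights W, but decreases steadily
# in this overparameterized regime.
import numpy as np

rng = np.random.default_rng(0)
n, d, width = 100, 5, 1000
X = rng.normal(size=(n, d))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] * X[:, 2]   # arbitrary nonlinear target

W = rng.normal(size=(width, d)) / np.sqrt(d)               # hidden weights
a = rng.choice([-1.0, 1.0], size=width) / np.sqrt(width)   # fixed output weights
step = 0.5

for t in range(1500):
    H = np.maximum(X @ W.T, 0.0)     # ReLU activations, shape (n, width)
    err = H @ a - y                  # residuals of the current predictor
    mask = (H > 0).astype(float)     # ReLU derivative
    grad_W = ((err[:, None] * mask) * a[None, :]).T @ X / n
    W -= step * grad_W
    if t % 500 == 0:
        print("iteration", t, "training MSE:", np.mean(err ** 2))

print("final training MSE:", np.mean((np.maximum(X @ W.T, 0.0) @ a - y) ** 2))
```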