Objective
Rich internal representations of complex data are crucial to the predictive power of neural networks. Unfortunately, current statistical analyses are restricted to over-simplified networks, whose representations (i.e. weight matrices) are random and/or project the data into comparatively very high- or very low-dimensional spaces; in many applications the situation is very different. The modelling of realistic data is another open issue. There is an urgent need to reconcile theory and practice.
Building on a synergy between the mathematical physics of spin glasses, matrix models from physics, and information theory and random matrix theory, CHORAL’s statistical framework will delimit computational gaps in learning much more realistic models of neural networks from structured data. These gaps will quantify the discrepancy between:
(i) the statistical cost of learning good representations, i.e. the minimal amount of training data required to reach satisfactory predictive performance;
(ii) the cost of efficiency, i.e. the amount of data needed when learning with tractable algorithms, such as approximate message passing and noisy gradient descent (see the sketch after this list).
Comparing these costs will quantify when learning is computationally hard and when it is not.
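As a concrete illustration of the second cost above, here is a minimal, self-contained sketch of one of the tractable algorithms the abstract names, noisy gradient descent (in its Langevin form), trying to recover a planted signal in a toy spiked-matrix model. This is an illustrative baseline, not CHORAL’s method; the problem size, signal-to-noise ratio lam, step size lr and temperature temp are all assumptions chosen for the demo.

```python
# Noisy (Langevin) gradient descent on a toy spiked-matrix model:
# observe Y = (lam/n) x* x*^T + Wigner noise, try to recover x*.
# All parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, lam, steps, lr, temp = 200, 3.0, 2000, 0.05, 0.01

x_star = rng.choice([-1.0, 1.0], size=n)          # planted +/-1 signal
G = rng.normal(size=(n, n))
W = (G + G.T) / np.sqrt(2 * n)                    # symmetric Gaussian noise
Y = (lam / n) * np.outer(x_star, x_star) + W      # observed spiked matrix

c = lam / n
x = rng.normal(size=n)                            # random initialisation
for _ in range(steps):
    # gradient of the square loss ||Y - c x x^T||_F^2, plus Langevin noise
    grad = -4 * c * (Y - c * np.outer(x, x)) @ x
    x = x - lr * grad + np.sqrt(2 * lr * temp) * rng.normal(size=n)

overlap = abs(x @ x_star) / (np.linalg.norm(x) * np.linalg.norm(x_star))
print(f"overlap with planted signal: {overlap:.2f}")
```

The overlap printed at the end is the natural performance measure in such planted models: comparing how much data (here, how strong a signal) this kind of algorithm needs against the information-theoretic minimum is exactly the kind of gap described in (i) versus (ii).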
To achieve this, CHORAL will first focus on dictionary learning, an essential task of representation learning, and then move on to multi-layer neural networks, which can be thought of as concatenated dictionary learning problems.
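For concreteness, here is a minimal numpy sketch of the dictionary learning setup referred to above: data Y is generated as a sparse combination X of the columns (atoms) of an unknown dictionary D, and the task is to recover D and X from Y alone. The alternating least-squares/hard-thresholding loop is a generic baseline, not the project’s algorithm; all dimensions and the sparsity level k are illustrative assumptions.

```python
# Dictionary learning sketch: observe Y = D* X* + noise with k-sparse X*,
# recover a dictionary D by alternating minimisation (generic baseline).
import numpy as np

rng = np.random.default_rng(1)
d, p, n, k = 30, 50, 2000, 3                      # data dim, atoms, samples, sparsity

D_true = rng.normal(size=(d, p)) / np.sqrt(d)     # unknown dictionary
X_true = np.zeros((p, n))
for j in range(n):                                # k-sparse coefficient columns
    idx = rng.choice(p, size=k, replace=False)
    X_true[idx, j] = rng.normal(size=k)
Y = D_true @ X_true + 0.01 * rng.normal(size=(d, n))

D = rng.normal(size=(d, p)) / np.sqrt(d)          # random initialisation
for _ in range(50):
    # sparse coding step: least squares, then keep the k largest entries per column
    X = np.linalg.lstsq(D, Y, rcond=None)[0]
    thresh = np.sort(np.abs(X), axis=0)[-k]
    X[np.abs(X) < thresh] = 0.0
    # dictionary update step: least squares in D, then renormalise the atoms
    D = Y @ np.linalg.pinv(X)
    D /= np.maximum(np.linalg.norm(D, axis=0, keepdims=True), 1e-12)

print("relative residual:", np.linalg.norm(Y - D @ X) / np.linalg.norm(Y))
```

In this picture, a multi-layer network corresponds to stacking such factorisations, e.g. Y ≈ D2·σ(D1·X), which is the sense in which the abstract views deep networks as concatenated dictionary learning problems.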
CHORAL’s ambitious program, by defining benchmarks for algorithms used in virtually all fields of science and technology, will have a direct practical impact. Equally important will be its conceptual impact: the study of information processing systems has become a major source of inspiration for mathematics.
Programme(s)
- HORIZON.1.1 - European Research Council (ERC) Main Programme
Funding Scheme
ERC - Support for frontier research (ERC)
Host institution
75007 Paris
France