CORDIS - EU research results

Neural Network: An Overparametrization Perspective

Project description

More is better, but why? Understanding successful neural network training models

Neural networks can ‘learn’ from input data and scenarios, improving their predictive ability on similar and novel problems with successive iterations. So-called overparameterised models are among the most popular for training neural networks. These have more parameters than training data points, i.e. more parameters than are needed to fit all the data points perfectly. Despite their empirical success, the theoretical understanding of how these models are optimised, and of how they generalise and yield universal approximation, remains limited. With the support of the Marie Skłodowska-Curie Actions programme, the NN-OVEROPT project will deepen this understanding in order to provide better optimisation algorithms for training.
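The defining property described above can be made concrete with a minimal NumPy sketch (an illustration, not code from the project): a linear model with more parameters than samples is underdetermined, so it can fit every training point exactly.

```python
import numpy as np

# Illustrative sketch: an overparametrized linear model with d = 10
# parameters but only n = 4 training samples. The linear system X w = y
# is underdetermined, so infinitely many weight vectors interpolate the
# data; np.linalg.lstsq returns the minimum-norm one.
rng = np.random.default_rng(0)
n, d = 4, 10                      # n samples, d parameters, d > n
X = rng.standard_normal((n, d))   # random feature matrix
y = rng.standard_normal(n)        # arbitrary targets

# Minimum-norm solution of the underdetermined least-squares problem
w, *_ = np.linalg.lstsq(X, y, rcond=None)

residual = np.linalg.norm(X @ w - y)
print(residual < 1e-8)  # True: every training point is fit exactly
```

Which of the infinitely many interpolating solutions a training algorithm selects is precisely the kind of question the project's optimisation-landscape analysis addresses.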


In recent times, overparametrized models, in which the number of model parameters far exceeds the number of available training samples, have become the methods of choice for learning problems, and neural networks are among the most popular overparametrized methods used heavily in practice. It has recently been discovered that overparametrization surprisingly improves the optimization landscape of a complex non-convex problem, namely the training of neural networks, and also has positive effects on generalization performance. Despite the improved empirical performance of overparametrized models like neural networks, the theoretical understanding of these models is quite limited, which hinders progress of the field in the right direction. Any progress in understanding the optimization as well as the generalization aspects of these complex models, especially neural networks, will lead to major technical advances in machine learning and artificial intelligence.

During the Marie Skłodowska-Curie Actions Individual Fellowship-Global Fellowship (MSCA-IF-GF), I plan to study the optimization problem arising while training overparametrized neural networks, as well as generalization in overparametrized neural networks. The end goal of this project is to provide a better theoretical understanding of the optimization landscape encountered when training overparametrized models, and thereby to provide better optimization algorithms for training, as well as to study the universal approximation guarantees of overparametrized models. We also aim to study the implicit bias induced by optimization algorithms when training overparametrized complex models. To achieve the objectives discussed above, I will use tools from traditional optimization theory, statistical learning theory, gradient flows, and statistical physics.
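The implicit bias mentioned above can be illustrated with a classic textbook example (a sketch assuming plain gradient descent on underdetermined linear least squares, not a method from the project): gradient descent initialized at zero never leaves the row space of the data matrix, so among all interpolating solutions it converges to the minimum-norm one.

```python
import numpy as np

# Implicit-bias sketch: gradient descent from zero init on an
# underdetermined least-squares problem converges to the minimum-norm
# interpolator, even though infinitely many solutions fit the data.
rng = np.random.default_rng(1)
n, d = 4, 10
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

w = np.zeros(d)          # zero initialization is essential for this bias
lr = 0.01
for _ in range(50_000):
    w -= lr * X.T @ (X @ w - y)   # gradient of 0.5 * ||X w - y||^2

# Reference: the minimum-norm interpolating solution
w_min_norm, *_ = np.linalg.lstsq(X, y, rcond=None)

# Gradient descent selected the same solution, without being told to
print(np.allclose(w, w_min_norm, atol=1e-6))
```

The optimizer itself acts as a regularizer here; characterizing such biases for genuinely non-convex models like neural networks is one of the open questions the project targets.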


Net EU contribution
€ 257 619,84
78153 Le Chesnay Cedex


Île-de-France, Yvelines
Activity type
Research Organisations
Total cost
€ 257 619,84

Partners (1)