CORDIS - EU research results

From A to B: Generalizing the mathematics of artificial neural networks (ANNs) to biological neural networks (BNNs)

Project description

Advanced mathematical tools for analysing learning in the brain

Artificial neural networks (ANNs), the workhorse of modern AI, are widely used, but the biological neural networks (BNNs) found in the brain remain more effective learners. While ANNs represent functions, BNNs represent stochastic processes, and the brain demonstrates faster learning and superior generalisation capabilities. Further theoretical work is required, however, to fully understand the learning mechanisms in the brain. In this context, the ERC-funded A2B project will develop advanced mathematical tools for analysing learning processes in BNNs. The project's objectives include gaining insights into how the brain learns, improving AI efficiency with reduced training data, and training neuromorphic computer chips to emulate BNNs. The project begins by reinterpreting the local updating of BNN parameters as a specific and non-standard derivative-free optimisation method.

Objective

Why does the brain outperform AI? Artificial neural networks (ANNs) are at the core of the AI revolution. In recent years, enormous effort has gone into unravelling their mathematical properties, leading to fundamental insights and mathematical guarantees on when and why deep learning works well. ANNs are inspired by biological neural networks (BNNs) but differ in many respects: ANNs represent functions while BNNs represent stochastic processes, and the gradient-based deep learning applied to ANNs is very different from the local updating of BNNs.
BNNs are superior to ANNs in the sense that the brain learns faster and generalizes better. Despite the urgency for answers and the rich and interesting mathematical structures that BNNs create, scarcely any theoretical attempts have been made to understand learning in the brain. The stochastic process structure of BNNs and the need to understand the statistical convergence behavior call for a mathematical statistics approach. This project proposes the development of advanced mathematical tools in nonparametric and high-dimensional statistics to analyze learning in BNNs as a statistical method. The starting point is a novel interpretation of the local updating of BNN parameters as a specific and non-standard, derivative-free optimization method. Whereas derivative-free optimization is thought to be slow, our conjecture is that it leads to favorable statistical properties in the setting underlying BNNs.
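The contrast between gradient-based and derivative-free optimization can be illustrated with a minimal sketch. The zeroth-order scheme below is a generic textbook example of derivative-free optimization (a two-point random-direction estimator), not the project's actual method; the function and parameter names are hypothetical, and the toy quadratic loss stands in for a real learning objective.

```python
import numpy as np

def zeroth_order_step(f, theta, sigma=0.1, lr=0.05, rng=None):
    """One derivative-free update: estimate a descent direction purely from
    function evaluations at randomly perturbed parameters (no gradients)."""
    rng = rng or np.random.default_rng(0)
    u = rng.standard_normal(theta.shape)
    # Two-point estimate of the directional derivative of f along u
    slope = (f(theta + sigma * u) - f(theta - sigma * u)) / (2 * sigma)
    return theta - lr * slope * u

# Toy loss: squared distance to a fixed target vector
target = np.array([1.0, -2.0, 0.5])
loss = lambda th: float(np.sum((th - target) ** 2))

theta = np.zeros(3)
rng = np.random.default_rng(42)
for _ in range(2000):
    theta = zeroth_order_step(loss, theta, rng=rng)
```

Each step only queries the loss at perturbed parameter values, loosely analogous to a local update rule that needs no access to a global gradient; the price is that a single step carries much less information than a gradient step, which is why such methods are usually considered slow.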
If the research is successful, it has the potential to open a new research area in mathematical statistics and provide insights into how the brain learns. It could also lead to recommendations on how to make AI more efficient with less training data and how to train neuromorphic computer chips mimicking BNNs.

Host institution

UNIVERSITEIT TWENTE
Net EU contribution
€ 2 000 000,00
Address
DRIENERLOLAAN 5
7522 NB Enschede
Netherlands
Region
Oost-Nederland Overijssel Twente
Activity type
Higher or Secondary Education Establishments
Total cost
€ 2 000 000,00

Beneficiaries (1)