
Principles of Learning in a Recurrent Neural Network

Periodic Reporting for period 3 - LeaRNN (Principles of Learning in a Recurrent Neural Network)

Reporting period: 2022-09-01 to 2024-02-29

Forming memories, generating predictions based on memories, and updating memories when predictions no longer match actual experience are fundamental brain functions. Dopaminergic neurons provide a so-called “teaching signal” that drives the formation and updating of associative memories across the animal kingdom. Many theoretical models propose how neural circuits could compute these teaching signals, but the actual implementation of this computation in real nervous systems is unknown. This project will discover the basic principles by which neural circuits compute the teaching signals that drive memory formation and updating, using a tractable insect model system, the Drosophila larva. We will generate the following essential datasets for a distributed, multilayered, recurrent learning circuit: the mushroom body (MB)-related circuitry in the larval brain. First (Aim 1), we will provide a structural and functional connectivity map of the learning circuit, including all feedforward and feedback pathways upstream of all dopaminergic neurons. Second (Aim 2), we will discover the features encoded by the neurons in the circuit (e.g. predictions, actual reinforcement, and prediction errors) by recording their activity before, during and after memory formation. Third (Aim 3), we will develop a model of the circuit constrained by these datasets and test its predictions about the necessity and sufficiency of uniquely identified circuit elements for implementing learning algorithms by selectively manipulating their activity. Understanding the basic functional principles of an entire multilayered recurrent learning circuit in an animal has the potential to revolutionize not only neuroscience and medicine but also machine learning and robotics.
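The computation at the heart of these theoretical models can be stated compactly. As an illustration only, the sketch below implements the textbook Rescorla-Wagner-style prediction error, in which the teaching signal is the mismatch between actual reinforcement and the currently predicted value of a stimulus; all function names, values and the learning rate are assumptions, not the project's model.

```python
# Minimal sketch of the textbook prediction-error idea (not the project's
# model): the teaching signal is the mismatch between actual reinforcement
# and the value currently predicted for a stimulus.

def teaching_signal(predicted_value: float, reinforcement: float) -> float:
    """Positive when reinforcement exceeds the prediction, negative when
    a predicted reward fails to arrive (driving memory extinction)."""
    return reinforcement - predicted_value

def update_weight(w: float, error: float, lr: float = 0.2) -> float:
    """Associative memory update driven by the teaching signal."""
    return w + lr * error

w = 0.0                                   # odour-reward association strength
for trial in range(15):                   # paired odour + reward trials
    w = update_weight(w, teaching_signal(w, reinforcement=1.0))
print(f"after training: {w:.2f}")         # approaches 1.0

# Presenting the odour without reward now yields a negative teaching signal:
print(f"omission error: {teaching_signal(w, reinforcement=0.0):.2f}")
```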
Aim 1 progress: We have generated and published a complete synaptic-resolution structural connectivity map of the entire learning circuit: Eschbach et al., Nature Neuroscience 2020, "Recurrent architecture for adaptive regulation of learning in the insect brain."
We have also set up two patch-clamp recording rigs and generated all the necessary transgenic fly stocks for activating individual MBONs (using LexA to drive CsChrimson expression) while recording from postsynaptic neurons (using GAL4 to drive GFP expression), or for recording from DANs while activating their presynaptic neurons. We are currently testing the functional connections in the learning circuit and characterising in detail the learning-induced changes in its functional connectivity.
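To illustrate the logic of such a functional-connectivity test, here is a hypothetical analysis sketch: activate the presynaptic neuron optogenetically, then ask whether the postsynaptic membrane potential deflects from baseline across trials. The window sizes, significance threshold and all variable names are assumptions for illustration, not the project's protocol.

```python
import numpy as np
from scipy import stats

def evoked_amplitude(trace: np.ndarray, stim_idx: int, win: int = 50) -> float:
    """Mean deflection in a window after stimulus onset, baseline-subtracted."""
    baseline = trace[stim_idx - win:stim_idx].mean()
    response = trace[stim_idx:stim_idx + win].mean()
    return response - baseline

def is_connected(trials: np.ndarray, stim_idx: int, alpha: float = 0.01) -> bool:
    """Call a functional connection if evoked amplitudes differ from zero."""
    amps = np.array([evoked_amplitude(t, stim_idx) for t in trials])
    t_stat, p = stats.ttest_1samp(amps, popmean=0.0)
    return p < alpha

# Synthetic example: 20 trials, 1 kHz sampling, optogenetic pulse at sample 500.
rng = np.random.default_rng(0)
trials = rng.normal(0.0, 0.2, size=(20, 1000))
trials[:, 500:550] += 1.0                 # an evoked postsynaptic potential
print(is_connected(trials, stim_idx=500))  # True
```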
Aim 2 progress: We have identified the features encoded by some of the key feedback neurons in the learning circuit. We have discovered that they integrate input from neurons that encode positive and negative learnt values as well as positive and negative innate values. They compare odour drive to positive and negative value neurons and bidirectionally encode the integrated predicted values of stimuli. These neurons promote actions based on the predictions they encode, and they also feed back to DANs to regulate future learning. We have published these findings in Eschbach et al., eLife 2021, "Circuits for integrating learned and innate valences in the insect brain."
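The integration these feedback neurons perform can be caricatured in a few lines. The sketch below is an illustrative toy, not the published circuit model; the inputs are made-up numbers.

```python
# Toy caricature (not the published model) of a feedback neuron that
# bidirectionally encodes an integrated predicted value by comparing drive
# from positive- and negative-valence pathways, both learnt and innate.

def integrated_value(learnt_pos, learnt_neg, innate_pos, innate_neg):
    """Positive output = net appetitive prediction; negative = net aversive."""
    return (learnt_pos - learnt_neg) + (innate_pos - innate_neg)

# After appetitive training, the odour drives the positive learnt pathway:
print(integrated_value(learnt_pos=0.8, learnt_neg=0.1,
                       innate_pos=0.2, innate_neg=0.3))   #  0.6 -> approach
# After aversive training the sign flips:
print(integrated_value(learnt_pos=0.1, learnt_neg=0.9,
                       innate_pos=0.2, innate_neg=0.3))   # -0.9 -> avoid
```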
We have also set up a multi-view light-sheet microscope for whole-brain imaging of neural activity before, during and after learning, to systematically discover the features encoded by all the neurons in the learning circuit.
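One standard way to discover the features encoded by imaged neurons is to regress each neuron's activity on task variables and inspect the coefficients. The sketch below shows that generic approach on synthetic data; the array shapes, regressors and analysis are assumptions for illustration, not the project's pipeline.

```python
import numpy as np
from numpy.linalg import lstsq

# Hypothetical encoding analysis: regress each neuron's activity on task
# regressors (odour, reinforcement, their interaction) to ask which
# features it encodes. All shapes and regressors are illustrative.

T, N = 600, 100                            # timepoints, neurons
rng = np.random.default_rng(1)
odour = (rng.random(T) < 0.2).astype(float)
reward = (rng.random(T) < 0.1).astype(float)
X = np.column_stack([np.ones(T), odour, reward, odour * reward])

activity = rng.normal(size=(T, N))
activity[:, 0] += 2.0 * odour              # neuron 0 encodes odour
activity[:, 1] += 2.0 * reward             # neuron 1 encodes reinforcement

betas, *_ = lstsq(X, activity, rcond=None)  # (4, N) coefficient matrix
print(np.round(betas[:, :2], 1))           # large odour/reward weights recovered
```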
Aim 3 progress: In collaboration with Prof. Ashok Litwin-Kumar, we have developed a model of the learning circuit constrained by the structural synaptic connectivity map published in Eschbach et al., Nature Neuroscience 2020, "Recurrent architecture for adaptive regulation of learning in the insect brain." We are currently gathering functional data to constrain the model further with functional connectivity measurements. We have also developed a novel automated high-throughput learning rig for exploring new learning tasks, so that we can rapidly test the model's predictions about the roles of specific feedback motifs in learning.
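The general modelling approach can be illustrated in miniature: constrain a rate network's weight matrix with a binary mask derived from the connectivity map, then silence a candidate feedback motif in silico and measure the effect. This is a sketch of the general technique, not the published model; the network size, dynamics and chosen motif are all assumptions.

```python
import numpy as np

# Connectivity-constrained rate network (illustrative): weights exist only
# where the synaptic connectivity map has an edge, and a candidate feedback
# motif is tested by silencing its weights.

rng = np.random.default_rng(2)
N = 30
connectome = rng.random((N, N)) < 0.1      # stand-in for the real synapse map
W = connectome * rng.normal(0.0, 0.5, size=(N, N))

def run(weights, steps=200, dt=0.1, drive=0.1):
    """Euler-integrate the rate dynamics dr/dt = -r + tanh(W r + drive)."""
    r = np.zeros(N)
    for _ in range(steps):
        r = r + dt * (-r + np.tanh(weights @ r + drive))
    return r

baseline = run(W)

W_ablated = W.copy()
W_ablated[:, 0] = 0.0                      # silence a hypothetical feedback neuron
effect = np.abs(run(W_ablated) - baseline).max()
print(f"max activity change after ablation: {effect:.3f}")
```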
This project will reveal the basic principles by which distributed, multilayered, recurrent neural networks compute the teaching signals that drive the formation, extinction and consolidation of memories. While the most powerful machine-learning networks are multilayered and recurrent, learning in biological systems has until now been studied in only a few cell types at a time. This project will provide a whole-circuit view of a distributed, multilayered, recurrent circuit in a biological system as it forms and updates memories. We will use the model of the learning circuit, constrained by structural and functional connectivity data, to generate hypotheses about the potential roles of specific feedback motifs in a range of distinct learning tasks. We will test the model's predictions by manipulating these motifs during learning in freely behaving animals. Thus, by combining connectomics with physiological recordings and cell-type-specific manipulations of activity, we will be able to establish causal relationships between specific circuit motifs and their functions. The basic principles that emerge from this study can also be tested in targeted ways in the larger adult Drosophila brain, in vertebrate systems, and in artificial neural networks. Our findings could therefore have implications for cognitive and systems neuroscience, medicine, machine learning and robotics.