
Flexible and robust nervous system function from reconfiguring networks

Periodic Reporting for period 4 - FLEXNEURO (Flexible and robust nervous system function from reconfiguring networks)

Reporting period: 2021-08-01 to 2023-07-31

It is now possible to peer inside a living brain and measure the activity of thousands of nerve cells during behaviour. Remarkably, this means we can 'read' the neural code in a living brain and predict the action or percept being processed. However, the brain is in constant flux: the connections between nerve cells and the activity patterns they generate are continually rearranging. Some of this is due to learning and ongoing experience, but recently it has been found that activity patterns rearrange over days and weeks even for very well-rehearsed tasks and familiar percepts.

This continual rearrangement is a scientific mystery as well as being a tremendous engineering obstacle. It is mysterious because existing theories of brain function suppose that stable memories and representations of the world should have stable neural activity patterns underlying them. These theories underpin contemporary neuroscience, and the observation that neural activity continually reshapes throws such theory into question. This project is building a replacement theory. A continually reconfiguring brain is an engineering obstacle for designing brain-machine interfaces (for example, prosthetic devices) that monitor and decode brain activity. Such devices can revolutionise medicine and consumer technology, but face the problem that living brains do not appear to retain consistent neural codes. We have analysed data from real, living brains to look for ways to decode behaviour reliably despite this ongoing reconfiguration.

These problems pose scientific and engineering challenges that are tremendously important for society. Our brains determine who we are. Understanding how the activity and structure of the nervous system give rise to behaviour is perhaps the biggest unsolved problem in science. Tackling it will not only expand human knowledge but also inspire new technology. First, our work has already uncovered surprising consequences of reconfiguring circuits for learning that can be applied to artificial learning algorithms. Second, biomedical engineering is currently moving brain-machine interfaces closer to being a widespread technology for assistive, therapeutic and diagnostic applications. We have found ways to decode brain activity reliably over long time periods, a crucial step toward making brain-machine interfaces reliable.

We have built mathematical models that capture how single neurons and neural circuits homeostatically control their signalling components and structure. We discovered that mechanisms that control neural properties face a dilemma: either they enforce rapid and precise changes and risk becoming unstable, or they tolerate imprecision in neural signalling properties.
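The dilemma between rapid, precise regulation and stability can be pictured with a toy integral-feedback controller. This is a minimal sketch under assumed parameters, not the project's actual models; the function name and all numerical values are illustrative.

```python
import numpy as np

def run_homeostat(gain, steps=200, target=1.0, noise=0.02, seed=0):
    """Discrete-time integral controller nudging a regulated neural
    property (e.g. an ion-channel density) toward a target set point.
    All names and parameter values here are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    g = 0.0  # the regulated property, starting away from target
    for _ in range(steps):
        error = target - g
        # Correct a fraction `gain` of the error, plus biological noise.
        g += gain * error + noise * rng.standard_normal()
    return g
```

A small gain converges slowly and tolerates imprecision around the set point; a gain above 2 makes each correction overshoot by more than the error itself, so the loop oscillates with growing amplitude and diverges, which is the speed-versus-stability dilemma in miniature.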

We have developed mathematical theories and computer models of brain circuits that allow continual reconfiguration to occur without destroying stored memories. So far this has yielded a surprising finding: excess, 'redundant' connections in the brain can enable faster and more accurate learning, even with imperfect learning rules. This theory explains experimental measurements of neural circuits showing that many parts of the brain have many redundant paths between the same neurons. However, the theory also predicts that if neural connections are unreliable (as they are in a living system), there is an upper limit to the benefit of redundant pathways, beyond which learning becomes impaired. These results shed new light on biological brain function and suggest ways to improve artificial neural networks, forging new connections between neuroscience and artificial intelligence.
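The intuition can be sketched in a toy model (a hedged illustration, not the project's theory): k redundant weights share one effective connection, each copy receives the same gradient signal, so the effective learning rate scales with k, while unreliable synapses inject noise that also grows with the number of pathways.

```python
import numpy as np

def train_error(k, steps=50, lr=0.02, weight_noise=0.0, seed=1):
    """Learn a scalar target through k redundant weights whose sum forms
    the effective connection. Gradient descent sends every copy the same
    update, so redundancy multiplies the effective learning rate by k.
    Per-weight noise models unreliable synapses. Purely illustrative."""
    rng = np.random.default_rng(seed)
    target = 1.0
    w = np.zeros(k)
    for _ in range(steps):
        error = target - w.sum()
        w += lr * error                              # same gradient per copy
        w += weight_noise * rng.standard_normal(k)   # unreliable synapses
    return abs(target - w.sum())
```

With no noise, more redundant paths give faster convergence and lower final error. With per-weight noise, the accumulated fluctuation grows with k, so the benefit saturates and eventually reverses, mirroring the predicted upper limit.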

We have analysed existing neural data that shows a gradual but almost complete reconfiguration in neural activity during a familiar task in a neural circuit involved in planning and representing motor actions. We were able to identify a way to make a relatively stable mapping between neural activation and behaviour despite reconfiguration. The existence of this approximately stable mapping suggests that the observations are not in contradiction with the brain keeping a faithful representation of the world, but they do force us to revise existing theories about how this occurs. One additional outcome of this work is that it offers a potential technological path to building reliable brain-machine interfaces.
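One way to picture such an approximately stable mapping is the following minimal sketch, built on strong simplifying assumptions rather than the project's actual decoding method: if ongoing reconfiguration is largely confined to directions of activity space that a fixed linear decoder ignores (its null space), population activity can change substantially while decoded behaviour stays constant.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells, n_times = 10, 100

# A fixed linear decoder over the population (illustrative, unit norm).
d = rng.standard_normal(n_cells)
d /= np.linalg.norm(d)

behaviour = rng.standard_normal(n_times)   # 1-D behavioural variable
activity = np.outer(behaviour, d)          # population code aligned with d

# Large reconfiguration confined to the decoder's null space:
# project random drift orthogonal to d before adding it.
drift = rng.standard_normal((n_times, n_cells))
drift -= np.outer(drift @ d, d)

decoded = (activity + 3.0 * drift) @ d     # decoder is never retrained
```

Here the drift dwarfs the original signal, yet the decoded output still matches the behavioural variable, showing how consistent readout can coexist with wholesale reconfiguration of individual activity patterns.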

We have also found, surprisingly, that for a neural circuit to optimally store a memory, the total amount of systematic change in the synapses that store the memory trace should not exceed the total change due to random biological noise. This is surprising because it tells us that memories last longer when the signals that reinforce them do not completely dominate ongoing, noisy fluctuations that have nothing to do with the memory. This result predicts that continual reconfiguration is inevitable in a biological circuit.

Selected references: Micou C & O'Leary T (2023) Current Opinion in Neurobiology; Rule ME & O'Leary T (2022) PNAS; Józsa M et al. (2022) PNAS; Raman DV & O'Leary T (2021) eLife; Rule ME et al. (2020) eLife.

1. We have discovered a fundamental relationship between the degree of 'redundancy' in a neural circuit and its capacity to learn. Large nervous systems, such as the human brain, contain many redundant pathways between each nerve cell. This has been known for many decades but lacks an explanation. Our work showed how and why such redundancy allows large neural circuits to learn faster and to higher accuracy. It also shows that there is an upper limit to the benefit of redundant connections in a biological nervous system. The implications of this work extend beyond biology to Artificial Intelligence, where bigger artificial networks have been observed to have better learning performance. This observation had no firm theoretical explanation and our work provides one, allowing principled design of faster and better learning algorithms.

2. Using existing experimental data, we have shown - in principle - that the continual reconfiguration of biological neural circuits can be compensated by an appropriate circuit structure and by physiologically reasonable levels of synaptic change. This partly resolves the mysterious observations this project set out to explain: continual reconfiguration inside living brains is not completely random; it preserves certain relationships without preserving precise connections. Our work identifies how consistent behaviour can emerge from ongoing change in neural activity and provides a set of algorithms that could be used to read this code reliably from a living brain.

3. We have used new tools in control engineering to understand more rigorously how neurons maintain consistent properties in spite of ongoing change in the nervous system. This work points to fundamental limitations in the capacity of biological neural circuits to cope with change while remaining stable. This has also brought together researchers in different fields (control engineering and neurophysiology) that do not ordinarily interact.