Flexible and robust nervous system function from reconfiguring networks

Periodic Reporting for period 2 - FLEXNEURO (Flexible and robust nervous system function from reconfiguring networks)

Reporting period: 2018-08-01 to 2020-01-31

The problem this project addresses:
It is now possible to peer inside a living brain and measure the activity of thousands of nerve cells during behaviour. Remarkably, this means we can 'read' the neural code in a living brain and predict the action or percept being processed. However, the brain is in constant flux: the connections between nerve cells and the activity patterns they generate are continually rearranging. Some of this is due to learning and ongoing experience, but recently it has been found that activity patterns rearrange over days and weeks even for very well-rehearsed tasks and familiar percepts.

This continual rearrangement is both a scientific mystery and a tremendous engineering obstacle. It is mysterious because existing theories of brain function suppose that stable memories and representations of the world should have stable neural activity patterns underlying them. These theories underpin contemporary neuroscience, and the observation that neural activity continually reshapes throws such theory into question. This project is building a replacement theory. A continually reconfiguring brain is an engineering obstacle for designing brain-machine interfaces (for example, prosthetic devices) that monitor and decode brain activity. Such devices can revolutionise medicine and consumer technology, but face the problem that living brains do not appear to retain consistent neural codes. We have analysed data from real, living brains to look for ways to decode behaviour reliably despite this ongoing reconfiguration.

Importance for society:
Together, these two challenges, one scientific and one engineering, are tremendously important for society. First, the scientific importance: our brains determine who we are. Understanding how the activity and structure of the nervous system gives rise to behaviour is perhaps the biggest unsolved problem in science. Tackling it will not only expand human knowledge, it also has the potential to provide inspiration for new technology. In our work we have already uncovered surprising consequences of reconfiguring circuits for learning that can be applied to artificial learning algorithms (described below). Secondly, biomedical engineering is currently moving brain-machine interfaces closer to being a widespread technology for assistive, therapeutic and diagnostic applications. The data we analyse comes from carefully controlled experiments that monitor brain activity during repeated tasks. We have found ways to decode this activity reliably over long time periods, which is a crucial step toward understanding how to make a brain-machine interface reliable.

Overall objectives:
The overall objectives of the project are:
(1) To update our current understanding of how neural activity is controlled within single nerve cells and circuits and how this gives rise to consistent behaviour, taking account of recent data that shows that there is not necessarily a consistent map between the two.
(2) To explore the consequences of this updated theory for learning and neural circuit function: does it predict unanticipated features of brain function? Does it suggest ways to improve artificial learning algorithms?
(3) To test our theories against experimental data and suggest new experiments that can test these theories.
The work completed so far in this project is as follows.

We have built mathematical models that capture how single neurons and neural circuits homeostatically control their signalling components and structure. We find that for a very broad range of assumptions, the mechanisms that control neural properties face a dilemma: either they enforce rapid and precise changes and risk becoming unstable, or they tolerate imprecision in neural signalling properties. We also find that self-regulating, homeostatic mechanisms that are tuned to cope with some kinds of changes (e.g. those that occur during growth and development) become vulnerable to other kinds of perturbations, such as loss of a specific gene.
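To make this dilemma concrete, here is a minimal simulation sketch. It is our own illustration, not the project's published model: a neuron adjusts a conductance through delayed integral feedback so that an activity readout tracks a set point, and a single gain parameter trades speed and precision against stability.

```python
import numpy as np

# Minimal sketch of the speed/stability dilemma (an illustration, not the
# project's published model). A conductance g is adjusted by delayed
# integral feedback so that an activity readout x tracks a set point.
def simulate(gain, delay=20, steps=2000, target=1.0, dt=0.01):
    x = np.zeros(steps)                        # activity readout over time
    g = 0.0                                    # regulated conductance
    for t in range(1, steps):
        err = target - x[max(t - delay, 0)]    # feedback sees a delayed error
        g += gain * err * dt                   # integral control of g
        x[t] = x[t - 1] + dt * (g - x[t - 1])  # activity relaxes toward g
    return x

slow = simulate(gain=0.05)   # low gain: stable but slow, leaving residual error
fast = simulate(gain=50.0)   # high gain with the same delay: overshoots, unstable
print(f"low-gain final error:  {abs(1.0 - slow[-1]):.3g}")
print(f"high-gain final error: {abs(1.0 - fast[-1]):.3g}")
```

Lowering the gain buys stability at the price of slow, imprecise regulation; raising it makes the delayed feedback loop overshoot and lose stability.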

We have developed mathematical theories and computer models of brain circuit structures that allow continual reconfiguration to occur without destroying stored memories and overall function. So far this has resulted in a surprising finding: excess, 'redundant' connections in the brain can enable faster and more accurate learning, even with imperfect learning rules. This theory explains experimental measurements of neural circuits, which show that many parts of the brain contain multiple redundant paths between the same neurons. However, the theory also predicts that if neural connections are unreliable (which they are in a living system) then there is an upper limit to the benefit of having redundant pathways, above which learning becomes impaired. These results shed new light on biological brain function as well as suggesting ways that artificial neural networks can be improved, making new connections between neuroscience and artificial intelligence.
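This trade-off can be caricatured in a few lines of code. The sketch below is a toy construction of ours, not the project's published theory: each input drives a linear readout through R redundant synapses that all receive the same gradient signal, so redundancy raises the effective learning rate, while independent per-synapse noise models unreliable connections.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: a linear student learns a teacher mapping. Each input connects
# to the output through R redundant synapses whose sum is the effective
# weight. All R copies receive the same gradient, so redundancy speeds
# learning; independent per-synapse noise models unreliable connections.
def error_after_training(R, n=20, steps=500, lr=5e-4, noise=0.005):
    teacher = rng.standard_normal(n)
    w = np.zeros((n, R))                          # R redundant synapses per input
    for _ in range(steps):
        x = rng.standard_normal(n)
        err = w.sum(axis=1) @ x - teacher @ x     # prediction error on one example
        w -= lr * err * x[:, None]                # same imperfect update to every copy
        w += noise * rng.standard_normal((n, R))  # unreliable synapses drift
    return np.mean((w.sum(axis=1) - teacher) ** 2)

for R in (1, 4, 16, 128):
    print(f"R = {R:3d}   residual error = {error_after_training(R):.2f}")
```

In this caricature, moderate redundancy speeds convergence, but at large R the accumulated synaptic noise outweighs the speed-up and the residual error rises again, qualitatively echoing the predicted upper limit.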

We have analysed existing neural data that shows a gradual but almost complete reconfiguration in neural activity during a familiar task in a neural circuit involved in planning and representing motor actions. We were able to identify a way to make a relatively stable mapping between neural activation and behaviour despite reconfiguration. The existence of this approximately stable mapping suggests that the observations are not in contradiction with the brain keeping a faithful representation of the world, but they do force us to revise existing theories about how this occurs. One additional outcome of this work is that it offers a potential technological path to building reliable brain-machine interfaces. This is not something we can attempt in this project, but our results will help neuroscientists and engineers who are directly involved in such work.
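One way to see why such a stable mapping matters for brain-machine interfaces is the following toy simulation. It is a hypothetical construction of ours, not the project's actual analysis or data: a population encodes a stable two-dimensional behavioural variable through an encoding matrix that drifts from day to day; a decoder trained on day 0 gradually fails, but a short daily calibration block that realigns activity to the day-0 reference frame restores decoding.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical illustration, not the project's actual method or data.
# Population activity encodes a stable 2-D behavioural variable z through
# an encoding matrix that drifts from day to day.
n_neurons, n_latent, n_trials = 100, 2, 400

def record(z, enc):
    """Simulate one session: activity = encoded behaviour + noise."""
    return z @ enc.T + 0.1 * rng.standard_normal((len(z), n_neurons))

A0 = rng.standard_normal((n_neurons, n_latent))    # day-0 encoding
z0 = rng.standard_normal((n_trials, n_latent))
x0 = record(z0, A0)
decoder, *_ = np.linalg.lstsq(x0, z0, rcond=None)  # fixed day-0 decoder
enc_hat, *_ = np.linalg.lstsq(z0, x0, rcond=None)  # day-0 encoding estimate

A = A0.copy()
for day in range(1, 6):
    A = A + 1.0 * rng.standard_normal(A.shape)     # circuit reconfigures
    z = rng.standard_normal((n_trials, n_latent))
    x = record(z, A)
    raw = np.mean((x @ decoder - z) ** 2)          # frozen decoder degrades

    # Short calibration block with known behaviour: map today's activity
    # back to the day-0 reference frame, then reuse the frozen decoder.
    z_cal = rng.standard_normal((40, n_latent))
    x_cal = record(z_cal, A)
    align, *_ = np.linalg.lstsq(x_cal, z_cal @ enc_hat, rcond=None)
    fixed = np.mean((x @ align @ decoder - z) ** 2)
    print(f"day {day}:  frozen decoder MSE = {raw:.3f},  realigned = {fixed:.3f}")
```

The frozen decoder's error grows with the cumulative drift, while the realigned decoder stays near the noise floor, because the behavioural variable itself, and with it an approximately stable mapping, persists beneath the changing neural code.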

Parts of these sub-projects are ongoing and have produced preliminary results that we are still developing. For example, we have found, surprisingly, that for a neural circuit to optimally store a representation by reconsolidation (a process that repeatedly reinforces a memory by recalling the events that led to it), the total amount of systematic change in the synapses that store the memory trace should not exceed the total change due to random biological noise. This is surprising because it tells us that memories last longer when the signals that reinforce them do not completely dominate the ongoing, noisy fluctuations that have nothing to do with the memory. In fact, this result predicts that continual reconfiguration is inevitable in a neural circuit. We are currently testing this result - which is based on mathematical calculation - in careful simulations, as well as looking for experimental data that can corroborate or falsify our predictions.
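In symbols (our own paraphrase; the precise statement depends on the details of the underlying model), if each reconsolidation event moves the stored weight vector by a systematic component \(\mu\) while biological noise contributes a random perturbation \(\xi\) between recalls, the optimal-storage condition reads roughly

\[ \lVert \mu \rVert \;\le\; \sqrt{\mathbb{E}\,\lVert \xi \rVert^{2}} \]

i.e. the deterministic part of each reinforcement should stay at or below the typical size of the noise-driven fluctuations it competes with.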
So far we have delivered several results that advance current knowledge beyond the state of the art.

1. We have discovered a fundamental relationship between the degree of 'redundancy' in a neural circuit and its capacity to learn. Large nervous systems, such as the human brain, contain many redundant pathways between each nerve cell. This has been known for many decades but lacks an explanation. Our work showed how and why such redundancy allows large neural circuits to learn faster and to higher accuracy. It also shows that there is an upper limit to the benefit of redundant connections in a biological nervous system. The implications of this work extend beyond biology to Artificial Intelligence, where bigger artificial networks have been observed to have better learning performance. This observation had no firm theoretical explanation and our work provides one, allowing principled design of faster and better learning algorithms.

2. Using existing experimental data, we have shown - in principle - that the continual reconfiguration of biological neural circuits can be compensated for by an appropriate circuit structure and by physiologically reasonable levels of synaptic change in the circuit. This partly resolves the mysterious observations that this project set out to explain: continual reconfiguration inside living brains is not completely random; it preserves some relationships without preserving precise connections. Our work identifies how consistent behaviour can emerge from ongoing change in neural activity and provides a set of algorithms that could be used to read this code reliably from a living brain.

3. We have used new tools in control engineering to understand more rigorously how neurons maintain consistent properties in spite of ongoing change in the nervous system. This work points to fundamental limitations in the capacity of biological neural circuits to cope with change while remaining stable. This has also brought together researchers in different fields (control engineering and neurophysiology) that do not ordinarily interact.

In the time remaining in this project we will build on these results by:

- applying the theory we developed in (1) to understand how memories can be preserved over extended time periods, using experimental data where possible to test our predictions;

- extending the analysis of existing neural data in (2) to other brain areas to see how general our findings are, whether other parts of the brain reconfigure in different ways, e.g. more slowly or more quickly, and whether learning new tasks affects this;

- developing the models and theory in (3) to understand how adaptive properties of single neurons permit circuits to change over time without damaging their performance or losing stability.