
Prediction and validation of in vivo dendritic processing

Periodic Reporting for period 1 - DendritesInVivo (Prediction and validation of in vivo dendritic processing)

Reporting period: 2020-03-01 to 2022-02-28

Information is processed in the brain through communication among networks of neurons. Understanding these interactions will bring great insights into how we perceive the outside world, execute motor actions and store memories, and where these functions go wrong in injured or diseased brain states. Unlike the circuits in a digital computer, neural networks are built from living cells with complex morphologies and biophysical properties. These cellular features impose fundamental limits on communication within the brain but may also hold the key to its unrivaled computational power. Neurons receive the majority of their input on their dendrites - thin processes that emanate from cell bodies in elaborately branched tree structures. Physical constraints mean that incoming signals are subject to severe attenuation as they are integrated to produce the output response of a neuron. However, nonlinear interactions within the dendritic tree may compensate for this, or even allow mathematical operations to be performed on the input that are often thought to require entire networks.
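The severity of this attenuation can be illustrated with a back-of-envelope calculation from passive cable theory, using the standard length-constant formula for a cylindrical dendrite. The parameter values below are illustrative textbook-style numbers, not measurements from any specific neuron:

```python
import numpy as np

# Steady-state voltage attenuation along a semi-infinite passive cable:
#   V(x) = V0 * exp(-x / lambda)
# where lambda = sqrt((R_m / R_i) * (d / 4)) is the length constant.
# Parameter values are illustrative, not fitted to real data.

def length_constant(d_um, R_m=20000.0, R_i=150.0):
    """Length constant (in microns) for a cylinder of diameter d_um microns.
    R_m: specific membrane resistance (ohm * cm^2)
    R_i: axial resistivity (ohm * cm)"""
    d_cm = d_um * 1e-4                          # microns -> cm
    lam_cm = np.sqrt((R_m / R_i) * d_cm / 4.0)
    return lam_cm * 1e4                         # cm -> microns

def attenuation(x_um, d_um):
    """Fraction of a steady synaptic potential surviving x_um microns away."""
    return np.exp(-x_um / length_constant(d_um))

# A steady input 200 microns from the soma on a thin (1 micron) dendrite
# loses roughly 30% of its amplitude even in this best-case passive estimate.
print(round(attenuation(200.0, 1.0), 3))  # -> 0.707
```

Transient synaptic potentials attenuate even more strongly than this steady-state bound, which is why the nonlinear mechanisms discussed above matter.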

In this project we explored the hypothesis that neural communication is tailored to make maximal use of dendritic capabilities. We developed a new theoretical approach for investigating the function of single neurons by combining biophysical modeling with machine learning. We applied this to address two main questions: 1) How should input to a neuron be structured to maximize the discriminability of different stimuli? 2) For a given input regime and computational task, how are the structure and biophysics of dendrites predicted to be exploited by the brain? Our simulations and analysis showed that single neurons are exquisitely sensitive to both the spatial and temporal structure of their inputs. When information is encoded in both of these input properties simultaneously, distinct processing strategies can be synergistically combined to maximize computational power. Focusing on a canonical 'feature-binding' problem, we derived experimentally testable predictions about how two different biophysical mechanisms can be exploited for nonlinear computation, and how their relative contributions will vary throughout a dendritic tree.
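The feature-binding idea can be illustrated with a toy model (this is a simplified caricature, not the project's biophysical model): if each dendritic branch applies a local threshold nonlinearity before its output reaches the soma, the neuron can distinguish two features arriving together on the same branch from the same features split across branches, whereas a purely linear summation cannot:

```python
import numpy as np

# Toy feature-binding illustration. Two "branches" each receive two
# synaptic inputs. A local dendritic-spike-like threshold makes the
# somatic output depend on WHERE inputs land, not just how many arrive.

def subunit(x, threshold=1.5):
    """Local branch nonlinearity: fires only if co-active input exceeds threshold."""
    return 1.0 if x.sum() >= threshold else 0.0

def dendritic_neuron(branch1, branch2):
    return subunit(branch1) + subunit(branch2)

def linear_neuron(branch1, branch2):
    return branch1.sum() + branch2.sum()

# Features A and B as unit-strength inputs.
bound   = (np.array([1.0, 1.0]), np.array([0.0, 0.0]))  # A, B on the same branch
unbound = (np.array([1.0, 0.0]), np.array([0.0, 1.0]))  # A, B on different branches

print(dendritic_neuron(*bound), dendritic_neuron(*unbound))  # 1.0 0.0
print(linear_neuron(*bound), linear_neuron(*unbound))        # 2.0 2.0
```

The nonlinear neuron responds only to the bound configuration; the linear neuron sees the same total input in both cases and cannot tell them apart.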
The objectives of the project were achieved in three phases of work.

First, I constructed a realistic computational model of a neuron to serve as the foundation of the study. For this I used a digitally reconstructed cell morphology from mouse visual cortex and fitted biophysical mechanisms to replicate experimentally observed behavior. The model neuron receives input at one thousand synapses distributed across the dendritic tree, which it integrates nonlinearly to produce an output voltage signal at the cell body.

Second, working from a foundation of cable theory, I derived a learning rule to tune the strengths of the synapses, such that the model neuron can be 'trained' to perform a given computational task. Motivated by approaches with artificial neural networks, this provides a general means of translating the known biology of a neuron, together with assumptions about input statistics, into a measure of computational capacity. To apply the learning rule in simulations, I wrote and published a software package for efficient numerical solution of the governing equations.

Finally, I performed an extensive set of simulations, using these tools to objectively uncover the preferred input regimes and mechanisms for implementing nonlinear dendritic computations. The results of this project were recently published (Bicknell, B.A. & Häusser, M. (2021) A synaptic learning rule for exploiting nonlinear dendritic computation, Neuron, 109, 1-17), and the software was made publicly available. Further dissemination of the results will be undertaken at scientific conferences and invited seminar talks over the course of 2022 (previously postponed due to the Covid-19 pandemic).
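The training idea can be sketched in simplified form. The sketch below is not the published cable-theory rule applied to a full morphology; it is a hypothetical reduced model in which dendritic branches apply sigmoidal nonlinearities and synaptic weights are tuned by stochastic gradient descent toward target somatic responses (a teacher-student setup with made-up parameters):

```python
import numpy as np

# Simplified sketch of training a nonlinear "dendritic" neuron by tuning
# synaptic weights. Branches apply sigmoid nonlinearities; the soma sums
# branch outputs. Targets come from a hidden "teacher" weight vector.

rng = np.random.default_rng(1)
n_syn, n_branch = 100, 5
branches = np.array_split(np.arange(n_syn), n_branch)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, w):
    # Each branch integrates its synapses and applies a local nonlinearity;
    # the somatic output is the sum of branch outputs.
    return sum(sigmoid(x[b] @ w[b]) for b in branches)

w_true = rng.normal(0.0, 0.5, n_syn)          # hidden teacher weights
w = np.zeros(n_syn)                            # student starts untrained
X = rng.random((200, n_syn))                   # random input patterns
Y = np.array([forward(x, w_true) for x in X])  # target somatic responses

def mse(w):
    return np.mean([(forward(x, w) - t) ** 2 for x, t in zip(X, Y)])

err0 = mse(w)
eta = 0.05
for _ in range(40):                            # stochastic gradient descent
    for x, t in zip(X, Y):
        e = forward(x, w) - t
        for b in branches:
            s = sigmoid(x[b] @ w[b])
            w[b] -= eta * e * s * (1.0 - s) * x[b]  # chain rule per branch

print(f"MSE before: {err0:.3f}, after: {mse(w):.3f}")
```

The published rule plays an analogous role for a biophysically detailed model: it converts voltage signals available at each synapse into weight updates, so that learning performance becomes a measurable proxy for the neuron's computational capacity.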
A major novelty of this project is the development of a methodology that brings together elements of biophysical modeling and machine learning. Previous work in this field has mostly used models that are hand-tuned to demonstrate or test specific mechanisms. While these studies have been invaluable for elucidating the potential of dendrites to implement computations, our approach, by outsourcing much of the 'thinking' to a computer, is more objective and easily generalized to other cell types and computational tasks. Moreover, our study reveals not only what can be computed by single neurons, but how these computations can be learned - as they must be in the brain. This is the first theoretical demonstration that a complex biological neuron can learn nonlinear functions simply by tuning its synaptic weights. By relating the biological and computational functions of neurons, I anticipate this project will establish a foundation for mechanistic understanding of neural processing in healthy and diseased brains and help propel the next wave of biologically inspired approaches to computing.
[Graphical summary]