## Final Report Summary - NONRANDOM CIRCUITS (Origin and function of nonrandom cortical connectivities)

### Overview

Our perceptions, thoughts, and actions emerge from the activity of billions of neurons in our brains. Neurons are, however, relatively simple biological devices that respond to stimulation in a stereotypical and well-understood way. Understanding how a large collection of such simple cells can give rise to the sheer complexity of animal and human behavior is one of the central challenges of neuroscience today. Of course, neurons are not isolated: they are interconnected, forming a vast complex network composed of different brain regions, subregions, and nuclei that are, in turn, organized into smaller local circuits. These local circuits consist of several thousand densely interconnected neurons, and are thought to be the computational building blocks of the brain. To understand how the brain processes information, therefore, it is essential to understand the dynamics of local circuits and thus the dynamics of networks of neurons.

A starting point for this endeavor is to make minimal assumptions regarding the dynamics of isolated neurons and the way neurons are connected, i.e. their connectivity. With simplified assumptions one can gain insight into the system with analytical tools and, more importantly, identify the relevant variables governing the system at hand. In this spirit, much progress has been made with network models in which neurons are single points characterized by one variable and connections are as generic as possible. By generic connections we mean that no particular structure is assumed or, more concretely, that connections are random. Models of this type have shed light on the irregularity of neuronal activity in the cortex, the presence of oscillatory activity in large populations of neurons, and the possible neural substrates of short-term memory, to name just a few.

### Goal

Our project took a step towards biological reality by refining the assumption of random connections. In the last decade, several electrophysiological experiments have consistently shown that, while cortical neurons are connected randomly as a first approximation, connections follow nontrivial statistical patterns. The most prominent pattern is stated as follows: if a neuron connects to another neuron, the second neuron will connect back to the first more often than one would expect by chance. In more technical terms, there is an overrepresentation of bidirectional connections. The goal of our project was to understand how this overrepresentation affects the dynamics of neuronal networks and to analyze its potential functionality.

### Results

To study the effects of overrepresented bidirectional connections, we framed our analysis in terms of random networks where the connections between any pair of neurons are not independent but correlated. Put differently: networks where, if you pick any two neurons, say i and j, and you find that the connection from i to j is strong, then it is likely that the connection from j to i is strong as well (and analogously if the connections happen to be weak). This 'likeliness' is measured by the correlation between the strengths of reciprocal connections.
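This kind of pairwise correlation can be made concrete with a small numerical sketch. The construction below is a standard recipe rather than the report's own code (the names `correlated_weights`, `g`, and `eta` are ours): mixing a symmetric and an antisymmetric Gaussian matrix with suitable coefficients yields weights whose reciprocal pairs have correlation `eta`.

```python
import numpy as np

def correlated_weights(n, g=1.0, eta=0.5, seed=0):
    """Random Gaussian weight matrix with Corr(J[i, j], J[j, i]) = eta.

    g sets the coupling scale; each weight has variance g**2 / n.
    Mixing a symmetric part S and an antisymmetric part A as
    sqrt((1+eta)/2) * S + sqrt((1-eta)/2) * A gives reciprocal
    correlation eta while keeping unit variance per entry.
    """
    rng = np.random.default_rng(seed)
    a = rng.normal(size=(n, n))
    s = (a + a.T) / np.sqrt(2)   # symmetric part, unit variance off-diagonal
    t = (a - a.T) / np.sqrt(2)   # antisymmetric part, unit variance off-diagonal
    j = np.sqrt((1 + eta) / 2) * s + np.sqrt((1 - eta) / 2) * t
    np.fill_diagonal(j, 0.0)     # no self-connections
    return g * j / np.sqrt(n)

J = correlated_weights(200, g=1.0, eta=0.6, seed=1)
iu = np.triu_indices(200, k=1)
# Empirical correlation between J[i, j] and J[j, i] over all pairs:
print(np.corrcoef(J[iu], J.T[iu])[0, 1])  # close to eta = 0.6
```

Setting `eta = 0` recovers fully independent weights, and `eta = 1` a symmetric matrix, so the same code interpolates between the classical random network and the maximally bidirectional one.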

Because we wanted to isolate the role of connectivity in shaping the dynamics of neuronal networks, we first investigated the dynamics of networks of simple rate units, in which neuronal activity is characterized by the average emission rate of action potentials (firing-rate models). These simplified neuron models represent a departure from biological reality, but they are easier to analyze than the so-called spiking neuron models, in which neuronal activity is described by the time evolution of the membrane potential and the precise timing of emitted action potentials. Despite their apparent simplicity, the dynamical repertoire of firing-rate models is rich enough to capture much of the phenomenology of spiking network models and real circuits. In our particular case, large networks of nonlinear rate units coupled through random connections exhibit two different regimes of neuronal activity: either it decays to a steady configuration or it keeps evolving in a chaotic and highly heterogeneous way. These two regimes may have important functional and computational consequences.
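The two regimes can be illustrated with a minimal simulation of a standard firing-rate model of the form dx/dt = -x + J tanh(x). The tanh nonlinearity, the forward-Euler scheme, and all parameter values below are our assumptions, chosen only to make the two regimes visible, not taken from the report.

```python
import numpy as np

def simulate(J, x0, dt=0.05, steps=4000):
    """Forward-Euler integration of the rate model dx/dt = -x + J @ tanh(x)."""
    x = x0.copy()
    for _ in range(steps):
        x = x + dt * (-x + J @ np.tanh(x))
    return x

n = 500
rng = np.random.default_rng(0)
base = rng.normal(scale=1.0 / np.sqrt(n), size=(n, n))  # spectral radius ~ 1
x0 = rng.normal(size=n)

# Weak coupling: the fixed point x = 0 is stable and activity decays away.
x_sub = simulate(0.5 * base, x0)
# Strong coupling: activity stays irregular and heterogeneous (chaotic regime).
x_sup = simulate(2.0 * base, x0)
```

The only difference between the two runs is the overall coupling strength multiplying the same random matrix, which is exactly the control parameter separating the decaying and chaotic regimes in this class of models.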

We have investigated how correlated weights modify the neuronal dynamics in the two regimes exhibited by these networks. Correlated weights modify the spectrum of eigenvalues of the system linearized around a steady configuration (a fixed point), and this spectrum has a strong influence on the dynamics of the network. In particular, increasing the correlation between weights flattens the spectrum of eigenvalues, and this modification can be linked to a lowering of the onset of chaotic activity and an increase of the characteristic timescale of reverberating activity, that is, the period over which the network keeps track of its own activity. We have analyzed the dependence of this characteristic timescale on the correlation among weights, using simulations and theoretical techniques (dynamical mean-field theory, random matrix theory). To better understand the nature of the slowing down of reverberating activity, we also considered even simpler networks composed of linear units and studied how the overlaps among eigenvectors shape the time course of transient perturbations. This study demonstrated that knowledge of the eigenspectrum conveys only partial information about the dynamics, and that the structure of the eigenvectors also plays an important role. Finally, we have investigated the phenomenon by which, for sufficiently correlated weights, dynamics in the chaotic regime exhibit 'aging', in that the average relaxation time of the system grows as time evolves. We are currently finalizing a manuscript that will report these results.
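The flattening of the spectrum can be checked numerically. For Gaussian weights with reciprocal correlation eta, random matrix theory (the elliptic law) predicts that the eigenvalues fill an ellipse with real semi-axis 1 + eta and imaginary semi-axis 1 - eta, so correlation stretches the spectrum along the real axis while squashing it vertically. The sketch below uses our own construction and illustrative parameters to verify this prediction.

```python
import numpy as np

n, eta = 1000, 0.6
rng = np.random.default_rng(0)
a = rng.normal(size=(n, n))
s = (a + a.T) / np.sqrt(2)   # symmetric part
t = (a - a.T) / np.sqrt(2)   # antisymmetric part
J = (np.sqrt((1 + eta) / 2) * s + np.sqrt((1 - eta) / 2) * t) / np.sqrt(n)
np.fill_diagonal(J, 0.0)

ev = np.linalg.eigvals(J)
# Elliptic law: the spectrum fills an ellipse with real semi-axis 1 + eta
# and imaginary semi-axis 1 - eta.
print(ev.real.max())  # close to 1 + eta = 1.6
print(ev.imag.max())  # close to 1 - eta = 0.4
```

Since the stability of the linearized dynamics is governed by the rightmost eigenvalue, the larger real semi-axis explains why chaos sets in at a lower coupling strength, and eigenvalues crowding near the stability boundary produce slowly decaying modes, consistent with the longer reverberation timescale described above.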

### Potential impact

The project has addressed a timely and relevant problem in neuroscience by applying concepts from mathematics and physics, such as random matrix theory and dynamical mean-field theory. To carry out the project we have interacted with a wide variety of researchers, an interaction that has been facilitated by the strong scientific presence in the Paris area and which, we hope, will foster further cross-field collaborations. All the more so since our project has opened several research fronts that will certainly outlast the original research plan. We also think that our active participation in conferences and seminars has contributed to maintaining the excellent level of European research in computational and theoretical neuroscience.
