Automated Model Inference from Neural Dynamics for a Mechanistic Understanding of Cognition

Periodic Reporting for period 1 - AutoMIND (Automated Model Inference from Neural Dynamics for a Mechanistic Understanding of Cognition)

Reporting period: 2021-05-01 to 2023-04-30

Variations in cellular and network parameters are crucial in shaping brain dynamics, which are the foundation of human cognition and behavior. This project addressed the problem of inferring properties of neural circuits using computational modeling, machine learning (ML), and brain recordings. We approached this problem as an “inverse modeling” task: we first developed biologically realistic computer simulators of neural circuits. By finding model parameters that produce simulations resembling experimental data, we can obtain a “digital copy” of the circuit, enabling us to dissect the model in greater detail than is possible in in-vivo systems. To link recordings of neural population dynamics with their underlying circuit parameters, we used spiking neural networks (SNNs). However, with biologically realistic SNNs, it is challenging to identify even a single model configuration that can reproduce experimental data, let alone many data-consistent models and their uncertainties. Therefore, we developed probabilistic ML methods to address the problem of mechanistic model identification given experimental data in neuroscience.
The ability to infer neural circuit properties from brain recordings has significant implications for society: Understanding the link between neural circuit properties, dynamics, and computations is an ongoing challenge in brain research. This applies to both healthy brains and those affected by neurological and psychiatric disorders. Being able to create and analyze data-consistent computational models of neural circuits, instead of relying solely on in-vivo counterparts, is a significant step towards this goal, potentially accelerating neuroscience research, and reducing costs and the need for invasive experiments on model organisms. Moreover, computer models allow for parallel testing of numerous interventions to study their effects on pathological brain states, increasing efficiency while minimizing risks. Finally, ML methods for inverse modeling are broadly applicable in many areas of science where simulators are used, such as astrophysics, geology, biochemistry, and more.
The project pursued three aims. First, we developed a "family" of SNN models with biologically realistic parameters that, depending on parameter values, exhibit a range of network dynamics observed in-vivo, e.g. oscillations, bursting, and asynchronous activity. Second, we developed (Bayesian) ML algorithms for inverse modeling, particularly within the framework of simulation-based inference (SBI), to enhance the accuracy and efficiency of model parameter inference; a minimal sketch of such a workflow is given below. Finally, we applied these techniques to infer cellular and network properties from experimental data collected from human brain organoids and animal cortex. This enables us to infer changes across time and brain areas, advancing our understanding of early brain development and of how different neural circuit properties contribute to neural dynamics and computation.
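To make the second aim concrete, the following is a minimal, hypothetical sketch of a simulation-based inference workflow using the open-source `sbi` Python package. The prior bounds, simulator, and summary features shown here are illustrative assumptions, not the project's actual code, and the exact class names vary across `sbi` versions:

```python
import torch
from sbi.inference import SNPE
from sbi.utils import BoxUniform

# Hypothetical 28-dimensional parameter space with a uniform prior; the actual
# bounds and parameterization used in the project are not reproduced here.
n_params = 28
prior = BoxUniform(low=torch.zeros(n_params), high=torch.ones(n_params))

def simulate(theta):
    # Placeholder for the SNN simulator: maps parameter sets to summary
    # features of the simulated network dynamics (e.g. firing rate, burstiness).
    return torch.randn(theta.shape[0], 10)

# Train a neural posterior estimator on simulated (parameter, data) pairs.
theta = prior.sample((1_000,))
x = simulate(theta)
inference = SNPE(prior=prior)
inference.append_simulations(theta, x)
density_estimator = inference.train()
posterior = inference.build_posterior(density_estimator)

# Given the summary features of an observed recording, draw data-consistent parameters.
x_observed = torch.randn(10)
posterior_samples = posterior.sample((5_000,), x=x_observed)
```

The result is an approximate posterior distribution from which many data-consistent parameter configurations can be sampled for any given recording.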
First, we developed a novel SNN model architecture composed of excitatory and inhibitory neurons with clustered connectivity, and with variable cellular and network parameters such as membrane conductance and adaptation time constant. These properties are encoded in 28 free parameters, and the network exhibits a variety of experimentally observed dynamics, such as asynchrony, bursting, oscillations, and chaos. Many network parameters are correlated with network dynamics, and their relationships are diffuse and nonlinear. Furthermore, the models exhibit degeneracy and brittleness: many parameter configurations result in similar network dynamics, while slightly different parameter values can result in drastically different network dynamics.
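To illustrate one ingredient of the architecture described above, clustered excitatory-inhibitory connectivity, the following NumPy sketch builds binary connectivity matrices. The neuron counts and connection probabilities are made-up placeholder values, not the project's configuration:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Illustrative sizes and connection probabilities -- not the project's values.
n_exc, n_inh, n_clusters = 800, 200, 10
p_within, p_between, p_other = 0.20, 0.02, 0.10

# Assign excitatory neurons to clusters; E->E connections are denser within a cluster.
cluster_id = rng.integers(0, n_clusters, size=n_exc)
same_cluster = cluster_id[:, None] == cluster_id[None, :]
p_ee = np.where(same_cluster, p_within, p_between)

# Connectivity matrices follow the W[post, pre] convention.
W_ee = rng.random((n_exc, n_exc)) < p_ee      # excitatory -> excitatory (clustered)
W_ie = rng.random((n_inh, n_exc)) < p_other   # excitatory -> inhibitory
W_ei = rng.random((n_exc, n_inh)) < p_other   # inhibitory -> excitatory
np.fill_diagonal(W_ee, False)                 # no self-connections
```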
Second, we developed a new SBI algorithm that incorporates the recently developed generalized Bayesian inference formalism. The proposed method allows the user to specify arbitrary cost functions to evaluate the “goodness” of a parameter configuration in reproducing the data, and is robust under model misspecification, i.e. when the model cannot reproduce the data exactly. Like existing approaches, it uses deep neural networks and plugs easily into the existing SBI framework; the inference result is a distribution over model parameters that reproduce aspects of the data, with better-fitting models sampled more frequently. We evaluated the method on a variety of benchmark tasks and used it to find single-neuron model parameters that reproduce experimental recordings, finding superior performance over existing algorithms, especially in cases of misspecified models.
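For context, the generalized Bayesian inference formalism on which the algorithm builds replaces the likelihood with a user-chosen cost function. A standard way to write the resulting generalized posterior (the notation here is generic, not taken from the preprint) is:

```latex
p_\beta(\theta \mid x_o) \;\propto\; \exp\bigl(-\beta\,\ell(\theta, x_o)\bigr)\, p(\theta)
```

where \ell(\theta, x_o) scores how poorly simulations from parameters \theta reproduce the observed data x_o, p(\theta) is the prior, and \beta controls how strongly the cost constrains the posterior; choosing the negative log-likelihood as \ell with \beta = 1 recovers the standard Bayesian posterior.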
Finally, we applied SBI to infer SNN model parameters that produce simulations matching real data. We first validated our approach by performing inference on simulated network activity: SBI identified many models that reproduce both observed and unobserved features of network dynamics, revealing covariance structure and degeneracy between parameters. Applying the approach to a dataset of brain organoid electrophysiological recordings, we automatically identified models that exhibit network bursts, while elucidating the co-evolution of cellular and network parameters over 40 weeks of development. Together, these results demonstrate how SBI can advance our understanding of the dynamical regimes of flexibly parameterized SNNs, while providing mechanistic explanations and generating hypotheses about hidden circuit properties that underlie changes in brain network dynamics.
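As a rough illustration of how covariance structure and degeneracy can be read off posterior samples, the snippet below continues the hypothetical `sbi` sketch given earlier (it assumes the `posterior` and `x_observed` objects from that illustrative example):

```python
import numpy as np

# Hypothetical continuation of the earlier sbi sketch: draw data-consistent
# parameter sets and inspect their pairwise correlations.
samples = posterior.sample((5_000,), x=x_observed).numpy()  # shape (5000, 28)
corr = np.corrcoef(samples.T)                               # 28 x 28 correlation matrix
# Large off-diagonal entries flag parameter pairs that trade off against each
# other while producing similar network dynamics (degeneracy).
```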
The novel SBI algorithm is described in an arXiv preprint (Generalized Bayesian Inference for Scientific Simulators via Amortized Cost Estimation) and is currently under review. Results on discovering SNN models from neural recordings using SBI have been presented at several neuroscience conferences, including the SfN, Bernstein, and COSYNE meetings, and the manuscript is in preparation. Related work on inferring cognitive models and circuit wiring models from experimental data, as well as probabilistic ML methods for modeling neurophysiological recordings, resulted from collaborations within the group, contributing to the overall project goal of building ML tools to understand how neural circuits shape neural dynamics and computation.
The spiking neural network architecture described above presents several advances beyond the current state of the art in terms of the different cellular and network mechanisms available, the number of free parameters, and the range of dynamics observed. Critically, we do not fix parameter values a priori, which would prematurely limit the hypothesis space and the repertoire of network behavior. For such a high-dimensional and complex network model, no methods currently exist for parameter inference given real observed data. Therefore, the combination of the network model and the application of SBI methods represents significant progress beyond the current state of the art. Our SBI algorithm incorporating generalized Bayesian inference is also a novel contribution to the Bayesian inverse modeling literature: it generalizes neural-network-based SBI algorithms and bypasses the need for complicated neural density estimators, while offering a potential solution for inference under model misspecification. Altogether, these results incrementally advance our understanding of how neural circuits shape brain dynamics and cognition, and offer additional tools for inverse modeling in many other scientific domains, ranging from astrophysics to synthetic biology.
Figure: Overview of inverse modeling of neural circuits.
Figure: Summary of results on discovering single-neuron and network models.