CORDIS - EU research results

Material Constraints Enabling Human Cognition

Periodic Reporting for period 2 - MatCo (Material Constraints Enabling Human Cognition)

Reporting period: 2022-04-01 to 2023-09-30

Recent breakthroughs in comparative neurobiological research highlight specific features of the connectivity structure of the human brain, which open new perspectives on understanding the neural mechanisms of human-specific higher cognition and language. In delineating the material basis of human cognition and language, neurobiologically founded modelling emerges as the method of choice: it allows not only for ‘external fitting’ of models to key experimental data but also for ‘internal’ or ‘material fitting’ of the model components to the structure of brains, cortical areas and neuronal circuits.

This novel research pathway offers biologically well-founded and computationally precise perspectives on addressing exciting, hitherto unanswered fundamental questions about higher brain functions, such as the following: How can humans build vocabularies of tens or even hundreds of thousands of words, whereas our closest evolutionary relatives typically use fewer than 100? How is semantic meaning implemented for gestures and words, and, more specifically, for referential and categorical terms? How can the grounding and interpretability of abstract symbols be anchored biologically? Which features of connectivity between nerve cells are crucial for the formation of discrete representations and categorial combination? Would modelling of cognitive functions using brain-constrained networks allow for better predictions of the brain activity indexing the processing of signs and their meaning?

The ERC Advanced Grant project “Material Constraints Enabling Human Cognition”, or “MatCo”, led by Prof Pulvermüller, uses novel insights from human neurobiology, translated into computational deep neural network models, to find new answers to long-standing questions in cognitive science, linguistics and philosophy. Models replicating structural differences between human and non-human primate brains are applied to delineate the mechanisms underlying specifically human cognitive capacities. Key experiments validate critical model predictions, and the new neurophysiological data will be used to further improve the biologically constrained networks.

Prof Pulvermüller leads the research group for Neuroscience of Language and Pragmatics at the Department of Philosophy, Freie Universität Berlin. He is PI at the Berlin School of Mind and Brain and the Einstein Center of Neurosciences Berlin.
As in standard neural network research, we propose to simulate cognitive processes with networks composed of elements that are functionally similar to the nerve cells that process information in the brain. However, the structure and function of standard neural networks still differ substantially from those of nerve-cell networks in the human brain. Our strategy is to make neural networks as similar to the brain as possible in order to approximate the mechanistic basis of cognition. We call this approach brain-constrained neural modelling. Here are key examples of the research performed so far:

Foundations of brain-constrained neural modelling
The novel research strategy of the MatCo project and related work was introduced in a recent review paper in the journal Nature Reviews Neuroscience. We define seven brain constraints important for making neural networks more brain-like, which address the neuron model, local connectivity, learning mechanisms, inhibition-mediated regulation and control, anatomical area structure and between-area connectivity, along with the multi-level nature of these features. We show how the brain-similarity of artificial networks enables inferences about the mechanisms underlying cognition.

Explaining fast mapping of form and meaning
One of the most intriguing questions in the field of cognitive science, psychology and linguistics is how children can learn new words extremely rapidly. In contrast, most artificial neural networks need thousands of learning trials to link signs to meaning. We used neural networks with human-like connectivity and found that word-specific neural representations emerged after just a few learning events. This required that utterances and referent objects had first been encountered separately by the network. These findings are key to understanding the mechanisms of word meaning acquisition.
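To make the fast-mapping idea concrete, here is a minimal, purely illustrative Hebbian sketch. It is not the MatCo model itself: the pattern size and active-unit indices are arbitrary assumptions, and only the one-shot association step is shown. It illustrates how a single co-occurrence of two already-established activity patterns suffices to create a retrievable form-meaning link:

```python
import numpy as np

N = 100  # units per pattern (arbitrary toy size)

# Sparse binary activity patterns standing in for a word form and a
# referent object; the active indices are arbitrary illustrative choices.
word = np.zeros(N)
word[[3, 17, 42, 56, 78]] = 1
obj = np.zeros(N)
obj[[5, 12, 33, 61, 90]] = 1

# A single co-presentation: Hebbian (correlation) learning strengthens
# the links between all co-active units in one step.
W = np.outer(obj, word)

# Presenting the word form alone now reactivates the full object pattern.
retrieved = (W @ word > 0).astype(float)
assert np.array_equal(retrieved, obj)
```

In the brain-constrained simulations, the prior separate exposure to utterances and objects is what establishes such stable activity patterns in the first place; the sketch above only illustrates why, once those patterns exist, very few pairings are needed.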

The role of experiences and symbols in building abstract concepts
Most mechanistic network models are still far from addressing complex cognitive processes such as abstract concept processing. In recent work, we show how structural differences between concrete and abstract concepts lead to different learning success in brain-constrained model simulations. Our results provide the first neurobiological explanation of why children can learn concrete concepts from experience but not abstract ones. Furthermore, we report breakthrough insights about how language mechanistically influences and enables abstract concept formation.
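The correlation-structure argument can be illustrated with a toy co-occurrence computation. The feature patterns below are hypothetical stand-ins, not project data; they only encode the structural point that concrete-concept instances share features while abstract-concept instances do not:

```python
import numpy as np

# Rows = learning episodes (instances of one concept), columns = feature units.

# Concrete concept: instances share perceptual features (columns 0-3).
concrete = np.array([
    [1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
    [1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0],
    [1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0],
])
# Abstract concept: instances share no perceptual features with each other.
abstract = np.array([
    [1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0],
])

def add_symbol(instances):
    """Append a symbol unit active on every episode (the word is
    uttered whenever an instance of the concept is experienced)."""
    return np.hstack([instances, np.ones((len(instances), 1), dtype=int)])

def cooccurrence(episodes):
    """Unit-by-unit co-occurrence counts: the raw material of
    Hebbian correlation learning."""
    return episodes.T @ episodes

Cc = cooccurrence(concrete)  # no symbol needed: features co-occur directly
assert Cc[0, 1] == 3         # co-active in all three episodes

C = cooccurrence(add_symbol(abstract))
# Features of different abstract instances never co-occur...
assert C[0, 4] == 0
# ...but every instance feature co-occurs with the symbol, so Hebbian
# learning can bind the scattered features together via the symbol unit.
assert all(C[-1, f] >= 1 for f in range(12) if abstract[:, f].any())
```

This is exactly the asymmetry the simulations exploit: for concrete concepts, correlation learning succeeds on the features alone; for abstract concepts, it succeeds only when a symbol co-occurs with all instances.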
Here are some of the key findings from the MatCo project that go beyond the state of the art:

• Several neurobiological constraints need to be applied to neural networks to make them more neurobiologically realistic. Such ‘brain-constrained’ neural networks are important tools for obtaining new clues about the mechanisms underlying language and thought.

• Brain-constrained neural networks can be used to neurobiologically explain a broad range of cognitive phenomena, ranging from the cortical areas crucial for semantic and conceptual processing to causal effects of language on perception and concept formation.

• The rapid learning of symbol-meaning links in infants can be explained by neural networks structured according to human cortical anatomy and realizing biologically realistic learning.

• The correlation structure of perceptual-semantic features of concrete and abstract concepts explains why brain-like networks governed by biologically realistic correlation learning build representations for concrete concepts but not for abstract ones.

• Language and symbol learning, in combination with experiences, is necessary for building representations of abstract symbols in brain-like networks. The key explanatory fact is that correlations between the features of different conceptual instances are typically low, whereas correlations between instance features and a symbol applicable to all instances are significantly higher. This correlation structure of perceptual-semantic features explains why brain-constrained networks, and indeed brains, build abstract concepts only with symbol support.

• Verbal working memory mechanisms are known to be crucial for why humans, but not apes or monkeys, can build huge vocabularies of tens or even hundreds of thousands of words. The emergence of verbal working memory in human brains crucially depends on the specifically human large-scale between-area connectivity structure of that part of the human cortex which is most important for language. Brain-constrained modelling has helped to precisely define these anatomical connectivity features.
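As a rough illustration of the connectivity argument, the toy simulation below shows how reciprocal between-area feedback lets activity reverberate after a brief input, a minimal stand-in for working-memory maintenance. All values (decay, gain, step count) are invented, and two scalar "areas" replace the project's multi-area spiking networks:

```python
steps = 20
decay = 0.5          # leak per time step (toy value)
feedback_gain = 2.0  # strength of reciprocal between-area links (toy value)

def simulate(feedback):
    """Two reciprocally connected 'areas'; area 1 gets a brief input."""
    a1, a2 = 1.0, 0.0
    trace = []
    for _ in range(steps):
        new_a1 = decay * a1 + (feedback_gain * a2 if feedback else 0.0)
        new_a2 = decay * a2 + (feedback_gain * a1 if feedback else 0.0)
        # cap activity, as inhibitory regulation would in a real network
        a1, a2 = min(new_a1, 1.0), min(new_a2, 1.0)
        trace.append(a1)
    return trace

with_loop = simulate(feedback=True)
without_loop = simulate(feedback=False)

print(with_loop[-1])     # → 1.0: activity maintained by reverberation
print(without_loop[-1])  # near zero: activity decays without the loop
```

Without the reciprocal connections, activity simply leaks away; with them, the stimulus is held active long after the input ends, which is the functional signature of a working-memory circuit.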

In the future, further features of human cognition and language will be addressed. These include the combinatorial mechanisms emerging in brain-like networks and the learning of interactive communication through symbols and gestures during the first years of human life.

See www.fu-berlin.de/matco
Neuronal activity patterns for an action/object word (red/blue dots) in a brain-constrained network