CORDIS - EU research results

Curiosity and the Development of the Hidden Foundations of Cognition

Periodic Reporting for period 3 - FOUNDCOG (Curiosity and the Development of the Hidden Foundations of Cognition)

Reporting period: 2022-01-01 to 2023-06-30

Human infants initially develop slowly, compared with many other animals. We hypothesize that this slow early development lays the foundations for cognition, and is critical to the flexibility and generalisation that characterise human intelligence.

Our project will develop computational models of infants’ acquisition of early knowledge, using deep neural networks, which currently dominate machine learning. We will then test the models’ predictions with magnetic resonance imaging (MRI) of human infants.

The goals are to understand the development of the human mind in healthy infants, and how it is disrupted by brain injury. We will recruit a cohort of healthy infants from the maternity ward and measure development longitudinally, at 2 and 9 months of age, using MRI and online testing. We will also recruit a second cohort of infants from the neonatal intensive care unit, who are at an elevated risk of developing cognitive, behavioural and social impairments later in life, and contrast their brain development at the same points during the first year.

The overall objectives are to develop a scientific understanding of development during the helpless period of infancy and of how it can be disrupted. This may eventually inform interventions that reduce the risk of developmental impairments. Furthermore, understanding how human infants learn should inspire new directions for machine learning.

Our project has been running for 22 months and we have so far published a number of abstracts and conference papers.

*Self-supervised Babies
Developmental psychologists have shown that infants are learning many things in their first year. However, linguistic understanding is primitive until the end of the year, and so their learning must be "self-supervised", in that they can learn without being explicitly taught. At present, machines are mostly taught using hand-curated datasets, which are painstakingly labelled by humans. Self-supervised learning algorithms can potentially reduce the dependence on these datasets, and so are of great interest to the machine learning community.
In an arXiv preprint, Lorijn Zaadnoordijk from the lab and our collaborator Tarek Besold have reviewed the developmental psychology literature to identify potential "next big thing(s)" for this area of machine learning.
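The defining property described above, that the training signal comes from the data itself rather than from human labels, can be illustrated with a hypothetical numpy-only sketch (this is an invented toy, not the project's actual model): two noisy "views" of the same image form a positive pair, views of different images form a negative pair, and the similarity gap between them is a supervision signal that requires no labelling.

```python
# Hypothetical toy sketch of the self-supervised idea (not the project's model).
import numpy as np

rng = np.random.default_rng(0)

def augment(img):
    # Toy "augmentation": additive noise (real systems use crops, colour jitter, ...).
    return img + 0.3 * rng.normal(size=img.shape)

def cosine(a, b):
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

images = [rng.normal(size=(8, 8)) for _ in range(50)]

# Similarity of two views of the SAME image vs views of DIFFERENT images:
pos = np.mean([cosine(augment(im), augment(im)) for im in images])
neg = np.mean([cosine(augment(images[i]), augment(images[(i + 1) % 50]))
               for i in range(50)])
# pos is large and neg is near zero; a contrastive objective trains a network
# to widen this gap, and the gap itself is the label-free supervision signal.
```

Real self-supervised systems compute this kind of contrast on learned network embeddings rather than raw pixels, but the source of the training signal is the same.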
*Learning Semantics
Humans have a deep understanding of the world. When we recognise an object, we know what other things it is similar to and we can classify it as part of some superordinate category. This type of knowledge is called semantic knowledge. Cliona O'Doherty has been testing the idea that by observing the co-occurrences of objects in the world, infants could not just learn how to recognise things, but also learn about semantics. She has done this by setting up a computational model using a deep neural network.
Cliona O'Doherty will present "SemanticCMC - improved semantic self-supervised learning with naturalistic temporal co-occurrences" at the workshop Self-supervised learning: theory and practice at Neural Information Processing Systems (NeurIPS) 2020.
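The intuition behind learning semantics from co-occurrence can be shown with a hypothetical toy example (the scenes below are invented, and counting co-occurrences directly is a simplified stand-in for the deep network trained on temporal co-occurrences in the actual work): objects that appear together in the same scenes end up with similar co-occurrence vectors, so similarity in that space groups them into semantic categories.

```python
# Hypothetical toy example: semantic structure from visual co-occurrence.
import numpy as np

# Invented scenes, each listing the objects visible together.
scenes = [
    ["fork", "plate", "cup"], ["plate", "cup", "spoon"],
    ["fork", "spoon", "plate"], ["dog", "ball", "grass"],
    ["dog", "grass", "tree"], ["ball", "grass", "tree"],
]
objects = sorted({o for s in scenes for o in s})
idx = {o: i for i, o in enumerate(objects)}

# Count how often each pair of objects is seen together.
cooc = np.zeros((len(objects), len(objects)))
for s in scenes:
    for a in s:
        for b in s:
            if a != b:
                cooc[idx[a], idx[b]] += 1

def sim(a, b):
    # Cosine similarity between two objects' co-occurrence vectors.
    va, vb = cooc[idx[a]], cooc[idx[b]]
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

# Kitchen objects resemble each other more than outdoor objects:
# sim("fork", "spoon") is high, while sim("fork", "dog") is zero here.
```

No one told the model that forks and spoons belong together; the category structure emerges from observing what tends to appear with what, which is the kind of signal available to a pre-linguistic infant.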
*How Can Random Networks Explain the Brain So Well?
A part of the brain called the inferotemporal (IT) cortex is critical for humans and other primates to visually recognise objects. Currently, deep neural networks are the best models of brain responses in the IT cortex of adults. It has been argued that this is because the visual features that deep neural networks learn for object recognition are the same as those IT uses. However, Anna Truzzi has been investigating a conundrum: untrained (randomly initialised) deep neural networks also do a surprisingly good job of modelling IT activity.
Anna presented the paper "Convolutional Neural Networks as a Model of Visual Activity in The Brain: Greater Contribution of Architecture Than Learned Weights" at the workshop Bridging AI and Cognitive Science at the International Conference on Learning Representations (ICLR) 2020. She will also be presenting at the NeurIPS 2020 workshop Shared Visual Representations in Humans and Machine Intelligence, with the title "Understanding CNNs as a model of the inferior temporal cortex: using mediation analysis to unpack the contribution of perceptual and semantic features in random and trained networks". This work is also directly relevant to neuroscientists, and was presented at the neuromatch 1.0 conference with the title "Are deep neural networks effective models of visual activity in the brain because of their architecture or training?".
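Why random weights work at all can be sketched in a few lines. The following is a hypothetical numpy toy with invented grating stimuli (not real IT data or the networks used in the papers above): even a single untrained convolutional layer maps different stimulus classes to separable feature vectors, because the architecture itself, local filtering, ReLU and pooling, imposes useful structure before any learning happens.

```python
# Hypothetical sketch: random (untrained) convolutional features still
# separate stimulus classes, illustrating the contribution of architecture.
import numpy as np

rng = np.random.default_rng(1)
filters = rng.normal(size=(32, 3, 3))  # random filters, never trained

def random_conv_features(img, filters):
    # 3x3 valid convolution -> ReLU -> global average pooling, per filter.
    h, w = img.shape
    feats = []
    for f in filters:
        fmap = np.array([[np.sum(img[i:i + 3, j:j + 3] * f)
                          for j in range(w - 2)] for i in range(h - 2)])
        feats.append(np.maximum(fmap, 0.0).mean())
    return np.array(feats)

def grating(vertical):
    # Toy stimuli: noisy vertical or horizontal stripes.
    img = np.zeros((10, 10))
    if vertical:
        img[:, ::2] = 1.0
    else:
        img[::2, :] = 1.0
    return img + 0.1 * rng.normal(size=(10, 10))

feats_v = [random_conv_features(grating(True), filters) for _ in range(10)]
feats_h = [random_conv_features(grating(False), filters) for _ in range(10)]

centroid_v = np.mean(feats_v, axis=0)
centroid_h = np.mean(feats_h, axis=0)
between = np.linalg.norm(centroid_v - centroid_h)  # class separation
within = np.mean([np.linalg.norm(f - centroid_v) for f in feats_v])
# between exceeds within: random features already distinguish the classes.
```

This is only an illustration of the phenomenon; the mediation analyses in the papers above go further, separating the perceptual and semantic contributions of random and trained networks when modelling IT responses.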

References
O'Doherty, C. and Cusack, R. (2020) "SemanticCMC - improved semantic self-supervised learning with naturalistic temporal co-occurrences" Workshop: Self-supervised learning: theory and practice, NeurIPS.
Truzzi, A. and Cusack, R. (2020) "Convolutional Neural Networks as a Model of Visual Activity in The Brain: Greater Contribution of Architecture Than Learned Weights" Workshop: Bridging AI and Cognitive Science, International Conference on Learning Representations.
Truzzi, A. and Cusack, R. (2020) "Are deep neural networks effective models of visual activity in the brain because of their architecture or training?" Neuromatch conference 1.0.
Truzzi, A. and Cusack, R. (2020) "Understanding CNNs as a model of the inferior temporal cortex: using mediation analysis to unpack the contribution of perceptual and semantic features in random and trained networks" Workshop: Shared Visual Representations in Humans and Machine Intelligence, NeurIPS.
Zaadnoordijk, L., Besold, T.R. and Cusack, R. (2020) "The Next Big Thing(s) in Unsupervised Machine Learning: Five Lessons from Infant Learning" arXiv:2009.08497.
We are about to begin MRI of our two cohorts:
* The typically developing cohort will be the largest group of awake infants ever scanned with functional MRI.
* The higher-risk NICU cohort will be the first group of high-risk infants ever scanned with functional MRI.

Our computational modelling aims to yield new directions in machine learning. Our work, in which we create networks to simulate infant learning, has already been accepted at major machine learning meetings.