
Generalization in Mind and Machine

Periodic Reporting for period 4 - M and M (Generalization in Mind and Machine)

Reporting period: 2022-03-01 to 2024-02-29

There is widespread interest in deep neural networks (DNNs), which are making impressive strides in solving difficult tasks. This raises the question of whether these networks solve these problems in a human-like manner and can thus tell us something important about the human brain. This project compares the performance of DNNs to that of humans across a range of domains. We are specifically concerned with how well these models generalize in the domains of vision, memory, problem solving, and language, and with how their performance compares to that of humans.

The objectives are:

1) Compare the performance of networks and humans in a range of cognitive domains to assess whether these models solve problems in a human-like way.

2) Compare the internal representations that networks and humans use to solve various tasks.

3) Carry out behavioural studies that assess human generalization and compare how well neural networks capture human performance.

4) Add various cognitive and biological constraints to models to make their performance more human-like. This will not only be relevant to understanding humans, but will also be useful for engineers and computer scientists who are only concerned with network performance (regardless of whether the models are similar to humans).

5) Compare DNNs to models that implement symbolic computations in order to assess the importance of symbols (if any) to human minds and machine learning.

6) Develop a new benchmark test ("MindSet: Vision") to assess how well models explain the psychology of vision. The benchmark will be made available to other research teams so they can easily assess the psychological plausibility of their models in the domain of vision.

It is widely claimed that deep neural networks (DNNs) provide key new insights into how the brain operates in various domains, including vision, language, reasoning, and memory. However, these claims are made with almost no reference to psychological research. In this project we have carried out many studies that assess how well DNNs capture psychological findings across a range of domains, and we have shown that these models often perform poorly, challenging many of the strong conclusions that have been drawn. This work was summarized in a recent target article and response published in Behavioral and Brain Sciences (Bowers et al., 2023a,b). Apart from conferences and invited talks, our work has been covered in the press: I was recently interviewed on the popular podcast “Brain Inspired” (https://www.youtube.com/watch?v=Nen4ifJpZUs), and Noam Chomsky highlighted my work concerning language in a recent New York Times article (https://www.nytimes.com/2023/03/08/opinion/noam-chomsky-chatgpt-ai.html?searchResultPosition=1).


1) In the domain of vision, we have achieved the following:

We have shown that DNNs do not encode objects by their parts and the relations between those parts. This contrasts with humans, who explicitly encode object parts and their relations (Malhotra et al., 2023).

We have shown that DNNs encode the wrong sort of features when classifying objects (Malhotra et al., 2022).

We have shown that the adversarial images that fool networks do not fool humans in a similar way, contrary to some high-profile claims (Dujmović et al., 2020).
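
For background, such adversarial images are typically generated by gradient-based perturbations of natural images. Below is a minimal FGSM-style sketch in PyTorch; it is illustrative only, and not the stimulus-generation procedure used in Dujmović et al. (2020):

```python
import torch
import torch.nn.functional as F

def fgsm(model, image, label, eps=0.03):
    # Fast Gradient Sign Method: nudge every pixel a small step in the
    # direction that increases the classification loss.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image.unsqueeze(0)), label.unsqueeze(0))
    loss.backward()
    adv = image + eps * image.grad.sign()  # eps bounds the per-pixel change
    return adv.clamp(0, 1).detach()
```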

We have found that DNNs do not support a range of Gestalt organizational principles that are central to human perception and object recognition (Biscione & Bowers, 2023).

We have shown that Representational Similarity Analysis is problematic in the way it is used in the literature (Dujmović et al., 2024).
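
For context, Representational Similarity Analysis compares two systems by correlating their representational dissimilarity matrices (RDMs). Here is a minimal sketch of the standard pipeline, assuming activations are stored as condition × feature matrices (all names and data are illustrative):

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(activations):
    # activations: (n_conditions, n_features), one row per stimulus.
    # Dissimilarity = 1 - Pearson correlation between condition patterns;
    # pdist returns the condensed upper triangle of the RDM.
    return pdist(activations, metric="correlation")

def rsa_score(acts_a, acts_b):
    # Rank-correlate the two RDMs, the usual RSA summary statistic.
    rho, _ = spearmanr(rdm(acts_a), rdm(acts_b))
    return rho

# Hypothetical example: a model layer vs. simulated brain responses to 20 stimuli.
rng = np.random.default_rng(0)
print(rsa_score(rng.standard_normal((20, 512)), rng.standard_normal((20, 100))))
```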

We have shown that the human visual system supports on-line translation invariance (Blything et al., 2020, 2021), and that improving the training environment of DNNs can help models support key properties of human vision, including translation and other invariances (Biscione & Bowers, 2021, 2022).
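
To illustrate, translation invariance in a model can be probed by checking whether its classification is stable when an object is shifted to locations unseen during training. A minimal sketch, assuming a trained PyTorch classifier (the shifting and scoring logic here is ours, not the exact protocol of the cited papers):

```python
import torch

def translation_invariance(model, image, shifts):
    # image: (C, H, W) tensor; shifts: list of (dy, dx) pixel offsets.
    # Returns the fraction of shifted copies classified like the original.
    model.eval()
    with torch.no_grad():
        base = model(image.unsqueeze(0)).argmax(dim=1)
        hits = 0
        for dy, dx in shifts:
            # torch.roll wraps pixels around the border; fine for a sketch.
            shifted = torch.roll(image, shifts=(dy, dx), dims=(1, 2))
            hits += int(model(shifted.unsqueeze(0)).argmax(dim=1) == base)
    return hits / len(shifts)
```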

When we add a biological constraint to convolutional neural networks (adding ‘edge detectors’, much like simple cells in visual cortex), they behave in a more human-like way and do not pick up on single pixels to identify objects (Evans et al., 2022; Malhotra et al., 2020; Tsvetkov et al., 2023).
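
A minimal sketch of this kind of constraint: a fixed bank of oriented Gabor filters as the first layer of a network, loosely analogous to V1 simple cells. All parameters and names below are illustrative, not the exact architecture of the cited papers:

```python
import math
import torch
import torch.nn as nn

def gabor_kernel(size=11, sigma=2.5, theta=0.0, lam=6.0):
    # One oriented Gabor filter: a Gaussian envelope times a sinusoidal carrier.
    half = size // 2
    y, x = torch.meshgrid(torch.arange(-half, half + 1, dtype=torch.float32),
                          torch.arange(-half, half + 1, dtype=torch.float32),
                          indexing="ij")
    xr = x * math.cos(theta) + y * math.sin(theta)
    g = torch.exp(-(x**2 + y**2) / (2 * sigma**2)) * torch.cos(2 * math.pi * xr / lam)
    return g - g.mean()  # zero-mean: responds to edges, not overall luminance

class GaborFrontEnd(nn.Module):
    # Fixed (non-learned) bank of oriented edge detectors as the first layer.
    def __init__(self, n_orientations=8, size=11):
        super().__init__()
        kernels = torch.stack([gabor_kernel(size=size, theta=i * math.pi / n_orientations)
                               for i in range(n_orientations)])
        conv = nn.Conv2d(1, n_orientations, kernel_size=size, padding=size // 2, bias=False)
        conv.weight.data = kernels.unsqueeze(1)
        conv.weight.requires_grad = False  # keep the front end fixed during training
        self.conv = conv

    def forward(self, x):
        return torch.relu(self.conv(x))
```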

We have developed a new benchmark for testing DNNs against key psychological findings in the domain of vision (Biscione et al., 2024).



2) In the domain of language, we have achieved the following:

We have found that models of spoken word identification show some similarities to human speech perception, but they also differ in fundamental ways (Adolfi et al., 2022).

We have shown that DNNs do a good job of accounting for a range of key psychological results in visual word identification (Yin et al., 2023).

We have shown that large language models can learn “impossible” languages that are unlike any human language and that humans would find difficult, if not impossible, to learn (Mitchell & Bowers, 2020).
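
For context, such “impossible” languages are typically constructed by applying unnatural transformations to real text. The specific rules below are illustrative, not necessarily those used in Mitchell & Bowers (2020):

```python
import random

def reverse_words(sentence: str) -> str:
    # An unnatural rule: every sentence has its word order fully reversed.
    return " ".join(reversed(sentence.split()))

def shuffle_words(sentence: str, seed: int = 0) -> str:
    # Another unnatural rule: word order is randomly permuted per sentence.
    words = sentence.split()
    random.Random(seed).shuffle(words)
    return " ".join(words)

print(reverse_words("the cat sat on the mat"))  # mat the on sat cat the
```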

We have shown that language models do not capture some fundamental syntactic generalizations that humans do, such as Principle C (Mitchell et al., 2019).

We have developed a new empirical method to assess how well models of word naming can generalize to novel words (Gubian et al., 2022).


3) In the domain of reasoning, we have achieved the following:

We have shown that standard DNNs that have been claimed to support same/different visual reasoning in fact fail when tested appropriately (Puebla & Bowers, 2022, 2024).

We have found that networks that learn disentangled representations continue to fail in combinatorial generalization tasks (Montero et al., 2021, 2022).
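
For context, combinatorial generalization is typically tested by holding out particular combinations of generative factors during training and testing on the unseen combinations. A minimal sketch of such a split (the factor names are illustrative):

```python
import itertools

# Hypothetical generative factors for a synthetic image dataset.
factors = {
    "shape": ["square", "ellipse", "heart"],
    "color": ["red", "green", "blue"],
    "position": ["left", "right"],
}

all_combos = list(itertools.product(*factors.values()))

# Hold out one shape-color pairing: the model sees hearts and sees blue
# during training, but never a blue heart.
held_out = [c for c in all_combos if c[0] == "heart" and c[1] == "blue"]
train = [c for c in all_combos if c not in held_out]

print(len(train), "training combinations;", len(held_out), "held-out combinations")
```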

We have developed new network architectures that support more widespread generalizations than standard models (Mitchell & Bowers, 2021; Vankov & Bowers, 2019).

We have shown that feed-forward networks that adapt at multiple loci outside the weights are better able to solve simple logical problems (Habashy et al., 2024).

Our research programme advances on the state of the art in two key ways. First, we have highlighted how psychological research needs to play a much more central role in developing DNN models of brains. Second, we have highlighted a methodological problem with current practice: correlations do not imply causation, and good predictions do not imply that two systems share similar mechanisms. Indeed, we have provided detailed examples in which good predictions are obtained from a model that is designed to be non-human-like (Dujmović et al., in press). In order to advance the field, we have argued that the research community needs to carry out controlled experiments that manipulate independent variables in order to make claims regarding DNN-brain alignment. We have recently published a new benchmark dataset called "MindSet: Vision" that is designed to make it easy for researchers to test their models on key psychological experiments in vision, where the images have been manipulated in systematic ways to test specific hypotheses regarding vision and object recognition (Biscione et al., 2024). I discussed the methodological problems with the current approach to evaluating and building models in an invited workshop talk at NeurIPS (2022) entitled "Researchers Comparing DNNs to Brains Need to Adopt Standard Methods of Science", which can be watched here: https://nips.cc/virtual/2022/63150. We have highlighted the problems with current methods in high-profile journals and advocated for a research programme focused on manipulating independent variables (Bowers et al., 2023a,b,c).
Images attached to the report:
Poster by Blything et al. at Cognitive Science (2019)
Photo of the research group on an away day
Poster by Malhotra et al. at Cognitive Science (2019)
Poster by Llera Montero et al. at UK Neural Computation (2019)