Abstraction and Generalisation in Human Decision-Making

Periodic Reporting for period 2 - NEUROABSTRACTION (Abstraction and Generalisation in Human Decision-Making)

Reporting period: 2019-01-01 to 2020-06-30

Our project aims to understand how humans learn and make decisions. A specific focus of the project is understanding how humans learn abstract, conceptual information that describes how the world is structured. This sort of conceptual information can be especially useful when making decisions in novel settings. For example, most people can navigate a foreign city where the language, coinage and customs are unfamiliar, because they understand concepts such as "greeting", "taxi" and "map". Our project addresses this using a mixture of behavioural experiments that measure human learning and decision-making, computer modelling that simulates the learning and decision process, and neural recording experiments (using fMRI) that help us understand how information is organised in neural circuits.

This project has major translational potential in two areas. The first is that despite exciting recent progress in artificial intelligence (AI) research, machine learning researchers have struggled to build agents that learn abstractions or behave flexibly in novel settings. One possibility is that there is something special about how humans learn that allows them to acquire and generalise abstract knowledge. We aim to identify what this might be. Indeed, many of our computational simulations use deep neural networks, the tool of choice in contemporary machine learning, to simulate the learning process. However, unlike most computer scientists, we seek inspiration from neurobiology and cognitive science. A second translational outlet is education. By understanding how humans learn, we can gain insights into how to teach people to acquire information more efficiently and effectively. Our projects ask why, at the level of neural computation, humans learn better from some curricula than others. We are actively seeking opportunities to translate our work in this area.

The overall objectives are 1) to disclose new information about the cognitive mechanisms by which humans learn and make decisions, with a focus on abstract knowledge; 2) to capture the processes by which this occurs in computational simulations involving neural networks; and 3) to compare the representations formed by those neural networks to signals in the human brain, measured with fMRI. Our work seeks to establish and sustain a virtuous circle between psychology/neuroscience and AI research for the mutual benefit of both fields.
In this phase of the project, we have built and tested a new theory of how abstractions are learned. Our theory is grounded in longstanding ideas in cognitive science, but differs in that (a) it is grounded in biologically realistic computational models, using layered networks of neurons; and (b) it makes detailed proposals about the neural geometry that arises in such networks, and is therefore able to provide an implementational theory of abstraction formation for neuroscience. The theory is supported by new empirical evidence (Luyckx et al 2019, eLife) as well as new work (on the brink of submission; Luyckx et al, in press; Flesch et al, in press) and has been partly described in a review article (Summerfield et al 2020, Prog. Neurobiology).

The theory states that potentially high-dimensional information (e.g. visual or auditory signals) is projected onto a low-dimensional representational space in the dorsal stream, in which it is grounded in the actions that agents take with their effectors (in primates, their eyes and limbs). Generalisation between physically dissimilar inputs that obey a common relational structure occurs when information is projected into a common relational format in parietal cortex. This is supported by the finding that, for example, after participants learn the reward probabilities of objects, those objects are coded with the same abstract signals that are measured in a magnitude comparison task involving symbolic number. New work shows that when information with common structure is learned at different times, neural representations converge, via normalisation processes, to a format that facilitates generalisation. Other behaviour/modelling/imaging projects that explore human learning of structured hierarchies, tasks composed of subtasks, and orthogonal tasks are also nearly ready for publication, and as far as we understand, can all be explained under the proposed theory.
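The projection idea above can be sketched in a toy simulation. The snippet below is a minimal illustration, not the project's actual models: all names, dimensions and noise levels are hypothetical. It shows how a simple least-squares read-out can project high-dimensional stimuli onto a learned one-dimensional axis that recovers a latent magnitude variable, preserving its relational (ordinal) structure.

```python
import numpy as np

rng = np.random.default_rng(0)

n_items, dim = 12, 20
# Latent 1-D magnitude: the abstract variable shared across stimuli.
magnitude = np.arange(n_items, dtype=float)

# Hypothetical high-dimensional stimuli: the magnitude is embedded
# along a random direction in a 20-D space, plus item-specific noise.
axis = rng.normal(size=dim)
axis /= np.linalg.norm(axis)
X = np.outer(magnitude, axis) + 0.1 * rng.normal(size=(n_items, dim))

# Least-squares linear read-out: learn a projection of the 20-D
# inputs onto a single axis that recovers the latent magnitude.
w, *_ = np.linalg.lstsq(X, magnitude, rcond=None)
recovered = X @ w

# The low-dimensional projection preserves the ordering of the
# latent variable, i.e. its relational (magnitude) structure.
assert np.all(np.diff(recovered) > 0)
```

In this caricature, the learned axis plays the role of the shared low-dimensional format: any set of stimuli carrying the same latent magnitude structure can be mapped onto it, which is the sense in which such a format supports generalisation.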

This core work is supplemented by numerous projects (either detailed in the work-package, or growing from new ideas) that focus on reinforcement learning (Juechems et al Neuron 2019; Juechems et al TICS 2019; Juechems et al PNAS, under revision), as well as major reviews (Saxe et al, under review at Nature Reviews Neuroscience) and numerous additional projects that deal more directly with decision-making (Herce-Castanon et al, Nature Comms 2019; Cao et al, Neuron 2019; Luyckx et al, Cerebral Cortex 2020).
We think that we understand the neural geometry that supports structure learning. The next steps are 1) to validate, replicate and extend this work using more complex and challenging problems; and 2) to seek translational avenues. We are actively engaging with AI researchers interested in our theory. We are trying to develop analytic (exact) models that provide a deeper understanding of the link between network architectures and learning dynamics, with former postdoc Andrew Saxe (now a Wellcome Trust Henry Dale Fellow). We are exploring avenues for measuring the learning of abstractions (concept learning) in schools, in university admissions and in adult education. We hope that by the end of the project, we will have made major strides towards understanding how humans learn.