CORDIS - EU research results
Exploring the continuum between objects and scenes: The neural basis of object-constellation processing

Periodic Reporting for period 1 - FromObjectsToScenes (Exploring the continuum between objects and scenes: The neural basis of object-constellation processing)

Reporting period: 2019-07-01 to 2021-06-30

While much is known about the partially distinct neural bases of single-object and whole-scene processing, neural investigations of stimuli along the continuum between these two anchor points have been rare. This stands in contrast with our daily experience, in which we rarely encounter objects in complete isolation (e.g. a lone fork). Instead, we regularly find objects as part of a group whose multiple components bear conceptual relevance to each other (e.g. a fork and knife either side of a plate). Despite the importance of such ‘object constellations’ for human behaviour, we presently know remarkably little about how they are represented in the brain. The goal of the project is to address this substantial gap in our understanding of the intermediate representational space between objects and scenes.
By bringing advanced neurocognitive methods to bear on the question of how the brain processes object constellations (OCs), I aim to provide the first systematic investigation of how the brain represents familiar, meaningful configurations of objects. Results will shed new light on the statistical regularities the brain is specialised to exploit, and advance our understanding of the intermediate representational space between single objects and whole scenes. The project’s twin objectives centre on revealing the neural representations of object constellations in both space and time.
Over the course of the project’s duration, I have conducted a systematic investigation into the temporal dynamics underlying visual processing of familiar constellations of objects. Since the project’s commencement, I have designed, conducted, and delivered four large-scale electroencephalography (EEG) studies probing the neural representation of object constellations. In practice, this amounted to collecting continuous brain recordings from over 150 research participants, together with corresponding behavioural response data. Together, these temporal investigations have significantly advanced our understanding of the human brain’s sensitivity to statistical regularities in the visual environment, augmenting our view of the ways in which the visual system is organised to support efficient high-level recognition. A summary of several key findings from this research follows:
The first key finding of the project is that different high-level dimensions along which objects relate to each other in natural environments can influence object perceptibility in an interactive manner. In a study published in Cerebral Cortex (Quek & Peelen, 2020), I demonstrated that a neural signal of contextual integration (i.e. the semantic relationship between object identities) is larger when the objects appear in the typical spatial configuration in which they are encountered in real life. This indicates that the spatial configuration of the objects facilitated the extraction of their contextual association – that is, that the two high-level dimensions along which the objects can relate to each other are jointly encoded in the brain. This high-level integration signal was evident approximately 320 ms after stimulus onset (Fig 1). This is a novel finding that sharply constrains the degree to which theories of visual recognition can treat scenes and objects as dissociable stimulus classes.
A second key outcome is the production of a detailed neural timecourse for object-constellation processing. Recording EEG while observers viewed custom object-constellation stimuli arrayed either to respect or to violate their typical spatial positioning (Fig 2), I showed that the neural response to object constellations contains information about the configural properties of the display from around 80 ms onward. Neural decoding analyses showed that spatial typicality information was maintained until around 700 ms, but only when displays appeared in an upright orientation. These data suggest that the visual system codes for the typicality of object arrangements in a way that goes beyond merely low-level distinctions between typical and atypical displays, and that this higher-level signal arises around 300-400 ms post stimulus onset. These data have now been presented to both European and international audiences at a variety of invited research presentations; a manuscript is forthcoming.
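The time-resolved decoding approach referred to above can be illustrated with a minimal sketch: a separate classifier is trained at each time point of the EEG epoch, and the cross-validated accuracy curve reveals when a condition (e.g. typical vs atypical arrangement) becomes decodable. All data below are synthetic, and the epoch counts, channel counts, and classifier choice are illustrative assumptions rather than the project's actual analysis pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Assumed dimensions for illustration: 120 epochs, 32 channels, 50 time points
n_epochs, n_channels, n_times = 120, 32, 50
X = rng.standard_normal((n_epochs, n_channels, n_times))
y = rng.integers(0, 2, n_epochs)  # 0 = typical, 1 = atypical arrangement

# Inject a class difference at later time points to mimic a decodable signal
X[y == 1, :, 30:] += 0.8

# Fit an independent classifier at each time point; the resulting accuracy
# curve shows *when* the condition becomes decodable from the neural response.
accuracy = np.empty(n_times)
clf = LogisticRegression(max_iter=1000)
for t in range(n_times):
    accuracy[t] = cross_val_score(clf, X[:, :, t], y, cv=5).mean()
```

With real data, the accuracy curve would hover at chance before stimulus information reaches the recorded signal and rise above chance once the contrast of interest is represented; comparing such curves across conditions (e.g. upright vs inverted displays) is what licenses claims about when and for how long typicality information is maintained.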

A final key result of the project is the demonstration that the human visual system integrates information from concurrent objects to infer their real-world size, which here serves as a proxy for high-level scene understanding. Using custom object-silhouette stimuli (related object pairs and single objects), I showed in a behavioural study that observers were significantly faster and more accurate at discerning real-world size for pairs of related objects than for those same object elements presented alone. A corresponding EEG experiment in 41 observers indicated that the neural correlate of this effect (i.e. a neural representation of real-world size) was present from around 200 ms post stimulus onset (Fig 3). Crucially, this representation of real-world size did not arise when the same objects appeared alone. Taken together, these results suggest that the visual system does combine information from multiple object sources to guide scene understanding, and that viewing two objects at once gives rise to a qualitatively different high-level representation than that evoked by the same objects viewed in isolation. A manuscript is currently in preparation for submission to the Journal of Neuroscience. Findings from this research package have already been disseminated at the 2020 European Conference for Visual Perception and the University of NSW Virtual Workshop for Expectation, Perception, & Cognition 2021.
Despite their ubiquity in daily life, until recently we knew very little about how the brain represents object constellations (e.g. a knife and fork either side of a plate). Instead, extant research has focused on the neural basis of processing individual objects, or else whole scenes – resulting in a substantial gap in our understanding of the intermediate representational space between these two endpoints. This project targets that gap by systematically investigating the under-explored possibility of a neural continuum between object and scene perception.

The project has made a key contribution to the scientific knowledge base by providing one of the first investigations into the temporal dynamics that underlie visual processing of familiar configurations of objects (Packages 1-3). This represents a critical departure from the long tradition of studying the neural timecourse of isolated object processing, and from the handful of behavioural and fMRI-based studies that have examined whether integrative processing occurs for object pairs. Together, these temporal investigations have advanced our understanding of the conceptual regularities the brain is (and is not) sensitive to, paving the way for future research to consider other higher-order relationships that could facilitate information ‘chunking’ in the service of processing efficiency. By expanding our knowledge of how the brain exploits statistical regularities to enhance processing efficiency, this project has advanced our view of how the visual system is organised to support high-level recognition.
Fig 1: Spatial relation processing over time.
Fig 2: Example object-constellation variants.
Fig 3: Real-world size representation over time.