Periodic Reporting for period 1 - FromObjectsToScenes (Exploring the continuum between objects and scenes: The neural basis of object-constellation processing)
Reporting period: 2019-07-01 to 2021-06-30
By bringing advanced neurocognitive methods to bear on the question of how the brain processes object constellations (OCs), I aim to provide the first systematic investigation of how the brain represents familiar, meaningful configurations of objects. The results will shed new light on the statistical regularities the brain is specialised to exploit, and advance our understanding of the intermediary representational space between single objects and whole scenes. The project's twin objectives centre on revealing the neural representations of object constellations in both space and time.
The first key finding of the project is that the different high-level dimensions along which objects relate to each other in natural environments can influence object perceptibility in an interactive manner. In a study published in Cerebral Cortex (Quek & Peelen, 2020), I demonstrated that a neural signal of contextual integration (i.e. the semantic relationship between object identities) is larger when the objects appear in the typical spatial configuration in which they are encountered in real life. This indicates that the spatial configuration of the objects facilitated the extraction of their contextual association – that is, that the two high-level dimensions along which the objects can relate to each other are jointly encoded in the brain. This high-level integration signal was evident approximately 320ms after stimulus onset (Fig 1). This novel finding sharply constrains the degree to which theories of visual recognition can treat scenes and objects as dissociable stimulus classes.
A second key outcome is the production of a detailed neural timecourse for object constellation processing. Recording EEG while observers viewed custom object constellation stimuli that were arrayed either to respect or to violate their typical spatial positioning (Fig 2), I showed that the neural response to object constellations carries information about the configural properties of the display from around 80ms onward. Neural decoding analyses showed that spatial typicality information was maintained until around 700ms, but only when displays appeared in an upright orientation. These data suggest that the visual system codes for the typicality of object arrangements in a way that goes beyond mere low-level distinctions between typical and atypical displays, and that this signal arises around 300-400ms post stimulus onset. These data have been presented to both European and international audiences at a variety of invited research presentations; a manuscript is forthcoming.
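The time-resolved decoding approach referred to above trains a classifier independently at each timepoint of the EEG epoch and tracks when condition information first becomes, and remains, decodable. The sketch below illustrates that general logic on simulated data; the array shapes, classifier choice, effect size, and injected effect latency are all assumptions for illustration, not the project's actual pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Simulated EEG epochs: trials x channels x timepoints.
# Labels: 0 = typical spatial configuration, 1 = atypical configuration.
n_trials, n_channels, n_times = 80, 32, 50
X = rng.standard_normal((n_trials, n_channels, n_times))
y = rng.integers(0, 2, n_trials)

# Inject a condition difference at later timepoints only, mimicking a
# typicality signal that emerges after an early latency (values invented).
X[y == 0, :, 30:] += 0.8

def decode_timecourse(X, y, cv=5):
    """Cross-validated decoding accuracy at each timepoint separately."""
    scores = np.empty(X.shape[-1])
    for t in range(X.shape[-1]):
        clf = LogisticRegression(max_iter=1000)
        scores[t] = cross_val_score(clf, X[:, :, t], y, cv=cv).mean()
    return scores

scores = decode_timecourse(X, y)
# Accuracy hovers near chance (~0.5) early and rises above chance
# only at the timepoints where the simulated signal is present.
```

Plotting `scores` against the epoch's time axis yields the kind of decoding timecourse from which onset and offset latencies (e.g. the ~80ms and ~700ms figures above) are read off.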
A final key result of the project is the demonstration that the human visual system integrates information from concurrent objects to infer their real-world size, which here serves as a proxy for high-level scene understanding. Using custom object silhouette stimuli (related object pairs and single objects), I showed in a behavioural study that observers were significantly faster and more accurate in judging real-world size for pairs of related objects than for the same object elements presented alone. A corresponding EEG experiment in 41 observers indicated that the neural correlate of this effect (i.e. a neural representation of real-world size) was present from around 200ms post stimulus onset (Fig 3). Crucially, this representation of real-world size did not arise when the same objects appeared alone. Taken together, these results suggest that the visual system combines information from multiple object sources to guide scene understanding, and that viewing two objects at once gives rise to a qualitatively different high-level representation than that evoked by the same objects viewed in isolation. A manuscript is in preparation for submission to the Journal of Neuroscience. Findings from this research package have already been disseminated at the 2020 European Conference for Visual Perception and the University of NSW Virtual Workshop for Expectation, Perception, & Cognition 2021.
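The behavioural advantage described above is a within-subject comparison: each observer's responses to pairs are contrasted with their own responses to single objects. A minimal sketch of the kind of paired test involved is shown below, on simulated reaction times; the subject count, RT values, and effect size are invented for illustration and do not reflect the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated per-subject mean reaction times (seconds) for the real-world
# size judgement, in the related-pair vs. single-object conditions.
n_subjects = 30
rt_single = rng.normal(0.65, 0.05, n_subjects)
rt_pair = rt_single - rng.normal(0.04, 0.01, n_subjects)  # pairs ~40ms faster

# Paired t-statistic computed by hand: is the mean within-subject
# difference (pair minus single) reliably below zero?
diff = rt_pair - rt_single
t_stat = diff.mean() / (diff.std(ddof=1) / np.sqrt(n_subjects))
```

A large negative `t_stat` corresponds to reliably faster responses for object pairs, mirroring the direction of the reported behavioural effect.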
The project has made a key contribution to the scientific knowledge base by providing one of the first investigations into the temporal dynamics that underlie visual processing of familiar configurations of objects (Packages 1-3). This represents a critical departure from the long tradition of studying the neural timecourse of isolated object processing, and from the handful of behavioural and fMRI-based studies that have examined whether integrative processing occurs for object pairs. Together, these temporal investigations have advanced our understanding of the conceptual regularities the brain is (and is not) sensitive to, paving the way for future research to consider other higher-order relationships that could facilitate information ‘chunking’ in the service of processing efficiency. By expanding our knowledge of how the brain exploits statistical regularities to enhance processing efficiency, this project has advanced our view of how the visual system is organised to support high-level recognition.