CORDIS - EU research results


Periodic Reporting for period 1 - FFvsFB-UHF-fMRI (The role of layer-specific population receptive field properties in visual recurrent processing)

Reporting period: 2022-10-01 to 2024-09-30

The brain is constantly presented with large amounts of ambiguous information: a given pattern of sensory signals can arise from many different real-world conditions. Yet, despite this ambiguity, the brain rapidly parses incoming signals into coherent and stable percepts. How does the brain resolve the inherent uncertainty of the world around it? A prominent theory proposes that sensory information is combined with expectations originating from a mental model of the environment, which is built and refined through experience. Vision can thus be understood as an inference process, where higher-order cognition guides the interpretation of sensory input. For instance, a Mooney face – a two-tone, minimally informative depiction of a face – is often unrecognizable until exposure to the original undistorted image enables identification. This ability to resolve sensory ambiguity using prior knowledge highlights the crucial role of feedback in shaping visual perception. Every feedforward sensory pathway is paralleled by a reciprocal feedback projection, yet the distinct roles and mechanisms of feedforward and feedback processing remain poorly understood.
A key organizing principle of the visual system is retinotopy, in which visual space is topographically mapped from the retina onto the cortex. Receptive fields (RFs) serve as the fundamental unit of this organization, traditionally viewed as fixed spatial filters. However, evidence from animal studies suggests that RFs are dynamic and modulated by feedback, making them a strong candidate for integrating feedforward sensory input with top-down expectations. This project investigated RFs as a mechanism for feedforward-feedback integration, focusing on how spatial context and prior knowledge influence early visual processing.
Using ultra-high field (UHF) fMRI with population receptive field (pRF) mapping, this work examined cortical layer-specific processing in the human brain. Complementary psychophysical experiments explored two expectation-related modulations: spatial context, studied via visual crowding, and prior knowledge, examined using Mooney images. By elucidating how feedforward and feedback signals are integrated in the brain, this work advances basic research in cognitive neuroscience while also informing applications in technology and healthcare. Potential impacts include improvements in biologically inspired neural network architectures, better diagnostic tools for conditions involving abnormal recurrent processing (such as schizophrenia or autism spectrum disorder), and novel strategies for vision rehabilitation.
The project consisted of three interrelated investigations, each focusing on a different aspect of pRF size modulation across cortical layers. The methods employed included behavioral testing and neuroimaging with UHF MRI, which enables submillimeter-resolution imaging. Since different cortical layers predominantly receive input from either feedforward or feedback channels, this approach allowed for the dissociation of their respective contributions to pRF properties.

The first investigation examined how pRF size varies across cortical depth in early visual areas. Participants viewed a standard pRF mapping stimulus (a bar sweeping across the visual field) while fMRI data were acquired. A state-of-the-art analysis workflow was developed, resulting in a Python package for cortical surface-based pRF modeling. This study successfully replicated previous findings on depth-dependent pRF size variation in primary visual cortex (V1) and extended them to additional visual areas, providing critical validation for UHF fMRI and advancing knowledge of feedforward and feedback retinotopic coding. This work has led to several international conference presentations and a manuscript in preparation; in addition, the Python package was validated on multiple datasets, including standard-resolution MRI data, resulting in a second manuscript (submitted, with a preprint available online).
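To illustrate the modeling approach described above: pRF mapping conventionally fits an isotropic 2D Gaussian to each voxel, choosing the center and size whose predicted response to the bar stimulus best matches the measured timecourse. The sketch below shows a minimal grid-search version of this idea; all names and the toy bar stimulus are illustrative and are not taken from the project's actual Python package.

```python
import numpy as np

def gaussian_prf(x0, y0, sigma, xg, yg):
    """Isotropic 2D Gaussian pRF over a visual-field grid (degrees)."""
    return np.exp(-((xg - x0) ** 2 + (yg - y0) ** 2) / (2.0 * sigma ** 2))

def fit_prf(bold, stim, xg, yg, centers, sigmas):
    """Exhaustive grid search: return the (x0, y0, sigma) whose predicted
    timecourse correlates best with the measured BOLD signal.

    stim: (n_timepoints, ny, nx) binary aperture movie of the bar stimulus.
    """
    flat_stim = stim.reshape(stim.shape[0], -1)  # (time, pixels)
    best, best_r = None, -np.inf
    for x0 in centers:
        for y0 in centers:
            for s in sigmas:
                # predicted response: overlap of aperture with the pRF
                pred = flat_stim @ gaussian_prf(x0, y0, s, xg, yg).ravel()
                r = np.corrcoef(pred, bold)[0, 1]
                if r > best_r:
                    best, best_r = (x0, y0, s), r
    return best
```

In practice the prediction is additionally convolved with a hemodynamic response function and the grid search is refined by nonlinear optimization; both steps are omitted here for brevity.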

The second investigation explored how spatial context modulates pRF properties using the visual crowding paradigm. In crowding, a target’s features become indistinguishable when flanked by nearby objects, an effect linked to feature pooling within RFs in early visual areas. Surprisingly, adding more flankers can improve target discrimination (“uncrowding”), implicating feedback from higher visual areas responsible for the processing of global form. A novel psychophysics paradigm was developed to investigate the roles of local and global spatial context in target feature identification. Participants judged the tilt of a target grating relative to flankers across conditions manipulating local and global spatial context. Results showed that smaller tilt offsets reduced performance (decreased accuracy, longer reaction times), while additional flankers improved performance. Task-dependent pRF mapping with the crowding stimulus, synchronized with fMRI acquisition, allowed investigation of pRF size changes under different spatial context conditions across cortical depth in early visual areas. Additionally, multivariate pattern analysis was employed to determine at which stage in the visual hierarchy target tilt decoding breaks down. This study extended the first by investigating cortical depth-dependent pRF size profiles as a function of local and global spatial context, further supporting RFs as a key mechanism in feedforward-feedback integration.
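Behavioral data from a design like this are typically summarized per cell of the two-factor layout (tilt offset x number of flankers), yielding the accuracy and reaction-time pattern reported above. A minimal sketch of that aggregation step, assuming plain NumPy trial arrays (the variable names are illustrative, not from the project's analysis code):

```python
import numpy as np

def condition_summary(offsets, flankers, correct, rts):
    """Mean accuracy and mean reaction time for each cell of a
    two-factor design: tilt offset (deg) x number of flankers."""
    summary = {}
    for off in np.unique(offsets):
        for nf in np.unique(flankers):
            mask = (offsets == off) & (flankers == nf)
            if mask.any():  # skip design cells with no trials
                summary[(off, nf)] = (correct[mask].mean(), rts[mask].mean())
    return summary
```

Plotting accuracy against tilt offset separately for each flanker count then exposes both effects described above: the performance drop at small offsets (crowding) and the improvement with additional flankers (uncrowding).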

The third investigation examined how prior knowledge influences pRF properties using Mooney images. Mooney images are difficult to recognize until exposure to the original images from which they are derived enables identification. A novel Mooney image set was developed, spanning eight object categories (e.g. human faces, animals, tools). An online experiment (using Gorilla and Prolific) assessed recognition performance pre- and post-disambiguation. Results showed that prior object knowledge significantly improves recognition, with category-dependent variations. Human faces were relatively identifiable pre-disambiguation, showing minor improvement, whereas nonhuman categories (e.g. animal bodies) saw substantial performance gains. Further analysis tested the effects of repeated exposures to Mooney images before and after disambiguation. While repeated pre-disambiguation exposure had minimal benefits, prior knowledge had a pronounced effect, and multiple post-disambiguation exposures did not further improve performance. These findings highlight the dominance of top-down processing in object recognition. The image dataset is being prepared for open-access publication. As in the second investigation, the paradigm was adapted for task-dependent pRF mapping, with Mooney images presented in synchrony with fMRI acquisition. The goal here was to conduct feature-based pRF modeling to examine how prior knowledge influences RF properties across cortical depth and visual hierarchy. Together, these investigations provide novel insights into the dynamic nature of RFs and their role in integrating feedforward and feedback signals in the human visual system.
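Two-tone images of the kind used here are conventionally produced by Gaussian-smoothing a grayscale photograph and binarizing it at a luminance threshold such as the median. The sketch below illustrates that general technique only; it is an assumption about the standard procedure, not the pipeline used to create the project's image set.

```python
import numpy as np

def mooney(gray, sigma=2.0):
    """Two-tone ('Mooney-style') transform: Gaussian-smooth a grayscale
    image, then binarize at its median luminance (0 = black, 1 = white)."""
    # build a 1D Gaussian kernel and apply it separably along both axes
    r = int(3 * sigma)
    k = np.exp(-np.arange(-r, r + 1) ** 2 / (2.0 * sigma ** 2))
    k /= k.sum()
    smooth = np.apply_along_axis(
        lambda col: np.convolve(col, k, mode="same"), 0, gray)
    smooth = np.apply_along_axis(
        lambda row: np.convolve(row, k, mode="same"), 1, smooth)
    return (smooth >= np.median(smooth)).astype(np.uint8)
```

The smoothing removes the fine-grained shading cues that would otherwise survive thresholding, which is what makes the resulting two-tone images hard to recognize without prior exposure to the originals.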
Beyond empirical research, the project contributed to theoretical neuroscience. Inspired by a summer school, a publication was written on computational complexity barriers in brain-behavior mapping, emphasizing the need for philosophical frameworks in neuroscience.
This project makes substantial contributions to the state of the art. First, it advances layer-specific fMRI, offering validation and methodological innovations, with open-science practices promoting broad adoption. Second, it leverages cutting-edge imaging techniques to elucidate how spatial context and prior knowledge modulate RFs, refining our understanding of feedforward and feedback pathways. Finally, by integrating neuroscience, psychophysics, and computational modeling, this work presents an interdisciplinary approach to understanding recurrent processing in the brain, with implications for artificial vision systems, clinical diagnostics, and neurorehabilitation.