
Personalized priors: How individual differences in internal models explain idiosyncrasies in natural vision

Periodic Reporting for period 1 - PEP (Personalized priors: How individual differences in internal models explain idiosyncrasies in natural vision)

Reporting period: 2023-01-01 to 2025-06-30

The brain is widely viewed as a predictive system. In the context of perception, this view posits that the brain makes sense of the visual world by comparing sensory inputs to internally generated models of what the world should look like. Despite this emphasis on internal models, their key properties are not well understood: we currently do not know what exactly our internal models contain, or how these contents vary systematically across individuals. PEP will address these critical knowledge gaps by (i) developing techniques to quantify the contents of internal models and (ii) using information about these contents to capture individual differences in perception.
Focusing on natural vision, we will use creative drawing methods to characterize the contents of internal models. By analyzing drawings of real-world scenes, we will distill the contents of individual people’s internal models. These insights will form the basis for a comprehensive cognitive, neural, and computational investigation of natural vision at the individual level. The program is structured into four work packages: First, we will establish how individual differences in the contents of internal models explain the efficiency of scene vision at the behavioral and neural levels (WP1). Second, we will harness variations in people’s drawings to determine the critical features of internal models that guide scene vision (WP2). Third, we will develop computational models capable of predicting scene perception at the individual level from participants’ drawings of scenes (WP3). Finally, we will systematically investigate how individual differences in internal models co-vary with individual differences in visual and linguistic experience, functional brain architecture, and scene exploration (WP4).
PEP will illuminate natural vision from a new angle – starting from a characterization of individual people’s internal models of the world. Through this change of perspective, we can make true progress in understanding what exactly is predicted in the predictive brain, and how differences in people’s predictions relate to differences in their perception of the world.
In the first two years, we have made substantial progress. Most importantly, we successfully established a suitable experimental routine for (i) obtaining drawings as descriptors of individual participants’ internal models of individual scenes and (ii) standardizing these drawings, either by converting them into hand-crafted 3D renders of the scenes or by using computational models to automatically convert them into a photorealistic format.
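The report does not name the computational models used for the automatic drawing-to-photo conversion. As a minimal sketch only, the snippet below assumes an off-the-shelf diffusion image-to-image pipeline from the Hugging Face diffusers library; the checkpoint, file names, prompt, and strength value are illustrative placeholders, not the project’s actual setup.

# Minimal sketch of a "drawing -> photorealistic scene" conversion step,
# assuming the Hugging Face `diffusers` library. Checkpoint, prompt, and
# strength are illustrative assumptions, not the project's actual choices.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # hypothetical checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# A participant's line drawing of a scene, resized to the model's input size.
drawing = Image.open("participant_kitchen_drawing.png").convert("RGB").resize((512, 512))

# `strength` controls how far the output may deviate from the drawing:
# lower values preserve the drawn layout, higher values add more detail.
photo_like = pipe(
    prompt="a photograph of a kitchen",
    image=drawing,
    strength=0.6,
    guidance_scale=7.5,
).images[0]

photo_like.save("participant_kitchen_photorealistic.png")

The appeal of such a pipeline is that the drawn scene layout is retained while surface detail is filled in, so the standardized output can be compared directly with photographs of real scenes.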
We also made critical progress in each of the four work packages. In WP1, we have already completed all experimental work. Our results so far demonstrate that individual differences in internal models predict differences in scene perception in categorization tasks, and that these differences are accompanied by enhanced and idiosyncratic neural activations in visual cortex when inputs align well with internal models. In WP2, we established all key paradigms and are currently collecting behavioral data that will illuminate which dimensions are critical for comparing visual inputs to internal models of the world. In WP3, we have tested different computational methods for quantifying similarities among drawings, among images, and between drawings and images, suitable for obtaining predictions of individual scene perception; in ongoing work, we are benchmarking these methods with larger datasets. In WP4, we have completed the first behavioral, fMRI, and eye-tracking experiments. The behavioral work shows that the contents of typical scene drawings can be modelled by combining measures of real-world visual scene statistics with linguistic similarities among object and scene concepts. We further show that inter-subject similarities in internal models predict inter-subject similarities in scene categorization and in scene ratings on multiple dimensions. In ongoing analyses of fMRI and eye-tracking data, we are testing whether inter-subject similarities in internal models similarly predict inter-subject similarities in visual cortex activations and scene exploration patterns.
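As a minimal sketch of the kind of inter-subject similarity analysis described here, the snippet below builds subject-by-subject similarity matrices from drawing-derived features and from behavioral profiles and correlates their structure. The data are synthetic, and the choice of feature representation, distance metric, and correlation statistic are assumptions for illustration, not the project’s actual pipeline.

# Sketch of a second-order (inter-subject) similarity analysis with synthetic
# data. Feature vectors stand in for whatever embedding the drawings are
# represented in (e.g. deep-network activations); the extractor is assumed.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_subjects, n_scenes, n_features = 20, 30, 128

# One feature vector per subject and scene, derived from their drawings.
drawing_features = rng.normal(size=(n_subjects, n_scenes, n_features))
# One behavioral profile per subject (e.g. categorization performance per scene).
behaviour = rng.normal(size=(n_subjects, n_scenes))

def intersubject_similarity(data):
    """Subject-by-subject similarity matrix (1 minus correlation distance)."""
    flat = data.reshape(data.shape[0], -1)
    return 1 - squareform(pdist(flat, metric="correlation"))

model_sim = intersubject_similarity(drawing_features)
behaviour_sim = intersubject_similarity(behaviour)

# Compare the two inter-subject similarity structures (upper triangles only).
iu = np.triu_indices(n_subjects, k=1)
rho, p = spearmanr(model_sim[iu], behaviour_sim[iu])
print(f"Model-behaviour inter-subject correlation: rho={rho:.2f}, p={p:.3f}")

A positive correlation in this scheme would indicate that participants whose internal models resemble each other also behave more similarly, which is the logic behind the inter-subject predictions reported above.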
Our work goes beyond the state of the art in several ways. First, we have established a novel way of quantifying the contents of internal models, using drawings to project participants’ expectations about the structure of the world into a visible format. To utilize the information contained in the drawings, we developed methods for standardizing them into 3D renders or converting them into a photorealistic format. These methods allow us to readily use representations of the drawings as experimental stimuli and enable us to evaluate them with deep learning models. Second, the results obtained with these methods so far invite a new way of thinking about individual differences in perception. In the absence of suitable predictors for individual differences in perceptual experiments or neuroimaging studies, researchers have mostly treated inter-individual variation as noise. Our approach provides a straightforward way of predicting how perception should differ (starting from simple line drawings), thereby pushing the boundaries of how much variance we can explain in behavioral or neural data that differs across participants. Third, our work on the neural correlates of individual differences opens new avenues for understanding predictive processing in cortex, as it delineates when and where representations change as a function of the internal models that individual participants hold. Finally, in the longer run, we expect our findings to have an impact on other domains: our method can be adapted for studying alterations of predictive processing in clinical contexts or the maturation of internal models across development.