Periodic Reporting for period 1 - PEP (Personalized priors: How individual differences in internal models explain idiosyncrasies in natural vision)
Reporting period: 2023-01-01 to 2025-06-30
Focusing on natural vision, we will use creative drawing methods to characterize internal models: by analyzing drawings of real-world scenes, we will distill the contents of individual people’s internal models. These insights will form the basis for a comprehensive cognitive, neural, and computational investigation of natural vision at the individual level. The program is structured into four work packages: First, we will establish how individual differences in the contents of internal models explain the efficiency of scene vision, at both the behavioral and neural levels (WP1). Second, we will harness variations in people’s drawings to determine the critical features of internal models that guide scene vision (WP2). Third, we will develop computational models capable of predicting scene perception at the individual level from participants’ drawings of scenes (WP3). Finally, we will systematically investigate how individual differences in internal models co-vary with individual differences in visual and linguistic experience, functional brain architecture, and scene exploration (WP4).
PEP will illuminate natural vision from a new angle – starting from a characterization of individual people’s internal models of the world. Through this change of perspective, we can make true progress in understanding what exactly is predicted in the predictive brain, and how differences in people’s predictions relate to differences in their perception of the world.
We have also made substantial progress in each of the four work packages. In WP1, we have already completed all experimental work. Our results thus far demonstrate that individual differences in internal models predict differences in scene perception in categorization tasks, and that these differences are accompanied by enhanced and idiosyncratic neural activations in visual cortex when inputs align well with internal models. In WP2, we established all key paradigms and are currently collecting behavioral data that will illuminate which dimensions are critical for comparing visual inputs to internal models of the world. In WP3, we have tested different computational methods for quantifying similarities among drawings, among images, and across drawings and images, suitable for predicting scene perception at the individual level. In ongoing work, we are benchmarking these methods with larger datasets. In WP4, we have completed the first behavioral, fMRI, and eye-tracking experiments. The behavioral work shows that the contents of typical scene drawings can be modeled by combining measures of real-world visual scene statistics with linguistic similarities among object and scene concepts. We further show that inter-subject similarities in internal models predict inter-subject similarities in scene categorization and in scene ratings along multiple dimensions. In ongoing analyses of fMRI and eye-tracking data, we are testing whether inter-subject similarities in internal models similarly predict inter-subject similarities in visual cortex activations and scene exploration patterns.
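The inter-subject analyses in WP3 and WP4 follow a second-order similarity logic: pairwise similarities between participants’ drawing-derived internal models are compared against pairwise similarities in their behavior. The sketch below illustrates this logic in Python; the variable names and random placeholder data are illustrative assumptions, not the project’s actual features or pipeline.

    import numpy as np
    from scipy.spatial.distance import pdist, squareform
    from scipy.stats import spearmanr

    # All names and data below are hypothetical placeholders for illustration.
    rng = np.random.default_rng(0)
    n_subjects = 20
    drawing_features = rng.normal(size=(n_subjects, 128))  # per-subject drawing embeddings
    behavior = rng.normal(size=(n_subjects, 50))           # per-subject categorization responses

    def intersubject_similarity(data: np.ndarray) -> np.ndarray:
        """Subject-by-subject similarity matrix (1 minus correlation distance)."""
        return 1.0 - squareform(pdist(data, metric="correlation"))

    model_sim = intersubject_similarity(drawing_features)  # similarity of internal models
    behavior_sim = intersubject_similarity(behavior)       # similarity of scene categorization

    # Second-order comparison of the two similarity structures, using only
    # the lower triangles and excluding the trivial diagonal.
    tril = np.tril_indices(n_subjects, k=-1)
    rho, p = spearmanr(model_sim[tril], behavior_sim[tril])
    print(f"Inter-subject similarity correlation: rho = {rho:.3f}, p = {p:.3f}")

In this scheme, the drawing features would come from whichever similarity-quantification method the WP3 benchmarking favors, and the same second-order comparison could be applied to the fMRI and eye-tracking similarity matrices analyzed in WP4.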