
Vision at a second glance - how memories interact with and depend on information processing in the human visual cortex

Periodic Reporting for period 1 - VisionAtSecondGlance (Vision at a second glance - how memories interact with and depend on information processing in the human visual cortex)

Reporting period: 2017-09-01 to 2019-08-31

Previous neuroscientific research has shown that the human brain can use previous experience to process image information more effectively with less neural activation in the visual cortex. How the human brain accomplishes this feat remains a scientific puzzle, whose solution could lead to important insights into how the brain integrates memories and current sensory experiences. Such insights would shed light on the fundamental principles of brain function and could inspire new approaches for developing brain-inspired AI systems. To this end, we have investigated which neural mechanisms can best explain the fact that stimulus repetition reduces brain responses in the visual cortex – a phenomenon referred to as repetition suppression.

The second aspect of the research program relates to the phenomenon that we are better able to remember images when we actively process the meaning of the image – a mental process referred to as ‘deep encoding’. The ambition of this program is to unravel the so far unknown neural mechanisms of deep encoding and thereby shed light on how the integration of sensory information into existing knowledge aids the formation of long-lasting memories. This insight will lead to a better understanding of the fundamental processes underlying the formation of memories and could inspire the development of new techniques for boosting human memory function.
We used computational models to simulate how a range of possible neural mechanisms of repetition suppression would manifest themselves in brain response patterns measured with functional magnetic resonance imaging (fMRI). Subsequently, we determined which of these models best explained the observed effects of stimulus repetition on brain responses. This research showed that recent visual experience with the same stimulus leads to a down-scaling of neuronal tuning curves, and that this effect is most pronounced in neurons selectively tuned to the repeated stimulus. The conceptual breakthrough realized by this study was communicated to the field via a publication in the high-impact journal Nature Communications.
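The stimulus-specific down-scaling account can be illustrated with a minimal simulation. The sketch below is not the study's actual model: the Gaussian tuning curves, the suppression factor, and all parameter values are illustrative assumptions. It shows the qualitative prediction that repetition reduces the summed (fMRI-like) population response most for neurons tuned to the repeated stimulus.

```python
import numpy as np

def tuning_curves(preferred, stimulus, width=20.0):
    """Gaussian tuning: response of each neuron to a given stimulus value."""
    return np.exp(-0.5 * ((preferred - stimulus) / width) ** 2)

# Hypothetical population of neurons tuned to orientations 0-180 degrees.
preferred = np.linspace(0.0, 180.0, 181)
stimulus = 90.0  # the repeated stimulus

first = tuning_curves(preferred, stimulus)

# Stimulus-specific down-scaling (illustrative): suppression is strongest
# for neurons whose preferred stimulus matches the repeated one.
suppression = 1.0 - 0.4 * tuning_curves(preferred, stimulus)
repeated = first * suppression

# Repetition reduces the summed population response (the fMRI-like signal)...
assert repeated.sum() < first.sum()
# ...and the reduction peaks at neurons tuned to the repeated stimulus.
reduction = first - repeated
assert preferred[np.argmax(reduction)] == stimulus
```

Varying which mechanism scales the responses (e.g. uniform scaling versus tuning sharpening) yields distinct predicted fMRI patterns, which is what allows the competing models to be compared against data.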

As a first step towards revealing the neural mechanisms of deep encoding of images, we have developed a method for measuring how similar images of objects are in terms of their meaning. We realized this by determining how often words describing the depicted objects co-occur in large text corpora (e.g. all text on Wikipedia). This enables us to assess the extent to which brain response patterns encode the meaning of the images we are looking at and to determine whether this measure of deep encoding is enhanced for subsequently remembered images.
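The co-occurrence idea can be sketched in a few lines. The toy corpus and the Jaccard-style normalisation below are illustrative assumptions, not the project's actual corpus or similarity measure; the point is that words naming objects from the same semantic context co-occur more often, yielding a graded semantic similarity between the depicted objects.

```python
from collections import Counter
from itertools import combinations

# Toy corpus standing in for a large text collection (e.g. Wikipedia);
# each entry is one document's words after tokenisation.
corpus = [
    "the cat chased the dog in the garden".split(),
    "a dog barked at the cat".split(),
    "the car stopped at the traffic light".split(),
    "a truck and a car collided on the road".split(),
]

# Count, per document, which words occur and which word pairs co-occur.
occ = Counter()
cooc = Counter()
for doc in corpus:
    words = set(doc)
    occ.update(words)
    cooc.update(frozenset(pair) for pair in combinations(sorted(words), 2))

def similarity(w1, w2):
    """Normalised co-occurrence (Jaccard-style) between two words."""
    shared = cooc[frozenset((w1, w2))]
    return shared / (occ[w1] + occ[w2] - shared)

# 'cat' and 'dog' share documents; 'cat' and 'car' do not.
assert similarity("cat", "dog") > similarity("cat", "car")
```

With a realistic corpus, such pairwise similarities form a semantic distance matrix over the stimulus images, which can then be compared against the similarity structure of measured brain response patterns.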

In addition, we have developed a behavioral paradigm that can efficiently measure how important thousands of low-level image features are for recognizing an image (e.g. recognizing images as depicting a cat or a dog; Alink & Charest, 2018, bioRxiv). Intriguingly, the data recorded with this method revealed that individuals with a greater number of autistic traits rely more on fine-grained image details, which provides evidence for the idea that autism is related to an enhanced ‘eye for detail’.
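The logic of such a feature diagnosticity paradigm can be sketched with a simulation. Everything below is a hypothetical stand-in for the actual method: an 8x8 feature grid, a random-reveal ("bubbles"-style) masking scheme, and a simulated observer whose recognition accuracy depends on how many diagnostic features are visible. Comparing which features were visible on correct versus incorrect trials recovers a diagnosticity map.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image" as an 8x8 grid of low-level features; suppose (hypothetically)
# that only the top-left quadrant is diagnostic for recognition.
size = 8
diagnostic = np.zeros((size, size))
diagnostic[:4, :4] = 1.0

def trial():
    """One trial: reveal a random subset of features and score recognition
    as more likely when diagnostic features happen to be visible."""
    mask = rng.random((size, size)) < 0.5            # visible features
    evidence = (mask * diagnostic).sum() / diagnostic.sum()
    correct = rng.random() < 0.1 + 0.8 * evidence    # simulated observer
    return mask, correct

# Accumulate feature visibility separately for correct and incorrect trials.
vis_correct = np.zeros((size, size)); n_correct = 0
vis_wrong = np.zeros((size, size)); n_wrong = 0
for _ in range(5000):
    mask, correct = trial()
    if correct:
        vis_correct += mask; n_correct += 1
    else:
        vis_wrong += mask; n_wrong += 1

# Diagnosticity map: features visible more often on correct trials.
diagnosticity = vis_correct / n_correct - vis_wrong / n_wrong
assert diagnosticity[:4, :4].mean() > diagnosticity[4:, 4:].mean()
```

Comparing such maps across observers is what makes it possible to ask whether some individuals (e.g. those with more autistic traits) rely on finer-grained image details than others.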
Our computational approach for revealing the most likely neural mechanisms underlying stimulus-repetition-related brain response attenuation enabled us to conclude that this effect is most likely driven by a down-scaling of neuronal tuning curves. This finding not only reveals a plausible neural mechanism for how experiences can reduce brain responses, but also has a methodological impact, as it emphasizes the importance of formal modeling for bridging neuronal and fMRI levels of investigation. The research provides important insights into how the human brain uses previous experience to optimize perceptual processing - which in the future could translate into new approaches for improving AI systems.

In addition, this project has resulted in the development of two novel psychophysical techniques: one for quantifying the semantic relationship between images, and the feature diagnosticity mapping technique (Alink & Charest, 2018), which enables one to reveal the low-level features most critically important for image recognition. The latter technique has provided the clinically relevant insight that neurotypical individuals with a greater number of autistic traits rely more on visual details when recognizing images – which aids our basic understanding of autism spectrum disorder and could potentially inspire novel treatment strategies in the future.
fMRI paradigms