Template 2.0: Depicting the picture in your head

Final Report Summary - TEMPLATE 2.0 (Template 2.0: Depicting the picture in your head)

Visual information comes to us in a vast parallel stream, representing more than 180 degrees of the world around us. For the human organism to be adaptive, it must make strategic selections from this input on the basis of its goals, needs, and motivations. As we have only one pair of eyes, one pair of hands, and can move our feet in only one direction, we need to selectively attend to the objects that we wish to act upon. To this end, the human cognitive system applies filters, or “templates”, to the perceptual input. For visual information, these templates are assumed to reside in visual working memory, from which they interact with visual processing by driving attention to relevant objects. This raises a number of questions, each of which has seen considerable progress towards being answered in this project.

The first question is whether observers actively prepare such templates prior to the expected visual information, or whether they react to the actual visual input. Using eye-tracking, electroencephalographic (EEG), and functional magnetic resonance imaging (fMRI) techniques, we have developed methods to measure the activation of templates in people’s minds while they are not yet seeing anything. Using these methods, we could measure when such templates become active – indeed prior to visual processing, and strategically timed to the expected input. This shows that observers have considerable control over template activation. This was corroborated by the involvement of the frontal cortex, the brain’s control center, in initiating templates.

A second question is how observers switch templates when the task changes. While vision scientists typically study single tasks in the lab, in real life we typically perform sequences of visual actions, such as first finding a coffee bar and then the train platform. In this line of research we uniquely investigated task sequences, in which observers did two tasks in a row. Using EEG, we were able to track the mental swapping of templates in visual sensory cortex between the two tasks. We again found that frontal cortical signals control this swap between templates, through oscillatory signals.

Third, we can ask how these templates are represented in the brain. When doing task sequences, we activate certain representations in memory that are important for the currently ongoing task (e.g. what coffee bars look like), but we must also remember things for later that do not drive our current perception (e.g. that we need to go to platform 5 later on). In other words, the brain needs to separate memory for the now from memory for the future. We looked at how brain activity differed depending on the purpose of the memory. We found that memory for prospective visual tasks is represented in diametrically opposite patterns of activity compared to memories for currently ongoing visual tasks. It appears as if the brain momentarily suppresses memories it keeps for future use.

A fourth question is what the capacity of the attentional filtering system is. How many templates can be active at once? Can you look for two things at the same time? Our work has shown that this capacity is severely limited, beyond the already limited capacity of visual memory. This results in costs in selecting the relevant visual information when the selection goal changes from one instance to the next. However, we also found evidence that such selection costs depend on how much control observers can exert over selection. Specifically, switch costs disappear when observers can freely choose what to look for.

Taken together, the project has revealed a highly dynamic and highly flexible cognitive system that drives human visual attention.