CORDIS - EU research results

Emerging Visual Expectation in the Brain

Periodic Reporting for period 2 - EXPECTATION (Emerging Visual Expectation in the Brain)

Reporting period: 2020-01-01 to 2020-12-31

Because sensory input is often ambiguous, the brain combines this input with prior expectations to disambiguate it and make a reasonable guess about what is present in the world. The objective of this project was to study the impact of prior expectations on basic visual perception. The main conclusions were that (1) prior expectations (i.e. statistical dependencies between the color and orientation of tilted lines) need to be learned over a timescale that exceeds what can reasonably be covered in a laboratory experiment. There was no evidence of statistical learning after 12 days, and it is possible that these regularities are too complex to be learned at all. (2) When statistical regularities were introduced in a behaviorally irrelevant (but disruptive) stimulus, healthy human subjects did not use this information to improve their performance on the relevant task (by cancelling out the disruptive but expected information).

A new line of research examined how concurrent visual inputs and visual memories can be represented in the brain. The main conclusions of this work were that (1) early visual cortex is involved in representing both externally viewed and internally maintained information, and (2) the brain uses multiple cortical regions, and multiple formats, to store remembered information: early visual areas represent it in a more pictorial manner, while "higher-level" areas represent it more abstractly.
Two of the three major hypotheses in the grant have been explored in depth. The first aim was to learn more about the time course over which visual expectations are built up. In some cases, a lifetime of visual experience is used to interpret visual information in a certain way. One example is the "hollow face illusion" (Figure 1), where prior experience makes a face always appear convex, even if in reality it is concave (hollow). But visual expectations can be built over shorter timescales as well. For example, a radiologist looking at an x-ray must exploit his or her training about which visual attributes are expected in the presence of a tumor in order to save someone's life. In the lab we mimicked this shorter timescale by associating lines of certain colors (for example blue or red) with certain orientations (for example vertical or horizontal). There were 4 colors and 4 orientations, and the associations were far from perfect (for example, "red" was horizontal half of the time, but on the other half of trials it could be one of the other three orientations). The question was how people learn these relatively complex associations, and how learning them affects the speed and accuracy of responses. Two different versions of this experiment were performed, and research participants did not learn these associations even after 12 days (performing the task 2 hours per day). This implies that learning relatively loose associations probably takes far longer than can reasonably be tested in the lab.
The second aim was to learn more about visual expectations when visual information is not directly relevant to behavior. For example, when driving in the rain, visual expectations about the rain blowing across your field of view from the upper-left to the lower-right could support the important task of suppressing this irrelevant information, letting you focus on the cars in front of you so you may return home safely.
In the lab we mimicked this situation by again associating the colors and orientations of lines shown on a screen, except that this time the lines were irrelevant to the task. The lines simply made it harder for people to focus on the other items they needed to attend to. Even when very obvious associations were presented (for example, red lines were always vertical), people were not able to exploit this regularity (i.e. by suppressing the irrelevant information) to become better at the task at hand (focusing on the relevant information).
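The contingency structure described above (each color paired with one orientation that it predicts imperfectly) can be sketched as a small trial generator. This is a minimal illustration, not the authors' actual experimental code; the specific color names, orientation values, and the 50% validity level are assumptions for the example:

```python
import random

# Hypothetical stimulus sets; the report specifies 4 colors and 4 orientations
COLORS = ["red", "blue", "green", "yellow"]
ORIENTATIONS = [0, 45, 90, 135]  # degrees from horizontal

# Hypothetical pairing: each color predicts one "associated" orientation
ASSOCIATED = dict(zip(COLORS, ORIENTATIONS))
P_ASSOCIATED = 0.5  # the association holds on half of trials

def draw_trial(color, rng=random):
    """Return the orientation shown for a line of the given color.

    With probability P_ASSOCIATED the associated orientation is shown;
    otherwise one of the three other orientations is drawn uniformly.
    """
    if rng.random() < P_ASSOCIATED:
        return ASSOCIATED[color]
    others = [o for o in ORIENTATIONS if o != ASSOCIATED[color]]
    return rng.choice(others)

# Example: simulate a short block of "red" trials
trials = [draw_trial("red") for _ in range(20)]
```

Under this scheme the associated orientation is only the modal outcome (about 50% of trials, versus roughly 17% for each alternative), which conveys how loose the regularity was that participants failed to learn.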

All methods, data, code, results, and findings are public. Null results are self-published (https://sites.google.com/site/rosannerademaker/research/filedrawer). Published work is available on the Open Science Framework (OSF). The work performed under this grant has been presented on the following occasions:
2020.02 Talk at the Max Planck Gesellschaft (at the invitation of Patricia Drück), Berlin, Germany
2020.01 Talk at the European Institute of Neuroscience (at the invitation of Mathias Bähr), Göttingen, Germany
2020.01 Talk at the Institute of Ophthalmology, University College London (at the invitation of Andrew Dick), United Kingdom
2019.12 Talk at the Stuttgart Center for Simulation Science (at the invitation of Prof. Dr. Thomas Ertl), Computer Science department, University of Stuttgart, Germany
2019.07 Workshop on “Dynamics and limitations of working memory” (at the invitation of Albert Compte & Zachary Kilpatrick), Annual Organization for Computational Neurosciences (OCNS) Meeting, Barcelona, Spain
2019.06 Colloquium talk on visual working memory, Royal Netherlands Academy of Arts and Sciences (at the invitation of Stefan van der Stigchel & Chris Olivers), Amsterdam, the Netherlands
2019.06 Masterclass on encoding models (at the invitation of Stefan van der Stigchel & Chris Olivers), Amsterdam, the Netherlands
2019.06 Talk at the Netherlands Institute for Neuroscience (at the invitation of Pieter Roelfsema), Amsterdam, the Netherlands

and at the following conferences: Society for Neuroscience Meeting 2017; Vision Sciences Society 2018 & 2019; Annual Meeting of the Society for Psychophysiological Research 2018; European Conference on Visual Perception 2018.
Since the main aims of the proposed research proved hard to achieve, the research focus shifted to understanding how visual images that are directly perceived with the eyes can coexist with visual images that are held only in the mind. On the one hand, seeing someone's face in front of you is quite different from merely recalling that same face. On the other hand, perception and memory also have a lot in common. For example, you might easily conjure up the face of a friend, and keep in mind many details about what their eyes look like or that typical expression they make right before bursting into laughter, even if your friend is nowhere to be seen. In this line of work, one paper has been published in which we used fMRI to measure responses from human brains (Figure 2). We showed that the part of cortex known to process visual inputs from the eyes is also involved when a memory of something is actively being recalled (Figure 3). This novel finding has spurred questions about representation in the brain, and about how the real (the outside world) can be dissociated from thoughts (the world inside our heads). This work may have important implications for understanding hallucinations.