Periodic Reporting for period 1 - SEEING FROM CONTEXT (The neural basis of visual interaction between scenes and objects)
Reporting period: 2016-06-01 to 2018-05-31
2. When do contextually induced neural representations of objects and scenes emerge? The second part of SEEING FROM CONTEXT aimed to reveal how long it takes, from the moment we see a visual scene, to generate a representation of a feature that is contextually defined by the complementary stream. We therefore used MEG with a paradigm similar to that of the fMRI experiments, testing the effects of scenes on the time-course of object representation (n=25) and the effects of objects on the time-course of scene representation (n=28). We found that in both cases interactive processes peaked at around 320 ms after visual onset (Brandman and Peelen, 2017, J Neurosci; Brandman and Peelen, in preparation). This timing is about 100 ms later than the peak representation of intact, isolated objects. Taken together with the fMRI results, this suggests a longer route for interactive scene-object processing, in which visual information is processed along both pathways and then projected onto the complementary pathway, resulting in a delayed sharpening of the representation.
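To make the analysis logic concrete, the sketch below illustrates the general idea of time-resolved decoding used to estimate when a neural representation emerges: a classifier is trained and tested separately at each time point, and the peak of the resulting accuracy curve yields a latency estimate of the kind reported above (~320 ms). The data are simulated and all names, parameters, and numbers are illustrative assumptions, not the project's actual pipeline.

```python
# Minimal sketch of time-resolved MEG decoding (simulated data; illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_sensors, n_times = 200, 100, 61       # assumed epoch layout
times = np.linspace(-0.1, 0.5, n_times)           # seconds relative to stimulus onset
X = rng.standard_normal((n_trials, n_sensors, n_times))
y = rng.integers(0, 2, n_trials)                  # e.g. two object categories

# Inject a weak category-specific signal after ~300 ms so the decoder has something to find.
X[np.ix_(y == 1, np.arange(10), times > 0.3)] += 0.5

# Decode the category separately at each time point; the accuracy peak gives the latency estimate.
accuracy = np.array([
    cross_val_score(LogisticRegression(max_iter=1000), X[:, :, t], y, cv=5).mean()
    for t in range(n_times)
])
print(f"Peak accuracy {accuracy.max():.2f} at ~{times[accuracy.argmax()] * 1000:.0f} ms")
```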
These data, together with online behavioral data collected for these experiments (n > 100), have been presented at international conferences and published in leading journals in the field. We are currently preparing a review article promoting a new approach to the dual-pathway concept of object and scene processing. Our findings also gave rise to two follow-up questions, leading to two additional studies. In the first study we asked whether scene-object contextual integration is an automatic process or whether it is gated by attention. This was tested with MEG, using a paradigm similar to that of the previous experiments with an added manipulation of attention; the data are currently under analysis. In the second study we asked whether object representations are shaped not only by contextual visual information but also by external, non-visual information. We therefore tested the effects of auditory and semantic input on the visual representation of objects with MEG. We found that both words and natural sounds facilitated the representations of objects, and that words were more effective facilitators than natural sounds, suggesting that the two engage separate routes in the facilitation of visual perception.
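As a complement, the sketch below shows one simple way the relative strength of facilitation by different cue types (e.g., words versus natural sounds) could be compared: per-subject decoding accuracies in each condition are contrasted with a paired sign-flip permutation test. The accuracies are simulated placeholders and the test is a generic choice, not necessarily the statistics used in the studies described above.

```python
# Hedged sketch: paired permutation test comparing facilitation across cue conditions.
# All numbers are simulated placeholders, not data from the studies above.
import numpy as np

rng = np.random.default_rng(1)
n_subjects = 28
acc_word = 0.58 + 0.04 * rng.standard_normal(n_subjects)    # assumed per-subject accuracies
acc_sound = 0.55 + 0.04 * rng.standard_normal(n_subjects)

diff = acc_word - acc_sound
observed = diff.mean()

# Build a null distribution by randomly flipping the sign of each subject's difference.
n_perm = 10000
flips = rng.choice([-1.0, 1.0], size=(n_perm, n_subjects))
null = (flips * diff).mean(axis=1)
p_value = (np.sum(np.abs(null) >= abs(observed)) + 1) / (n_perm + 1)

print(f"Mean difference (word - sound): {observed:.3f}, two-sided p = {p_value:.4f}")
```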