We easily categorize places and objects in a single glance, a computationally complex feat that presents a central challenge for vision neuroscience. Considerable evidence points to a division of scene and object processing into two distinct neural pathways, relying on different types of visual cues. However, scenes and objects are also known to interact strongly in visual perception, as seen in the contextual effects of background on object perception. At present, the neural mechanisms by which scenes and objects interact remain unknown, leaving a critical gap in our understanding of these two major visual pathways. The main goal of this multi-method proposal is to uncover the neural mechanisms of scene-object interactions. To this end, I propose three competing theoretical models. A parallel model predicts only stimulus-driven representations of scenes and objects in the visual cortex. In contrast, interactive models predict that representations of scenes and objects in the visual cortex influence one another. However, whereas a visual-interactive model suggests direct interaction between the two pathways, a feedback model suggests that the interaction is mediated by frontal regions. To adjudicate between these models, I propose a novel psychophysical paradigm of seeing objects from scene context and scenes from object context. Using this paradigm, I will examine how scene and object processing affect one another and identify the potential neural sources of these modulations using fMRI (objective 1). Thereafter, I will use MEG to decode the timeline of these neural processes (objective 2). Establishing a clear neurocognitive model of scene-object interaction would not only advance our understanding of the two central pathways of the ventral visual stream, but also contribute significantly to the definition of vision as an interactive system rather than a set of specialized parallel modules. Shifting from localized visual modules to interactive visual processes will broaden my expertise as a cognitive neuroscientist.