
Final Report Summary - COGNITSIMS (Simulating Brains: Cognition Grounded in the Simulation of Sensorimotor Processes in the Human Neocortex)

Grounded (embodied) cognition theory proposes that cognition is grounded in the processing of sensorimotor information. The main proposal for how this information grounds cognition is mental simulation. This process has been suggested to depend primarily on hierarchically organized convergence-divergence zones in association cortex. However, no direct neural tests of mental simulation or of this mechanism had been done. This project tested crucial timing predictions. By this account, processing proceeds sequentially from unimodal areas in one modality through multimodal association areas and then out to other unimodal areas. To test this, the project determined how and when the processing of sensorimotor features in the human brain affects the cortical dynamics as they unfold over time during visual object cognition. The researcher’s multiple-state interactive account of the cortical dynamics for visual cognition provided precise predictions about the timing and brain location of effects (Schendan and Maher, 2009; Schendan and Ganis, 2015). Such precise predictions allow the most powerful conclusions to be drawn from neuroscience data. To integrate human and non-linguistic animal data in future computational models, the studies focused on objects (i.e., not words) in the visual modality, which dominates in humans. The project involved developing new behavioural paradigms to determine how visual and motor action features affect cognition, and determining the brain basis of these processes using human neuroscience methods, including high-density event-related potentials (ERPs), brain imaging, and lesion methods. Findings from this project revealed for the first time how, when, and how much the processing of modal features affects the temporal dynamics of the cortical processes supporting visual object cognition. Such findings tested key timing predictions of the proposed cortical mechanism by which mental simulation grounds cognition in sensorimotor processing, and determined how, when, and how much cognition is grounded.

The main specific project objectives were to determine how (a) current sensorimotor features facilitate or interfere with the cortical dynamics for visual object categorization and (b) prior priming of sensorimotor features affects the cortical dynamics for visual object categorization and for delayed, surprise, episodic object recognition. Overall, findings so far support embodied cognition theory and reveal the brain basis of its mental simulation mechanism, but they also define limiting conditions regarding which features are embodied and regarding task specificity. Performance results indicate that color, shape, and motion, as well as motor action, affect visual object cognition, but size has little or no effect. ERPs defined when and how much visual and motor features affect cognition, including meaning. The ERP time course indicates that mental simulation of visual and motor features occurs automatically between 200 and 400 ms after seeing a picture, indexed by an N3 complex, whereas strategic mental simulation occurs later, between 400 and 700 ms, indexed by a late positive component (LPC). Other ERPs reveal further processes by which visual and motor features embody object cognition.

Color affects object categorization. When the color of an object matches prior experience with that category (is congruent, e.g., a green frog), object categorization is faster and more accurate than when the color does not match (is incongruent, e.g., a red frog). A limiting condition, however, is that the color congruency effect is found primarily for categories of objects for which color is a diagnostic feature (e.g., frog) and minimally or not at all for objects for which color is nondiagnostic (e.g., car); color is diagnostic when it is one of the top three features associated with the category. ERPs show that color congruency affects object knowledge after the initial feedforward pass through the posterior cortical areas that process objects, that is, after about 200 ms. At this time, top-down feedback inputs modulate knowledge and contribute to two states of mental simulation of sensorimotor features: earlier automatic simulation from 200 to 400 ms and later strategic simulation from 400 to 700 ms. Critically, the only ERP that is related to performance and occurs early enough to influence it is the N3 complex, the index of automatic mental simulation. The N3 is smaller for congruent than incongruent colors, more so for diagnostic than nondiagnostic objects. A parietal P2/P3 complex from 200 to 400 ms and the LPC index of strategic mental simulation after 600 ms also show congruency effects, being more positive for incongruent than congruent objects, regardless of diagnosticity. Thus, color knowledge affects object meaning and category decisions at multiple processing times, affecting different cognitive functions, during top-down interactive feedback cortical activity. Critically, the N3 and performance results show that color facilitates object cognition when congruent with object knowledge but interferes when incongruent. This provides strong and clear support for an embodied cognition view of object knowledge and meaning.
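The congruency effects described above amount to a simple amplitude measure within the reported time windows. The sketch below is an illustrative example only, not the project's actual analysis pipeline: it computes a congruency effect as the mean amplitude difference between incongruent and congruent trials within the N3 (200-400 ms) and LPC (400-700 ms) windows. The array shapes, sampling rate, and variable names are assumptions made for illustration, and the placeholder data are random.

```python
# Illustrative sketch (assumed variable names and shapes, random placeholder data):
# quantify an ERP colour-congruency effect as the mean amplitude difference
# between incongruent and congruent trials in the N3 and LPC time windows.
import numpy as np

def window_mean_amplitude(epochs, times, tmin, tmax):
    """Mean amplitude over a time window, averaged across trials and channels.

    epochs : array, shape (n_trials, n_channels, n_times), in microvolts
    times  : array, shape (n_times,), in seconds relative to picture onset
    """
    mask = (times >= tmin) & (times <= tmax)
    return epochs[:, :, mask].mean()

# Placeholder data: 100 trials x 64 channels, 1000 Hz, epochs from -0.2 to 0.8 s.
sfreq = 1000.0
times = np.arange(-0.2, 0.8, 1.0 / sfreq)
rng = np.random.default_rng(0)
congruent = rng.normal(0.0, 1.0, size=(100, 64, times.size))
incongruent = rng.normal(0.0, 1.0, size=(100, 64, times.size))

# N3 window: 200-400 ms (automatic simulation); LPC window: 400-700 ms (strategic).
for label, (tmin, tmax) in {"N3": (0.2, 0.4), "LPC": (0.4, 0.7)}.items():
    effect = (window_mean_amplitude(incongruent, times, tmin, tmax)
              - window_mean_amplitude(congruent, times, tmin, tmax))
    print(f"{label} congruency effect (incongruent - congruent): {effect:.3f} µV")
```

With real data, a positive difference in these windows would correspond to the reported pattern of more positive ERPs for incongruent than congruent objects.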

Regarding the visual feature of size, six behavioural experiments using similar methods to the studies of color found little or no evidence that size knowledge affects object category and size decisions. Thus a limiting condition for embodied cognition is that size knowledge has little or no effect on object cognition.

The visual feature of shape was investigated in several neurophysiological studies, and results so far support embodied cognition and reveal the cortical dynamics of mental simulation of the visual shapes of objects. One experiment showed that automatic simulation of visual shape occurs between 200 and 400 ms and strategic mental simulation between 400 and 700 ms (Schendan and Ganis, 2012). Another experiment showed that, when parts of the shape are missing, the visible parts of the shape support mental simulation between 200 and 400 ms that enables objects to be categorized even under highly degraded or visually impoverished conditions (Schendan and Ganis, 2015). Two experiments indicate that posterior cortex stores knowledge and implicit memory about the spatial orientation of an object and supports mental simulation of object orientation. Four experiments indicate that priming of shape and other visual features (by reading sentences about them), as opposed to nonvisual features (e.g., taste, smell, sound), affects object cognition immediately and even after a long delay. Automatic mental simulation of shape and visual features affects different object cognition processes that occur at different processing times and reflect the activation of semantic memory and episodic memory. Findings indicate that automatic mental simulation of visual versus nonvisual features of objects is encoded into memory and improves later episodic recollection. This provides strong evidence that automatic mental simulation of visual features occurs during sentence reading and that these mentally simulated visual features are encoded into episodic memory. Finally, findings from several other experiments indicate that mental simulation during object cognition and reading involves top-down activation of visual features and involves word knowledge minimally, if at all.

Four neurophysiological experiments investigated the role of motor action features. Results so far indicate that motor action also embodies implicit learning and memory. For example, object categorization is affected not only by spatial orientation, a visual feature, but also by response-related processes, and more by priming of motor action than by the decision aspects of the response. Two experiments investigated the role of multiple sensorimotor features, specifically object orientation, motion, and motor action, using a representational momentum paradigm. Results so far indicate that mental simulation of motion and motor action affects object cognition early in processing.

In summary, research so far has advanced beyond the state of the art in the field at the start of the project in several ways. ERP findings have identified distinct times at which automatic and strategic mental simulation of visual features and motor action occur during object cognition. Overall, across all experiments, the brain sources of automatic mental simulation differ between color and shape features, and between these and other visual features versus nonvisual features (from other sensory modalities), consistent with different modality-specific processes underlying each, as predicted by embodied cognition theory. While color and motor action are sensorimotor features that affect object categorization, size has little or no effect, at least for automatic simulation. The findings define when, how much, and under what conditions visual and motor features affect visual object cognition, including meaning. Altogether, this provides essential information for future tests of the brain basis of grounded cognition theory and of vision, memory, and thinking. The impact of this research has been to define the limiting conditions under which embodiment effects occur and to pinpoint when and where in the human brain sensorimotor processing affects high-level cognition and meaning. A key outcome is a definition of the cortical dynamics that embody cognition, including meaning, through mental simulation of modal processing of shape, color, motion, and motor action features. This understanding of the brain basis of mental simulation, sensorimotor processes, meaning, and memory will inform education, technological development, creativity and innovation, and health goals.

Contact

John Martin, (Research Advisor)
Tel.: +44 01752 588931
E-mail
Record Number: 192337 / Last updated on: 2016-12-08