
HOW HUMANS ENCODE, REPRESENT AND USE BASIC SPATIAL INFORMATION IN PERCEPTION AND ACTION: BEHAVIORAL AND NEURAL EVIDENCE

Final Report Summary - MAPSPACE (HOW HUMANS ENCODE, REPRESENT AND USE BASIC SPATIAL INFORMATION IN PERCEPTION AND ACTION: BEHAVIORAL AND NEURAL EVIDENCE)


Francesco Ruotolo (Marie Curie Fellow)

F.Ruotolo@uu.nl

Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
Laboratory of Cognitive Science and Immersive Virtual Reality, Second University of Naples, Naples, Italy

Albert Postma (Project Leader; Scientist in Charge)

a.postma@uu.nl

Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
Korsakov Center Slingedael, Rotterdam, The Netherlands

In order to deal with a wide range of daily tasks, people need to use spatial information about elements in the environment. For example, if we are looking for our car keys we need to remember “where” we left them last time (e.g. on the desk), and if we decide to reach for and grasp them we need to specify “where” they are with respect to our body. These examples show that human beings commonly use two kinds of frames of reference to encode and mentally represent the locations of objects: an egocentric frame of reference, which specifies where an object is with respect to the body, and an allocentric frame of reference, which specifies where an object is with respect to another object in the external world (Klatzky, 1998; Burgess, 2006). Moreover, the spatial relation represented through an egocentric or an allocentric frame of reference can be defined as coordinate if it is based on a fine-grained metric code that allows precise discrimination of the distances between objects’ positions, or categorical if a more abstract code is used (e.g. left/right; above/below) (Kosslyn, 1987). Combining egocentric/allocentric frames of reference with categorical/coordinate spatial relations gives rise to four kinds of spatial representations: egocentric coordinate (e.g. the chair is 1 meter from me), egocentric categorical (e.g. the chair is on my right), allocentric coordinate (e.g. the chair is 1 meter from the table), and allocentric categorical (e.g. the chair is to the left of the table).
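The 2x2 taxonomy above can be made concrete with a small illustrative sketch (not part of the project’s materials; the function and variable names are hypothetical) that derives all four representations of a chair’s location from 2D positions of the observer, the chair, and a reference object:

```python
from math import hypot

def spatial_representations(observer, chair, table):
    """Illustrative sketch: compute the four basic spatial representations
    of the chair from 2D (x, y) positions. Positive x is to the right."""
    ego_dx = chair[0] - observer[0]   # chair relative to the body
    allo_dx = chair[0] - table[0]     # chair relative to the table
    return {
        # egocentric coordinate: metric distance from the body
        "ego_coordinate": hypot(chair[0] - observer[0], chair[1] - observer[1]),
        # egocentric categorical: left/right of the body
        "ego_categorical": "right" if ego_dx > 0 else "left",
        # allocentric coordinate: metric distance from another object
        "allo_coordinate": hypot(chair[0] - table[0], chair[1] - table[1]),
        # allocentric categorical: left/right of another object
        "allo_categorical": "right" if allo_dx > 0 else "left",
    }

# Matches the worked example in the text: the chair is 1 meter from me,
# on my right, 2 meters from the table, and to the left of the table.
reps = spatial_representations(observer=(0, 0), chair=(1, 0), table=(3, 0))
print(reps)
```

Note that the same physical layout yields all four representations at once; which one a person relies on is exactly what the project’s tasks were designed to probe.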
The aim of the current Marie Curie project was to demonstrate that the way people use these basic spatial representations depends on the characteristics of the task at hand. But what are the most relevant characteristics of a task? First, the kind of action: does the task require a movement of the body (e.g. reaching for a pen on the table) or not (e.g. verbally judging the distance between the church and the post office)? Second, the kind of elements involved: are they manipulable objects (e.g. 3D objects) or not (e.g. 2D figures)? Third, the timing of the action: are we using visual information available in front of us (e.g. reach now for the book on your right), or are we recovering memories of objects or places when the visual information is no longer available (e.g. remember where you left the keys last time)?
To answer these questions, eight behavioral experiments were carried out with a total of 192 participants. In four experiments, participants were asked to memorize the positions and names of triads of 3D objects or 2D figures and, either immediately or after five seconds, to indicate by pointing (i.e. touching a location with the index finger) where the object/figure closest to them had been, where the object/figure closest to another one had been, where a specific object/figure had been with respect to them, and where a specific object/figure had been with respect to another one. In the other four experiments, participants saw the same objects/figures but responded verbally to four questions: which object/figure was closest to them, which object/figure was closest to another one, which object/figure was on the right/left of another one, and which object/figure was on their right/left. The main results were that: 1) people are more accurate when they have to indicate by pointing the locations of objects with respect to themselves than with respect to an external element; 2) people become more accurate in indicating spatial relationships among elements in space especially when they have to recover the information from memory and when 2D figures and a verbal response are used. These results suggest that the four kinds of spatial representations have different functions and perhaps a different relevance for human beings. Indeed, the most relevant information in daily life concerns where and what the things we need are, which would explain why participants were particularly accurate in indicating the positions of objects with respect to their own body. When asked instead about the relationships among elements in space, they were particularly accurate when abstract/invariant (i.e. right/left) rather than metric spatial relations were required.
Indeed, knowing the metric relationships among elements in space is not particularly relevant for our survival, whereas more abstract and invariant spatial information is easier to memorize and then to use. For example, in order to navigate through the environment it is more convenient for the brain to memorize just the abstract relations among elements (e.g. at the church turn left, then when you reach the bank turn right) than metrically precise information (e.g. at the church, after 315 meters, turn left; then after 267 meters you will find the bank, etc.).

After the behavioral experiments, we carried out a 7 Tesla fMRI study to investigate the neural bases of the four spatial representations described above. Participants saw two vertical bars below a horizontal bar and had to judge whether the vertical bars were at the same distance (i.e. metric encoding) with respect to them or with respect to the horizontal bar, and whether the two vertical bars were on the same side or not with respect to them or with respect to the horizontal bar. A total of 17 participants were tested. In line with the functions hypothesized in the behavioral studies, we observed a specific activation of the parietal lobe, in particular a more right-sided activation of the superior parietal lobe, during egocentric coordinate/metric judgments, whereas a bilateral activation of the superior parietal lobe was observed for egocentric categorical judgments. This is a very important result because it shows that, even though this was not a motor task, brain areas responsible for the control of movement were more active when the spatial information was referred to the body. In contrast, the superior part of the hippocampus and some medio-temporal areas were more active during the allocentric categorical task than during the allocentric coordinate task. This suggests that medio-temporal areas are involved in encoding abstract relational spatial information about a configuration of elements in space.
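The four judgments in the bar task can be sketched as follows. This is an illustrative reconstruction, not the study’s actual stimulus code: each bar is reduced to a single horizontal position, and the function and parameter names are assumptions.

```python
def bar_judgments(bar1_x, bar2_x, observer_x, hbar_center_x):
    """Illustrative sketch of the four judgments in the fMRI task.
    bar1_x, bar2_x: horizontal positions of the two vertical bars;
    observer_x: the participant's body midline;
    hbar_center_x: the center of the horizontal reference bar."""
    def same_distance(ref):
        # coordinate (metric) judgment: equidistant from the reference?
        return abs(bar1_x - ref) == abs(bar2_x - ref)

    def same_side(ref):
        # categorical judgment: both bars on the same side of the reference?
        return (bar1_x - ref) * (bar2_x - ref) > 0

    return {
        "ego_coordinate": same_distance(observer_x),
        "ego_categorical": same_side(observer_x),
        "allo_coordinate": same_distance(hbar_center_x),
        "allo_categorical": same_side(hbar_center_x),
    }

# Bars symmetric around the body midline but not around the horizontal bar:
print(bar_judgments(bar1_x=-2, bar2_x=2, observer_x=0, hbar_center_x=1))
```

The point the sketch makes explicit is that the very same display can require any of the four representations depending only on the reference (body vs. horizontal bar) and the code (metric vs. side), which is what allowed the study to compare them with identical visual input.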

The above mentioned results are relevant for clinical practice because they provide neuroimaging evidence that supports the assessment of visuospatial deficits in neuropsychological diagnosis and rehabilitation. Indeed, the tests proposed in these experiments will allow clinicians to diagnose spatial dysfunctions more precisely, such as an impairment in allocentric categorical but not allocentric coordinate encoding, or an impairment in the egocentric encoding of metric but not abstract spatial relations. These studies can therefore be of relevance for agencies that offer assistance to elderly people, since one of the most important markers of dementia is the rapid decay of allocentric spatial information processing (Iachini et al., 2009).

Website of the project: www.ruotolofrancesco.net