CORDIS - EU-supported research results
Content archived on 2024-06-18

Multisensory integration in the cognitive representation of space

Final Report Summary - MULTISENSORYSPACE (Multisensory integration in the cognitive representation of space)

Remembering the location of things is a fundamental capacity of daily life. In our MULTISENSORYSPACE fellowship project, we systematically investigated human memory for the location of objects and events in a multisensory space. Since in the natural world many objects are characterised by multiple sensory attributes, object location memory is inherently multimodal. Our project investigated how multimodal landmarks contribute to the construction of a spatial representation of the environment. To contribute to this rather unexplored field, we conducted a series of original experiments in which we assessed how people encode and recall the position and the identity of visual, auditory and tactile objects.

Auditory localisation and auditory recognition: are they independent?

We studied working memory (WM) associations between the position and the identity of stimuli in the auditory domain. More specifically, we aimed to verify whether 'what' features (timbre and pitch) and 'where' features (the location of the sound source) are encoded independently or are automatically integrated into multi-featured auditory objects. We tested participants in a WM task in which one of the three features had to be retained in memory for immediate recall, while variations in the other two dimensions were irrelevant to the task. Results show an asymmetrical influence of both 'what' features on the encoding of sound location. Specifically, while task-irrelevant pitch and timbre variations impaired both non-spatial and spatial encoding, task-irrelevant location changes affected neither timbre nor pitch encoding. Our findings indicate an asymmetrical association between 'what' (timbre or pitch) and 'where' (location): task-irrelevant changes in timbre or pitch affect accuracy in the location task, but not vice versa. We conclude that features pertaining to the identity of sounds are automatically processed even when the task does not require it, whereas information about sound location can be filtered out of the memory representation when it is not relevant.
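The logic of this task-irrelevant variation design can be sketched in code. The following is a minimal illustration, not the project's actual experimental software: the feature values, the same/different probe framing and the function names are hypothetical, chosen only to show how one feature is probed for memory while the other two vary irrelevantly.

```python
import random

# Hypothetical feature values; the actual stimulus set is not given in the report.
LOCATIONS = ["left", "centre", "right"]
PITCHES = ["low", "mid", "high"]
TIMBRES = ["piano", "flute", "violin"]

def _values(feature):
    return {"location": LOCATIONS, "pitch": PITCHES, "timbre": TIMBRES}[feature]

def make_trial(relevant, irrelevant_change):
    """Build one study/probe pair.

    relevant:          the feature the participant must judge
                       ('location', 'pitch' or 'timbre')
    irrelevant_change: if True, one task-irrelevant feature changes between
                       study and probe and should be ignored
    """
    study = {
        "location": random.choice(LOCATIONS),
        "pitch": random.choice(PITCHES),
        "timbre": random.choice(TIMBRES),
    }
    probe = dict(study)
    # On half the trials the relevant feature changes (a 'different' trial).
    if random.random() < 0.5:
        probe[relevant] = random.choice(
            [v for v in _values(relevant) if v != study[relevant]])
    if irrelevant_change:
        other = random.choice([f for f in study if f != relevant])
        probe[other] = random.choice(
            [v for v in _values(other) if v != study[other]])
    correct = "same" if probe[relevant] == study[relevant] else "different"
    return study, probe, correct
```

The asymmetry reported above would show up as lower accuracy on `relevant="location"` trials when `irrelevant_change=True`, but no such cost on `relevant="pitch"` or `relevant="timbre"` trials.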

People: Franco Delogu, Mariangela Gravina, Tanja Nijboer, Albert Postma

Binding 'what' and 'where' in haptics

This study provides the first evidence of binding between 'what' and 'where' information in WM for haptic stimuli. We studied the mechanisms of WM binding for the identity (texture) and the location in reaching space of haptically explored stimuli. In particular, by adapting research methods previously used in the visual and auditory domains, we tested when and how location and texture are integrated into multidimensional representations in WM and when and how they can be dissociated. In an old-new recognition task, blindfolded participants were presented in their reaching space with sequences of three haptic stimuli varying in texture and location. They were then required to judge whether a single probe stimulus had been included in the sequence. Recall was measured both in a condition in which texture and location were both relevant to the task (experiment 1) and in two conditions in which only one feature had to be recalled (experiment 2). Results showed that when both features were task-relevant, participants showed a recall advantage when the location and the texture of the target probe were kept unaltered between encoding and recall, even though the association of location and texture was neither necessary nor required to perform the task. By contrast, when only one feature was task-relevant, the concurrent feature did not influence recall of the target feature. We conclude that attention to feature binding is not necessary for the emergence of feature integration in haptic WM. For binding to take place, however, both the identity and the location of items must be encoded and maintained in memory.
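The probe types that make an old-new recognition test diagnostic of binding can be sketched as follows. This is an illustration under assumed stimulus values (the texture names and positions are placeholders, not the materials actually used): an 'intact' probe repeats a studied texture-location pair, a 'recombined' probe re-pairs studied features, and a 'new' probe uses unstudied features.

```python
import random

# Placeholder stimulus values for illustration only.
TEXTURES = ["sandpaper", "felt", "wood", "foil"]
LOCATIONS = [1, 2, 3, 4, 5]   # positions in reaching space

def make_sequence(n=3):
    """A study sequence of n haptic stimuli, each a (texture, location) pair."""
    textures = random.sample(TEXTURES, n)
    locations = random.sample(LOCATIONS, n)
    return list(zip(textures, locations))

def make_probe(sequence, kind):
    """kind: 'intact'     -> a studied texture at its studied location
             'recombined' -> a studied texture paired with another studied location
             'new'        -> an unstudied texture at an unstudied location"""
    if kind == "intact":
        return random.choice(sequence)
    if kind == "recombined":
        (t1, _), (_, l2) = random.sample(sequence, 2)
        return (t1, l2)
    new_t = random.choice([t for t in TEXTURES
                           if t not in {t for t, _ in sequence}])
    new_l = random.choice([l for l in LOCATIONS
                           if l not in {l for _, l in sequence}])
    return (new_t, new_l)
```

A binding effect appears as more accurate "old" judgments for intact probes than for recombined probes, since both contain only studied features and differ solely in whether the pairing is preserved.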

People: Franco Delogu, Wouter Bergman Tiest, Tanja Nijboer, Astrid Kappers, Albert Postma

Multisensory processing during spatial navigation

Finding your way is undoubtedly a necessity of everyday life. Although numerous studies have addressed navigation ability based on performance in purely visual tasks, hardly any have focused on the contribution of auditory processing. In this study we examined how visual and auditory cues are used to navigate through virtual environments. The main goal was to assess to what extent auditory information (distal or proximal), in isolation and in combination with visual information, contributes to navigation ability. Virtual, interactive, three-dimensional mazes were used, consisting of rooms that were discernible only by visual cues, auditory cues, or a combination of the two. After memorising the environment with one set of cues, participants were placed in a random room of the maze with the same set of cues and were instructed to find their way to the exit. Results so far indicate that visual cues lead to better navigation performance than auditory cues. Auditory information can be used to navigate through a virtual environment, but it does not contribute to performance when visual information is also available.

People: Ineke van der Ham, Milan van der Kuil, Franco Delogu

Spatial and temporal encoding in auditory and visual WM

Information about where and when events happened seems naturally linked, but only a few studies have investigated if and how the two are associated in bound representations in WM. In this project we study whether the location of items and their temporal order are jointly or independently encoded. We also test whether spatiotemporal binding is influenced by the sensory modality of the items. In a series of experiments, participants memorise the location and / or the serial order of five environmental sounds or pictures presented sequentially from different locations. Participants are then asked to recall either the item locations or their order of presentation within the sequence. Attention during encoding is manipulated by contrasting blocks of trials in which participants are requested to encode only one feature with blocks in which they have to encode both features. Results show an interesting effect of modality. In audition, accuracy in serial order recall is affected by the simultaneous encoding of item location, while recall of item location is unaffected by the concurrent encoding of serial order. In vision, by contrast, accuracy in both the serial order and the location tasks was worse in the dual encoding condition. So far we conclude that binding of the serial order and location of items in WM is not automatic, and that the costs of their simultaneous maintenance are both task- and modality-dependent.

People: Franco Delogu, Tanja Nijboer, Albert Postma

During his two years at UU, Franco Delogu also participated in other research projects besides MultisensorySpace. The most relevant is a project on music and the brain.
People: Franco Delogu, Ineke van der Ham, C. Marie, G. Lampis, M. Olivetti Belardinelli, M. Besson, R. Brunetti, and A. D'Ausilio.