CORDIS - EU research results

GeoViSense: Towards a transdisciplinary human sensor science of human visuo-spatial decision making with geographic information displays

Periodic Reporting for period 3 - GeoViSense (GeoViSense: Towards a transdisciplinary human sensor science of human visuo-spatial decision making with geographic information displays)

Reporting period: 2020-11-01 to 2022-04-30

Geographic information displays (GIDs) such as mobile maps with navigation guidance have become an integral part of our everyday lives. When seeking directions to a new restaurant or the quickest connection to the office, people retrieve their smartphones and navigate to the destination almost effortlessly. However, does the ease with which we can now find our way with such assistance come at a price? Without GIDs, are we now more likely to become lost than previous generations were?

“GeoViSense” attempts to answer these questions by studying people’s navigation behavior in real and virtual environments, across various situations and contexts. Behavior in virtual environments is easier to control and predict; outdoor studies are rare because they are much harder to control, but they allow us to determine the extent to which our predictions generalize to everyday situations. Both perspectives are necessary for a deeper understanding of human spatial behavior and GIDs. Mobile physiological devices, e.g. galvanic skin response (GSR) sensors, eye trackers (ET), and electroencephalography (EEG), can be validated in a relatively controlled virtual reality setup, and only recently has it become conceivable to also use them to investigate in-situ, real-world behavior.

The GeoViSense research team is also interested in questions at the forefront of science and society more generally: What is the relationship between technology and human abilities? How do people adapt to technology, how is technology adapted to fit specific needs, and how can it be personalized for specific types of users and use contexts? How do people respond to stress during navigation, for example by focusing on a limited amount of easily available information? Is there a way to adapt the GID to account for this change in a person’s behavior? Using both physiological and behavioral measurements, we investigate the extent to which a user is stressed or engaged during navigation, how to adapt the GID accordingly, and how the adaptive use of GIDs affects human spatial ability and spatial learning.
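As a toy illustration of such stress-aware adaptation (not the project’s actual algorithm), the following Python sketch maps a hypothetical normalized stress score, e.g. derived from GSR, to the amount of landmark detail a mobile map might show. The `adapt_map_detail` helper and its thresholds are illustrative assumptions:

```python
def adapt_map_detail(stress_score: float, max_landmarks: int = 9) -> int:
    """Map a normalized stress score (0 = calm, 1 = highly stressed)
    to the number of landmarks the map should display.

    Under stress, navigators focus on a limited amount of information,
    so the display is simplified accordingly. Thresholds are
    illustrative, not taken from the project's adaptation logic.
    """
    if stress_score < 0.3:        # relaxed: show full landmark detail
        return max_landmarks
    if stress_score < 0.7:        # moderate load: reduce map clutter
        return max_landmarks // 2
    return 1                      # high stress: route-critical landmark only
```

In a real adaptive GID the thresholds would have to be calibrated per user, since physiological baselines vary considerably between individuals.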

GeoViSense objectives
.. we integrate human-visualization-environment research across the sciences (i.e. natural, social/behavioral, engineering, etc.)
.. we develop missing, empirically evaluated design guidelines for human-computer interfaces to support pedestrian mobility in urban environments, including affective, effective, and efficient spatio-temporal decision-making and spatial learning
.. we develop unconventional evaluation methods to assess perceptual, cognitive, psycho-physiological, and display design factors across broad ranges of users and mobile urban use contexts, and
.. we aim to scale up empirical methods, from today’s controlled behavioral lab paradigms, towards a new in-situ mobile human sensor science.
The first year of this highly interdisciplinary, high-risk project, spanning the engineering, natural, and social sciences, was spent recruiting researchers who are not only capable of, but also willing to, work at the fringes of the many participating sciences to solve the challenges posed by this project. We succeeded in assembling an exciting research team combining backgrounds from geography, GIScience, geomatics, cartography, neuroscience, cognitive science, psychology, and engineering.

Only this allowed us, during the second year and since, to effectively assemble and build the necessary technical infrastructure, to design and develop the mobile and VR testing procedures, and to develop the required code for specialized human-sensing hardware and software (i.e. EEG, GSR, virtual reality (VR) displays, mobile maps for navigation assistance, etc.) to be deployed in our planned VR and outdoor experiments. Testbed setups and project code are made accessible in an open-science manner as they become available and ready for distribution.

During the second year we consolidated our unique research team and completed two outdoor navigation studies, under difficult weather and COVID-19 conditions, with hard-to-recruit Swiss military personnel. We conducted and participated in several international and interdisciplinary workshops to raise, co-develop, and share agenda-setting research questions, approaches, experimental designs, testing materials, etc., and to bring together researchers across the cognate disciplines, including potential stakeholders interested in our research outcomes. We started to disseminate our first results in mostly open proceedings and journal publications.
We have developed a unique experimental setup capitalizing on ambulatory human-sensing methods for outdoor navigation studies. Exploiting GISystems and technology, we designed online, interactive mobile map interfaces with 2D and 3D displays, deployed on open Android technology, with GPS to track movement and map-interaction behavior, complemented by ET, EEG, and GSR to study spatial knowledge acquisition (i.e. incidental spatial learning), cognitive load, visual attention, and affect in-situ during navigation.

We are in the process of finalizing the setup of an empirical VR CAVE testbed for initial pilot testing and for deployment of the developed VR experiments to study pedestrian navigation in a controlled lab setting. For these studies we collect EEG data to study navigators’ cognitive load and to feed this data stream back into the VR display, in real time, for in-situ geographic information adaptation. We also developed VR-based post-test spatial memory assessment methods, extending the tested and proven 2D paper-and-pencil instruments for deployment in VR studies. We implemented this in the open Unity framework, based on C#, and it will be made available in an open science repository once properly tested and validated. Empirical data collection in the VR setting was delayed due to COVID-19 and is scheduled to start during summer 2020, under heightened COVID-19 health and safety conditions.
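One common way to operationalize EEG-based cognitive load for real-time display adaptation is a theta/alpha band-power ratio. The Python sketch below is a minimal illustration under that assumption; the function names and the threshold are hypothetical, and the project’s actual closed-loop pipeline is not specified here:

```python
def workload_index(theta_power: float, alpha_power: float) -> float:
    """Theta / alpha band-power ratio, a common EEG proxy for
    cognitive load (a higher ratio suggests higher load). Band powers
    are assumed to be precomputed from the incoming EEG stream."""
    return theta_power / max(alpha_power, 1e-12)  # guard against division by zero

def should_simplify_display(theta_power: float, alpha_power: float,
                            threshold: float = 1.5) -> bool:
    """Illustrative real-time rule: simplify the VR display when the
    workload index exceeds a (hypothetically calibrated) threshold."""
    return workload_index(theta_power, alpha_power) > threshold
```

In practice the threshold would need per-participant calibration against a resting baseline, and the band powers would be smoothed over a sliding window before being fed back into the display.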

Our aim is to assess
.. how integrating landmark information on mobile map displays can improve individuals’ spatial memory of the environment, assessed with judgments of relative direction (JRD) tasks.
.. the relationship between the number of landmarks displayed on a mobile map (shown to be relevant for spatial learning) and navigators’ cognitive load during wayfinding, measured with EEG.
.. how spatial abilities and working-memory span might influence cognitive load during landmark learning while navigating, and how this affects spatial memory, and
.. the optimal level of visual realism (i.e. image, drawing, sketch, etc.) for landmark depiction on mobile navigation devices, such that it 1) supports effective and efficient navigation performance (i.e. improves task accuracy and reduces completion time), 2) mitigates pedestrians’ cognitive load, and 3) supports spatial knowledge acquisition during navigation, even under stress.
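For context, a JRD task asks participants to imagine standing at one landmark, facing a second, and to point toward a third; performance is typically scored as absolute angular error. A minimal scoring helper (illustrative, not the project’s analysis code):

```python
def jrd_angular_error(true_bearing: float, judged_bearing: float) -> float:
    """Absolute angular error, in degrees (0-180), of one judgment of
    relative direction: how far the pointed direction deviates from
    the true direction to the target landmark. Bearings are in degrees."""
    diff = (judged_bearing - true_bearing) % 360.0
    return min(diff, 360.0 - diff)  # take the shorter way around the circle
```

Wrapping the difference modulo 360 ensures that, e.g., pointing at 350° when the target lies at 10° counts as a 20° error rather than 340°.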

We plan to derive empirically validated design guidelines for assistive navigation interfaces that support affective, effective, and efficient wayfinding in unknown urban environments, while helping navigators retain their own spatial abilities and capacities for spatial learning. These aspects are very important for maintaining human independence in an increasingly mobile information society.
GeoViSense: Indoor VR and outdoor mobile human sensing during assisted navigation and wayfinding