
Using Embodied Cognition to Create the Next Generations of Body-based User Interfaces

Periodic Reporting for period 4 - BODY-UI (Using Embodied Cognition to Create the Next Generations of Body-based User Interfaces)

Reporting period: 2019-11-01 to 2020-04-30

Recent advances in user interfaces (UIs) allow users to interact with computers using only their body, so-called body-based UIs. Instead of moving a mouse or tapping a touch surface, people can use whole-body movements to navigate in games, gesture in mid-air to interact with large displays, or scratch their forearm to control a mobile phone. This project aims at establishing the scientific foundation for the next generations of body-based UIs. The main novelty in our approach is to draw on results and methods from research on embodied cognition, which suggests that thinking (including reasoning, memory, and emotion) is shaped by our bodies and, conversely, that our bodies reflect thinking. We use embodied cognition to study how body-based UIs affect users and to increase our understanding of similarities and differences to device-based input. From those studies, we develop new body-based UIs, both for input (e.g. gestures in mid-air) and output (e.g. stimulating users’ muscles to move their fingers), and evaluate users’ experience of interacting through their bodies. We also show how models, evaluation criteria, and design principles in HCI need to be adapted for embodied cognition and body-based UIs.
We have made several achievements with respect to understanding the relation between body-based user interfaces and thinking (O1/WP1). We have shown that affect may be detected using sensors in commodity smartphones with no training time, based on a link between affect and movement (Mottelson and Hornbæk, 2016). We used a similar experimental paradigm to show that more complex psychological phenomena, such as lying, may also be predicted from movements (Mottelson, Knibbe & Hornbæk, 2018). In an award-winning experiment (Jansen & Hornbæk, 2018), we investigated the idea that body postures, such as expansive and constrictive postures, may affect people’s emotion and cognition. Based on Bayesian analyses, we found that incidental power poses are less relevant in HCI than measures of physical or cognitive comfort. Finally, we have explored the relation between motion of the body and perception of haptic information extensively (e.g. Strohmeier & Hornbæk, 2017; Strohmeier, Boring & Hornbæk, 2018). The key is our novel design of non-visual interfaces that engage with the body without requiring visual metaphors.

The second strand of work has concerned the design and prototyping of new and better body-based input (O2/WP2). We have characterized how users placed and recalled 30 items on the hand and forearm (Bergstrom-Lehtovirta et al., 2017). We have also focused on augmented and virtual reality, because these technologies use the users’ body extensively and because they allow us to modify the appearance of the users’ body (by changing their representation in VR, their avatar). We have developed an approach that adapts an avatar iteratively (McIntosh et al., 2020), so that users become more effective at the task after a number of iterations.

Part of the work in the project has concerned body-based output, corresponding to objective O3/WP3. This part of the work has focused on electrical muscle stimulation. We have developed BodyIO, a prototype wearable sleeve of 60 electrodes for asynchronous electromyography (EMG) and electrical muscle stimulation (EMS); see Knibbe et al. (2017). BodyIO enables the creation of multi-channel, multi-electrode stimulation patterns that produce a wide range of wrist, hand, and finger gestures through one device. BodyIO also explores the interplay of reading and writing bio-electrical signals.
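To illustrate what a multi-channel, multi-electrode stimulation pattern could look like in software, the sketch below represents a pattern as a list of pulses across electrode pairs. The class names, electrode indices, and parameter values are hypothetical illustrations; they are not the BodyIO firmware interface.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Pulse:
    anode: int          # electrode index driving current into the arm (hypothetical numbering)
    cathode: int        # electrode index returning the current
    amplitude_ma: float # pulse amplitude in milliamperes
    width_us: int       # pulse width in microseconds

@dataclass
class Pattern:
    name: str
    pulses: List[Pulse] # pulses interleaved across channels
    repeat_hz: float    # how often the whole pattern repeats

# A hypothetical pattern combining two electrode pairs to drive a wrist-flexion gesture.
wrist_flex = Pattern(
    name="wrist_flex",
    pulses=[
        Pulse(anode=12, cathode=47, amplitude_ma=8.0, width_us=200),
        Pulse(anode=15, cathode=50, amplitude_ma=6.5, width_us=200),
    ],
    repeat_hz=40.0,
)
```

Representing patterns as data rather than hard-coded channel commands makes it straightforward to compose, store, and swap gestures on a single device, which is the kind of flexibility the 60-electrode sleeve is meant to offer.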

EMS is not the only modality we have explored. In particular, we have devised a system called Magnetips (McIntosh et al., 2019), which enables users to use their fingertip as input in a three-dimensional space around the device. To achieve this, we affixed a magnet to the fingertip and used magnetic sensing to approximate the position of the finger relative to the device.
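As a rough illustration of the sensing idea, the field magnitude of a small magnet falls off approximately with the cube of distance, so a magnetometer reading can be turned into a distance estimate. The sketch below shows this simplification only; the calibration constant and readings are hypothetical, and the actual Magnetips pipeline uses a more complete tracking model than this.

```python
import math

# Hypothetical calibration constant relating dipole strength to field magnitude (tesla * m^3).
K = 5e-9

def distance_from_field(bx: float, by: float, bz: float) -> float:
    """Estimate finger-to-sensor distance from the measured field magnitude.

    For a point dipole, |B| ~ K / r^3, so r ~ (K / |B|) ** (1/3).
    """
    magnitude = math.sqrt(bx * bx + by * by + bz * bz)
    return (K / magnitude) ** (1.0 / 3.0)

# Example magnetometer reading in teslas (hypothetical values); prints roughly 5.1 cm.
print(f"Estimated distance: {distance_from_field(2e-5, 1e-5, 3e-5) * 100:.1f} cm")
```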

The project has also been concerned with how to evaluate body-based user interfaces, corresponding to project objective O4 and work package WP4. The most important result here is that we have introduced a measure for tool extension in HCI by using a visual-tactile interference paradigm (Bergström et al., 2019). In this paradigm, an index of tool extension is given by the response-time difference between crossmodally congruent and incongruent stimuli, with tactile stimuli on the hand and visual stimuli on the tool. We use this measure to show that findings on tool extension apply to interaction with computer-based tools.
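For illustration, such an index can be computed as the difference between mean response times in incongruent and congruent trials. The sketch below shows one way to do this; the trial layout and response times are hypothetical and are not data from Bergström et al. (2019).

```python
import statistics

# Hypothetical per-trial records: whether the visual distractor on the tool was
# congruent with the tactile stimulus on the hand, and the response time in ms.
trials = [
    {"congruent": True,  "rt_ms": 512},
    {"congruent": False, "rt_ms": 561},
    {"congruent": True,  "rt_ms": 498},
    {"congruent": False, "rt_ms": 540},
]

congruent_rts = [t["rt_ms"] for t in trials if t["congruent"]]
incongruent_rts = [t["rt_ms"] for t in trials if not t["congruent"]]

# A larger difference is read as a stronger indication that the tool has been
# incorporated into the user's body representation.
tool_extension_index = statistics.mean(incongruent_rts) - statistics.mean(congruent_rts)
print(f"Tool extension index: {tool_extension_index:.1f} ms")
```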

Finally, our work has led to some broader ideas about what human-computer interaction is and the role of body-based user interfaces in that field (O5 and WP5). We have developed a meta-scientific account of HCI research as problem-solving (Oulasvirta & Hornbæk, 2016). We argued that most HCI research is about three main types of problem: empirical, conceptual, and constructive. We elaborated upon Laudan’s concept of problem-solving capacity as a universal criterion for determining the progress of solutions (outcomes).

Second, we have applied the problem-solving view to the concept of interaction (Hornbæk & Oulasvirta, 2017), a concept that is field-defining, yet curiously confused and underdefined. We argued that attempts to directly define interaction have not produced agreeable results. Still, we have extracted from the literature distinct and highly developed concepts, for instance describing interaction as dialogue, transmission, optimal behavior, embodiment, and tool use. These have helped us think about the scope of interaction and what good interaction is.
The aim of this project is to establish the scientific foundation for creating the next generations of body-based user interfaces. We seek to understand how using the body to control computers changes the way users think and to demonstrate body-based user interfaces that offer new and better forms of interaction with computers. The long-term vision is to establish a device-less, body-based paradigm for using computers. So far we have achieved (a) a demonstration of several effects of the link between body and thinking, which might have implications for the design of future computing systems; (b) design guidance for skin-based input as well as empirical results about the effectiveness of skin input; (c) an electrical muscle-stimulation prototype which demonstrates the feasibility and effectiveness of body-based output; and (d) a form of active haptic feedback, with potential applications in virtual reality and gesture interfaces.
Figure: The BodyIO prototype