
BODY-UI Report Summary

Project ID: 648785
Funded under: H2020-EU.1.1.

Periodic Reporting for period 1 - BODY-UI (Using Embodied Cognition to Create the Next Generations of Body-based User Interfaces)

Reporting period: 2015-05-01 to 2016-10-31

Summary of the context and overall objectives of the project

Recent advances in user interfaces (UIs) allow users to interact with computers using only their body, so-called body-based UIs. Instead of moving a mouse or tapping a touch surface, people can use whole-body movements to navigate in games, gesture in mid-air to interact with large displays, or scratch their forearm to control a mobile phone. This project aims at establishing the scientific foundation for the next generations of body-based UIs. The main novelty in our approach is to use results and methods from research on embodied cognition, which suggests that thinking (including reasoning, memory, and emotion) is shaped by our bodies and, conversely, that our bodies reflect thinking.
We use embodied cognition to study how body-based UIs affect users, and to increase our understanding of similarities and differences to device-based input. From those studies we develop new body-based UIs, both for input (e.g., gestures in mid-air) and output (e.g., stimulating users’ muscles to move their fingers), and evaluate users’ experience of interacting through their bodies. We also show how models, evaluation criteria, and design principles in HCI need to be adapted for embodied cognition and body-based UIs.

Work performed from the beginning of the project to the end of the period covered by the report and main results achieved so far

The main results are in five areas. First, we have explored the relation between the body and thinking. Based on the literature on embodied cognition, we hypothesized that body movements could be used to infer the affect of computer users and, conversely, that affect might be induced by particular body postures. We have used sensors in commodity smartphones to estimate affect in the wild, with no training time, based on a link between affect and movement. A first experiment had 55 participants do touch interactions after exposure to positive or neutral emotion-eliciting films; negative affect resulted in faster but less precise interactions, in addition to differences in rotation and acceleration. Using off-the-shelf machine learning algorithms, we report 89.1% accuracy in binary affective classification, grouping participants by their self-assessments. A follow-up experiment with 127 participants, who again did touch interactions, validated these findings with naturally occurring affect. We have further explored the influence of posture on users’ thinking. The initial observation was that input and output devices are diversifying beyond keyboards and mice, leading to more variation in incidental body postures during interaction. To investigate, we ran two studies in which we imposed two types of user posture through interface design: (1) we asked 44 participants to tap areas on a wall-sized display and measured their self-reported sense of power; (2) we asked 80 participants to play a game on a large touchscreen designed to measure their willingness to take risks. Neither experiment produced evidence of differences between incidental body postures. However, further analysis of our data indicates that individual personality traits might interact with the hypothesized effect.
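
The report does not detail the classification pipeline; the following is a minimal, purely illustrative sketch of binary affect classification from touch-interaction features using an off-the-shelf classifier (scikit-learn), with synthetic data standing in for the real recordings. The feature set, the choice of random forest, and all parameters are assumptions rather than the project's actual method.

```python
# Purely illustrative sketch: binary affect classification from per-interaction
# touch features with an off-the-shelf classifier. The feature set, the choice
# of classifier, and the synthetic data are assumptions, not the project's
# actual pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical per-interaction features, e.g. tap speed, distance from target
# centre, and summary statistics of device rotation and acceleration.
X = rng.normal(size=(200, 6))           # 200 interactions x 6 features
y = rng.integers(0, 2, size=200)        # self-assessed affect: 0 = negative, 1 = positive

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)   # per-participant splits would be more realistic
print("mean cross-validated accuracy:", scores.mean())
```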

Second, we have explored skin input. The use of the skin as an interaction surface is gaining popularity in the HCI community, in part because the human skin provides an ample, always-on surface for input to smart watches, mobile phones, and remote displays. We have conducted an in-depth study in which participants placed and recalled 30 items on the hand and forearm. We found that participants recalled item locations by using a variety of landmarks, personal associations, and semantic groupings. Although participants most frequently used anatomical landmarks (e.g., fingers, joints, and nails), recall rates were higher for items placed on personal landmarks, including veins and pigments. We further found that personal associations between items and skin properties at a given location (e.g., soft areas or tattoos) improved recall. To offer an alternative perspective on how we might design on-body interactions, we conducted a questionnaire asking if, how, and why people mark their skin. We found that visibility and ease of access were important factors in choosing to mark the body. We also found that while some participants consider marking the body a private activity, most participants perceive such markings as a public display. This tension between the personal nature of on-body interaction and the skin as a public display, as well as the hedonic uses of body markings, presents interesting design challenges.
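
As a purely hypothetical illustration of how such findings might inform an on-body interface, the small sketch below represents an on-skin layout that distinguishes anatomical from personal landmarks; the class names, fields, and example values are invented for illustration and are not an artefact of the project.

```python
# Purely illustrative sketch: an on-skin layout distinguishing anatomical from
# personal landmarks, reflecting the finding that personal landmarks aided
# recall. Class names, fields, and example values are invented.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class SkinLandmark:
    name: str                   # e.g. "index fingertip", "scar near wrist"
    kind: str                   # "anatomical" or "personal"
    item: Optional[str] = None  # the item the user placed on this landmark

layout: List[SkinLandmark] = [
    SkinLandmark("index fingertip", "anatomical", item="call home"),
    SkinLandmark("scar near wrist", "personal", item="music player"),
]

def recall(layout: List[SkinLandmark], landmark_name: str) -> Optional[str]:
    """Return the item placed at a named landmark, if any."""
    for lm in layout:
        if lm.name == landmark_name:
            return lm.item
    return None

print(recall(layout, "scar near wrist"))  # -> music player
```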

Third, we have explored body-based output through electrical muscle stimulation (EMS). Reading signals from muscles and writing signals to muscles have recently received much attention in HCI. We developed BodyIO, a prototype wearable sleeve of 60 electrodes for asynchronous electromyography (EMG) and electrical muscle stimulation (EMS). It enables multi-channel, multi-electrode stimulation patterns that produce a wide range of wrist, hand, and finger gestures through one device. BodyIO also explores the interplay of reading and writing bio-electrical signals. The prototype lays the foundation for using multi-electrode stimulation patterns for increased resolution, and for combining EMG and EMS for auto-calibration.
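
No implementation details of BodyIO are given here; the sketch below merely illustrates how a multi-channel, multi-electrode stimulation pattern over an electrode sleeve might be represented in software. Electrode indices, current and pulse-width values, and the safety check are illustrative assumptions, not BodyIO's design.

```python
# Purely illustrative sketch: representing a multi-channel, multi-electrode EMS
# stimulation pattern for an electrode sleeve. Electrode indices, current and
# pulse-width values, and the safety check are assumptions, not BodyIO's design.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Channel:
    anode: int            # electrode index used as anode (0..59 on a 60-pad sleeve)
    cathode: int          # electrode index used as cathode
    amplitude_ma: float   # stimulation current in milliamperes
    pulse_width_us: int   # pulse width in microseconds

@dataclass
class StimulationPattern:
    """A named gesture realised by driving several channels together."""
    gesture: str
    channels: List[Channel] = field(default_factory=list)

wrist_extension = StimulationPattern(
    gesture="wrist extension",
    channels=[
        Channel(anode=12, cathode=17, amplitude_ma=8.0, pulse_width_us=200),
        Channel(anode=31, cathode=35, amplitude_ma=6.5, pulse_width_us=200),
    ],
)

def within_current_limit(pattern: StimulationPattern, max_ma: float = 20.0) -> bool:
    """Very coarse sanity check; real EMS safety handling is far more involved."""
    return all(ch.amplitude_ma <= max_ma for ch in pattern.channels)

print(within_current_limit(wrist_extension))  # -> True
```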

Fourth, we have explored haptic feedback, in particular based on the belief (prominent in work on embodied cognition) that action and perception are tightly linked. Vibrotactile actuation is usually used to deliver simple buzzing sensations; if, however, it is tightly coupled to users’ actions, it can be used to create much richer haptic experiences. To investigate how actuation parameters relate to haptic experiences, we built a physical slider with minimal native friction, a vibrotactile actuator, and an integrated position sensor. By vibrating the slider as it is moved, we create an experience of texture between the sliding element and its track. We conducted a magnitude estimation experiment to map how granularity, amplitude, and timbre relate to the experiences of roughness, adhesiveness, sharpness, and bumpiness. We found that amplitude influences the strength of the perceived texture, while granularity and timbre can be used to shape distinct experiences.
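
The exact actuation mapping used in the project is not described here; the sketch below shows one plausible way to couple a vibrotactile drive signal to slider position so that a texture is felt only while the slider moves. Treating granularity as a spatial period and timbre as a waveform shape is an assumption made for illustration.

```python
# Purely illustrative sketch: driving a vibrotactile actuator from slider
# position so that movement produces a texture-like sensation. Treating
# granularity as a spatial period and timbre as a waveform shape is an
# assumption made for illustration, not the project's actual mapping.
import math

def drive_signal(position_mm: float, granularity_mm: float = 1.0,
                 amplitude: float = 0.5, timbre: str = "sine") -> float:
    """Return an actuator drive value in [-1, 1] for the current slider position."""
    phase = 2.0 * math.pi * position_mm / granularity_mm
    if timbre == "sine":        # smoother timbre
        wave = math.sin(phase)
    elif timbre == "square":    # harsher timbre
        wave = 1.0 if math.sin(phase) >= 0.0 else -1.0
    else:
        raise ValueError(f"unknown timbre: {timbre}")
    return amplitude * wave

# Because the drive depends only on position, the actuator output changes (and
# is felt as texture) only while the user moves the slider.
for pos in (0.0, 0.25, 0.5, 0.75):
    print(round(drive_signal(pos), 3))
```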

Fifth, we have explored some broader ideas on what HCI is and how to understand the role of our research. As part of this, we have developed a meta-scientific account of HCI research as problem-solving. We build on the philosophy of Larry Laudan, who develops problem and solution as the foundational concepts of science. We argue that most HCI research is about three main types of problem: empirical, conceptual, and constructive. We elaborate upon Laudan’s concept of problem-solving capacity as a universal criterion for determining the progress of solutions (outcomes). This offers a rich, generative, and ‘discipline-free’ view of HCI and resolves some existing debates about what HCI is or should be. It may also help unify efforts across nominally disparate traditions in empirical research, theory, design, and engineering. We applied this view to the concept of interaction, which is field-defining yet curiously confused and underdefined. We argued that attempts to define interaction directly have not produced agreeable results. Still, we extract from the literature distinct and highly developed concepts, for instance describing interaction as dialogue, transmission, optimal behavior, embodiment, and tool use. These help us think about the scope of interaction and what good interaction is. Importantly, our work has shown that these concepts are associated with different ways of construing the causal relationships between the human and the computer. Based on this discussion, we list desiderata for any future account of interaction, emphasizing the need to improve scope and specificity, to better account for the effects and agency that computers have in interaction, and to generate strong propositions about interaction.

Progress beyond the state of the art and expected potential impact (including the socio-economic impact and the wider societal implications of the project so far)

The aim of this project is to establish the scientific foundation for creating the next generations of body-based user interfaces. We seek to understand how using the body to control computers changes the way users think, and to demonstrate body-based user interfaces that offer new and better forms of interaction with computers. The long-term vision is to establish a device-less, body-based paradigm for using computers. So far we have achieved (a) a demonstration of several effects of the link between body and thinking, which might have implications for the design of future computing systems; (b) design guidance for skin-based input as well as empirical results about the effectiveness of skin input; (c) an electrical muscle stimulation prototype that demonstrates the feasibility and effectiveness of body-based output; and (d) a form of active haptic feedback, with potential applications in virtual reality and gesture interfaces.