
ERC

BODY-UI Report Summary

Project ID: 648785
Funded under: H2020-EU.1.1.

Periodic Reporting for period 1 - BODY-UI (Using Embodied Cognition to Create the Next Generations of Body-based User Interfaces)

Reporting period: 2015-05-01 to 2016-10-31

Summary of the context and overall objectives of the project

Recent advances in user interfaces (UIs) allow users to interact with computers using only their body, in so-called body-based UIs. Instead of moving a mouse or tapping a touch surface, people can use whole-body movements to navigate in games, gesture in mid-air to interact with large displays, or scratch their forearm to control a mobile phone. This project aims at establishing the scientific foundation for the next generations of body-based UIs. The main novelty in my approach is to use results and methods from research on embodied cognition. Embodied cognition suggests that thinking (including reasoning, memory, and emotion) is shaped by our bodies and, conversely, that our bodies reflect thinking. We use embodied cognition to study how body-based UIs affect users, and to increase our understanding of the similarities and differences to device-based input. From these studies we develop new body-based UIs, both for input (e.g., gestures in mid-air) and output (e.g., stimulating users' muscles to move their fingers), and evaluate users' experience of interacting through their bodies. We also show how models, evaluation criteria, and design principles in HCI need to be adapted for embodied cognition and body-based UIs.

Work performed from the beginning of the project to the end of the period covered by the report and main results achieved so far

The main results are in five areas. First, we have explored the relation between the body and thinking. Based on the literature on embodied cognition, we hypothesized that body movements could be used to infer the affect of computer users and, conversely, that affect might be induced by particular body postures. We have used sensors in commodity smartphones to estimate affect in the wild, with no training time, based on a link between affect and movement. A first experiment had 55 participants perform touch interactions after exposure to positive or neutral emotion-eliciting films; negative affect resulted in faster but less precise interactions, in addition to differences in rotation and acceleration. Using off-the-shelf machine learning algorithms, we report 89.1% accuracy in binary affective classification, grouping participants by their self-assessments. A follow-up experiment validated the findings from the first experiment; it collected naturally occurring affect from 127 participants, who again performed touch interactions.

We have further explored the influence of posture on users' thinking. The initial observation was that input and output devices are diversifying beyond keyboards and mice, leading to more variation in incidental body postures during interaction. To investigate, we ran two studies where we imposed two types of user posture through interface design: (1) we asked 44 participants to tap areas on a wall-sized display and measured their self-reported sense of power; (2) we asked 80 participants to play a game on a large touchscreen designed to measure their willingness to take risks. Neither experiment produced evidence for differences between incidental body postures. However, further analysis of our data indicates that individual personality traits might interact with the hypothesized effect.
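To illustrate the general shape of such a classification pipeline (not the project's actual code or features), the following is a minimal sketch in Python using an off-the-shelf classifier on aggregate touch features; the feature names and the synthetic data are purely hypothetical.

```python
# Hypothetical sketch: binary affect classification from touch-interaction
# features (speed, precision, rotation, acceleration). Features and data
# are illustrative placeholders, not the project's dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Each row is one participant session; columns are aggregate features:
# [mean tap speed, mean tap error, device rotation, device acceleration]
n_sessions = 120
X = rng.normal(size=(n_sessions, 4))
# Labels: 0 = neutral/positive affect, 1 = negative affect (self-assessed).
y = rng.integers(0, 2, size=n_sessions)

# Off-the-shelf classifier; the accuracy printed here is meaningless
# because the data are synthetic.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"Cross-validated accuracy: {scores.mean():.3f}")
```

In practice, features would be aggregated per interaction window from the phone's touch and inertial sensors, and labels would come from participants' self-assessments, as in the experiments described above.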

Second, we have explored skin input. The use of the skin as an interaction surface is gaining popularity in the HCI community, in part because the human skin provides an ample, always-on surface for input to smart watches, mobile phones, and remote displays. We have conducted an in-depth study in which participants placed and recalled 30 items on the hand and forearm. We found that participants recalled item locations by using a variety of landmarks, personal associations, and semantic groupings. Although participants most frequently used anatomical landmarks (e.g., fingers, joints, and nails), recall rates were higher for items placed on personal landmarks, including veins and pigments. We further found that personal associations between items and skin properties at a given location (e.g., soft areas or tattoos) improved recall. To offer an alternative perspective on how we might design on-body interactions, we conducted a questionnaire study asking if, how, and why people mark their skin. We found that visibility and ease of access were important factors for choosing to mark the body. We also found that while some participants consider marking the body a private activity, most participants perceive such markings as a public display. This tension between the personal nature of on-body interaction and the skin as a public display, as well as hedonic uses of body markings, presents interesting design challenges.

Third, we have explored body-based output through electrical muscle stimulation (EMS). Reading signals from muscles and writing signals to muscles has recently received much attention in HCI. We developed BodyIO, a prototype wearable sleeve of 60 electrodes for asynchronous electromyography (EMG) and electrical muscle stimulation (EMS). It enables the creation of multi-channel, multi-electrode stimulation patterns that support a wide range of wrist, hand, and finger gestures through one device. BodyIO also explores the interplay of reading and writing bio-electrical signals. This prototype lays the foundation for using multi-electrode stimulation patterns for increased resolution, and for combining EMG and EMS for auto-calibration.
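As a rough illustration of how multi-channel, multi-electrode stimulation patterns might be represented in software, the sketch below defines a simple pattern data structure in Python. The electrode indices, intensities, and timings are invented for the example and do not describe the BodyIO prototype's actual hardware interface.

```python
# Hypothetical sketch of a multi-electrode stimulation pattern representation.
# All values (electrode numbers, currents, durations) are illustrative only.
from dataclasses import dataclass
from typing import List, Set

@dataclass
class StimulationStep:
    electrodes: Set[int]   # active electrodes (e.g., 0..59 on a 60-electrode sleeve)
    intensity_ma: float    # stimulation current in milliamps
    duration_ms: int       # how long this step is held

@dataclass
class GesturePattern:
    name: str
    steps: List[StimulationStep]

# Example: a made-up two-step pattern meant to flex the wrist, then curl a finger.
wrist_then_finger = GesturePattern(
    name="wrist_flex_then_index_curl",
    steps=[
        StimulationStep(electrodes={12, 13, 20}, intensity_ma=8.0, duration_ms=300),
        StimulationStep(electrodes={34, 35}, intensity_ma=6.5, duration_ms=200),
    ],
)

def play(pattern: GesturePattern) -> None:
    """Print the pattern; a real driver for the stimulation hardware would replace this stub."""
    for step in pattern.steps:
        print(f"{pattern.name}: electrodes {sorted(step.electrodes)} "
              f"at {step.intensity_ma} mA for {step.duration_ms} ms")

play(wrist_then_finger)
```

Separating patterns from the hardware driver in this way is one plausible design choice; it would, for instance, allow EMG readings to adjust intensities per user before a pattern is played, in the spirit of the auto-calibration mentioned above.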

Progress beyond the state of the art and expected potential impact (including the socio-economic impact and the wider societal implications of the project so far)

The aim of this project is to establish the scientific foundation for creating the next generations of body-based user interfaces. We seek to understand how using the body to control computers changes the way users think, and to demonstrate body-based user interfaces that offer new and better forms of interaction with computers. The long-term vision is to establish a device-less, body-based paradigm for using computers. So far we have achieved (a) a demonstration of several effects of the link between body and thinking, which might have implications for the design of future computing systems; (b) design guidance for skin-based input as well as empirical results about the effectiveness of skin input; (c) an electrical muscle-stimulation prototype which demonstrates the feasibility and effectiveness of body-based output; and (d) a form of active haptic feedback, with potential applications in virtual reality and gesture interfaces.