Community Research and Development Information Service - CORDIS


MAMEM Report Summary

Project ID: 644780
Funded under: H2020-EU.2.1.1.4.

Periodic Reporting for period 1 - MAMEM (Multimedia Authoring and Management using your Eyes and Mind)

Reporting period: 2015-05-01 to 2016-10-31

Summary of the context and overall objectives of the project

Traditionally, human-computer interaction has been grounded on the principle of a healthy neuromuscular system allowing access to conventional interface channels such as the mouse and keyboard. More recently, in an effort to make human-computer interaction more natural, other types of control mechanisms have been brought to the forefront of interest, such as touch-screens, gesture-based interfaces and speech-driven interfaces. However, the potential of these interfaces is also limited by the prerequisite of a healthy neuromuscular system in the case of gestures, or of an application that can be easily operated through spoken commands. It is only recently that the evolution of devices recording accurate information about eye movements and the brain's electrical signals has given a new perspective on the control channels that can be utilized for interacting with a computer. The necessity of using these alternative channels has been mainly motivated in the context of assisting people with disabilities. MAMEM’s overarching goal is to integrate people with disabilities back into society by endowing them with the critical skill of managing and authoring multimedia content using novel and more natural interface channels. These channels will be controlled by eye movements and mental commands, significantly increasing the potential for communication and exchange in leisure (e.g. social networks) and non-leisure contexts (e.g. the workplace).

Work performed from the beginning of the project to the end of the period covered by the report and main results achieved so far

The work performed during the reporting period has taken place along the following axes: a) studying the requirements of our end-users, so as to determine the functional requirements of the MAMEM platform, b) defining a set of Personas coupled with persuasion and compliance strategies so as to drive the design of our novel application interfaces, c) installing and optimizing the sensor infrastructure for acquiring reliable bio-signals (i.e. EEG, eye-tracking and GSR), d) designing and implementing MAMEM’s architecture towards an on-line BCI system, e) developing early prototypes of a BCI-enhanced web browser, and f) deciding on the methodology and indicators for quantifying changes in the social integration level of our end-users.

- In studying the requirements of our end-users, two rounds of studies have been performed by MAMEM’s clinical partners: (1) a literature review and focus groups, and (2) questionnaires administered to end users and their caregivers.
- In defining a set of Personas coupled with the appropriate persuasion strategies we have worked along two parallel tasks: a) evaluating the patient group’s attributes, needs and habits, as obtained from the user groups studies, b) defining MAMEM’s persuasion strategy.
- In setting up the infrastructure, all relevant partners were equipped with the necessary EEG, eye-tracking and GSR devices so as to have identical settings across all sites. A number of administrative obstacles also had to be removed, which was achieved with the help of MAMEM's project officer.
- In designing the architecture for on-line BCI applications we have combined four different layers: (1) sensors, (2) middleware, (3) interaction, and (4) applications. The additional challenges that we had to address included the requirement of acquiring signals in a synchronized manner, as well as the fact that the entire platform would have to operate in an on-line mode.
- For eye-tracking-based interaction we acquire the user’s eye-gaze information in real time, use it to generate gaze events, and analyze the data to deduce higher-level events. In detecting these events we have relied on accumulated dwell-time selection, task-specific thresholds, object size and position, and several other interface-optimization algorithms for eye-tracking signals.
- For EEG-based interaction we have replicated state-of-the-art results using our own sensor infrastructure and our own experimental process. Our emphasis has been placed on three directions: a) steady-state visually evoked potentials (SSVEPs), b) EEG-based BCIs that rely on motor imagery, and c) error-related potentials (ErrPs).
- In designing the prototype applications for novel interaction we have decided that the most generalizable and scalable solution would be to rely on a container application such as a web browser. The great advantage of this option is that the MAMEM back-end only has to interface with one front-end framework, which can host many different kinds of applications (e.g. browsing, photo editing, social media sharing, messaging, multimedia reproduction, etc.).
- In realising multi-modal interaction we have investigated how to combine the different modalities, setting the objective of using the EEG signals to compensate for the shortcomings of eye-tracker-based interaction. More specifically, in order to alleviate the Midas Touch problem, which is common in gaze-based interaction, the EEG signals can be used to switch between reading and navigation modes. In another approach, EEG can offer an easy and natural solution for undoing/backspacing based on error-related potentials (ErrPs).
- In establishing the methodology for assessing social integration, we have determined the approach that will allow us to assess any change in the level of social integration and defined the indicators for quantifying this change.
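The accumulated dwell-time selection mentioned above can be illustrated with a short sketch. All names and thresholds here are illustrative assumptions, not MAMEM's actual implementation (the project's eyeGUI and GazeTheWeb components are separate software): a per-target dwell counter grows while the gaze stays on the target, decays when it leaves, and fires a selection once a task-specific threshold is exceeded.

```python
# Hypothetical sketch of accumulated dwell-time selection for gaze events.
# Class and parameter names are illustrative, not MAMEM's actual API.

from dataclasses import dataclass

@dataclass
class Target:
    x: float      # centre x of the interface element (pixels)
    y: float      # centre y (pixels)
    size: float   # half-width of the square hit area (pixels)

class DwellSelector:
    """Accumulates gaze dwell time per target and fires a selection
    once the accumulated time exceeds a task-specific threshold."""

    def __init__(self, threshold_s: float = 1.0, decay: float = 0.5):
        self.threshold_s = threshold_s  # task-specific dwell threshold (s)
        self.decay = decay              # dwell leak rate while off-target
        self.dwell = {}                 # per-target accumulated dwell (s)

    def update(self, gaze_x, gaze_y, targets, dt):
        """Feed one gaze sample; return the selected Target or None."""
        for i, t in enumerate(targets):
            on_target = abs(gaze_x - t.x) <= t.size and abs(gaze_y - t.y) <= t.size
            if on_target:
                self.dwell[i] = self.dwell.get(i, 0.0) + dt
                if self.dwell[i] >= self.threshold_s:
                    self.dwell[i] = 0.0   # reset after firing the event
                    return t
            else:
                # Decay instead of hard reset, so brief glances away
                # do not discard all accumulated progress.
                self.dwell[i] = max(0.0, self.dwell.get(i, 0.0) - self.decay * dt)
        return None
```

In practice the threshold, hit-area size and decay rate would be tuned per task and per user, which is what the "task-specific threshold" and "object size, position" factors above refer to.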
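The multi-modal scheme described above, using EEG to switch modes and ErrPs to undo, can likewise be sketched as a small state machine. Event names and the controller interface are assumptions for illustration; in the real system the "mode_switch" and "errp" events would come from EEG classifiers, not be passed in as strings.

```python
# Hypothetical sketch of the multi-modal control scheme: an EEG-derived
# command toggles between "reading" and "navigation" modes (mitigating
# the Midas Touch problem), and a detected error-related potential (ErrP)
# undoes the last gaze-triggered action. Names are illustrative.

class MultiModalController:
    def __init__(self):
        self.mode = "reading"   # gaze selections are ignored in this mode
        self.history = []       # gaze-triggered actions that an ErrP can undo

    def on_eeg_event(self, event: str):
        if event == "mode_switch":   # e.g. a motor-imagery command
            self.mode = "navigation" if self.mode == "reading" else "reading"
        elif event == "errp" and self.history:
            return ("undo", self.history.pop())  # revert the last action
        return None

    def on_gaze_selection(self, target: str):
        # Gaze only triggers actions in navigation mode; in reading mode
        # the user can inspect content without accidental activations.
        if self.mode == "navigation":
            self.history.append(target)
            return ("activate", target)
        return None
```

The design point is that the eye tracker remains the fast pointing channel, while the slower but deliberate EEG channel gates when that pointing has side effects.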

Progress beyond the state of the art and expected potential impact (including the socio-economic impact and the wider societal implications of the project so far)

In order to achieve its ambitious objective MAMEM is pushing the state-of-the-art in a number of related areas, covering the full spectrum of end-user analysis, novel interface design, implementation of interaction paradigms and assessment of the results.
- With respect to end-user analysis, while there have been many studies focusing on the requirements of people with disabilities, only a few of them focus on computer use, habits and requirements. Our activities were designed to extract information about: a) what is currently missing from the existing assistive devices, b) which activities are most frequently performed in front of a computer, c) what are the most important elements of an assistive device in terms of ergonomics, etc.
- With regard to multi-modal interaction in BCIs, we may identify the following contributions: a) designing an open architecture for on-line and multi-modal BCIs, b) implementing a novel framework that extends the functionality of a web browser towards augmenting the interaction capabilities of rendered applications with eye-gaze control, and c) proposing novel ways to combine eye-gaze control with EEG-based interaction.
- On the application-side, MAMEM’s contribution to state-of-the-art has been the development of the eyeGUI library and the GazeTheWeb – Browser. By combining these modules MAMEM has implemented a generic framework allowing every application that is rendered through a browser-like environment to be augmented with custom-made interfaces facilitating BCI interaction.
- In assessing social integration, we have pushed the state-of-the-art by developing a model that incorporates the definition of disability, the definition of social inclusion, and a set of social inclusion indicators that can be effectively measured. Although the literature is rather rich when it comes to healthy people, it was found to be sparse and deeply lacking when it comes specifically to people with disabilities.
