Community Research and Development Information Service - CORDIS


Executive Summary:
Cerebral Palsy (CP) is one of the most frequent conditions in childhood, with an incidence of 2 per 1,000 live births. Of the roughly 15 million persons with CP worldwide, about 1.3 million live in the EU. This neurological disorder affects body movement, balance and posture and is almost always accompanied by other cognitive or sensory impairments, such as intellectual disability, deafness and vision problems. The severity of these problems varies widely, from very mild and subtle to very profound.
These disabilities lead to an inactive lifestyle which reduces the patient’s physical health, social participation, and quality of life. Therapy costs can reach €45,000 per year, a cost that most families cannot afford. Playing video games is a useful treatment that promotes and maintains a more active and healthy lifestyle in these persons. This in turn contributes to reducing medical and social care costs and improving the well-being of their families. However, most video games remain inaccessible to them.
GAME-ABLING is a software tool for creating interactive video games in an intuitive manner, so that non-expert personnel (e.g. parents) can develop customized games. Games are controlled using body movements and voice. In order to do so, Computer Vision and image processing techniques have been developed to improve accessibility.
GAME-ABLING has created a web portal to build a gaming e-community for disabled users, where they can access the software tool, play games and upload and share their own creations. The SMEs have envisaged exploiting GAME-ABLING through a subscription business model granting full access to the web portal.
GAME-ABLING represents a business opportunity for the SMEs, who expect to reach 0.5% world market penetration and an accumulated income of €1.1 million in 5 years, generating new business lines for the SME partners as well as international recognition. In addition, GAME-ABLING will contribute to reducing medical and social care costs in the EU by at least 0.5%, which represents an additional €50 million for people with CP alone.
During the entire project duration, dissemination activities were carried out by project partners to promote GAME-ABLING and present the project progress and results to potential customers, the scientific community and the general public.

Project Context and Objectives:
The GAME-ABLING project started in December 2012 and lasted 24 months.
WP1 “System Specification”: During the first 3 months of the project, all the SMEs participated actively in the definition of very accurate system specifications covering several aspects of the project. The work focused on the definition of the system architecture, the impairment modelling and game case study, and the definition of the ethical aspects of the project related to personal data collection, processing, dissemination, and the training activities. With this information the RTDs defined a series of protocols to handle the personal data generated during this project. All the information gathered was delivered in documents D1.1, D1.2, D1.3, D2.4 and D8.4.
WP2 “Image Analysis Algorithms”: The work performed in this period to capture the movements of the different parts of the body consists of four major phases, which we repeated until we obtained the optimal efficiency and accuracy. These four phases are data collection, developing image analysis algorithms, code optimization and testing. In addition, we carried out the analysis of head and hands in parallel, since they are two independent problems. We describe them in D2.1 and D2.3.
Data Collection. We created a database of videos recorded at gaming sessions in the facilities of the APPC. This database allows testing the algorithms on video sequences in conditions similar to those encountered during the use of the games. Furthermore, we annotated some images of this dataset manually to train our head and body-part detectors. We also collected another dataset at the APPC, developed annotation software and annotated it at the URV. Later, we used the database for refining our classification model. These databases were created scrupulously following the premises described in D1.3 and D7.2, which deal with the ethical issues involved in obtaining and processing such data from Cerebral Palsy patients for image analysis purposes.
Image Analysis. During the first period of the project, we developed an algorithm for analyzing and tracking head movements based on skin color segmentation and a state-of-the-art face detection method. Our algorithm works with standard VGA webcams and with depth cameras. Although the algorithm was accurate and fast with healthy people, there were some practical challenges when we tested it in real scenarios on patients with various severity levels. We observed that the algorithm works accurately with level 1 patients, those with the least impairment. However, level 4 and level 5 patients (the highest levels) were not able to control the games, because their facial and body-part appearance as well as their movements differ considerably from those of patients with severity levels 1 and 2. Regardless of the severity level, there were also some issues with the ambient light.
In order to solve these difficulties we used the depth information to exclude the irrelevant pixels from processing. In this way, we could increase the computational efficiency and also make the algorithm more robust against illumination changes. Then, we used the shape information to find the head of the patients. Using this approach, we could deal with one of the important challenges faced in the first period: patients with a higher severity level are barely able to control their heads and hold them in a frontal or near-frontal position. As a result, the state-of-the-art method was not able to detect the face and, consequently, the user was not able to control the games.
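The depth-based filtering described above can be sketched as follows. This is a minimal illustrative example, not the project code; the function names and the near/far working range are assumptions:

```python
import numpy as np

def segment_user(depth_m, near=0.5, far=1.5):
    """Mask of pixels whose depth (in meters) lies in the user's working
    range; everything outside this band is treated as background."""
    return (depth_m >= near) & (depth_m <= far)

def suppress_background(gray, depth_m, near=0.5, far=1.5):
    """Zero out background pixels so later head/shape analysis only sees the user."""
    out = gray.copy()
    out[~segment_user(depth_m, near, far)] = 0
    return out
```

Restricting every later stage to the surviving pixels is what yields both the speed gain and the robustness to ambient light mentioned above.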
In contrast, using our novel algorithm, we coped with this problem and detected the face even in non-frontal positions. Further technical detail on the head detection procedure is provided in D2.3. Having detected the face, our next goal was to find the facial components, such as the eyes. In general, this is achieved by examining image patches extracted from the face region. The extracted image patches represent a raw appearance that is not directly useful. To extract a more compact and discriminative representation from the image patches, we need to apply a feature extraction algorithm to each patch before feeding it into the classification model.
With respect to hand movement analysis, several algorithms were implemented for both RGB and RGB+D cameras. First, we considered the problem of analyzing color images from conventional RGB cameras to obtain information about the motion of hands. Our approach focused on estimating image regions whose color was similar to that of human skin. In addition to skin color information, we extracted motion information to detect the areas that presented significant movement. Both types of cues were later combined into a single probability image that was used to segment candidate regions. Connected component analysis extracted features of the candidate regions which allowed us to identify the position of several body parts (the head and both hands).
More specifically, skin-color segmentation was accomplished by applying a series of machine learning algorithms that learned the color distributions explaining the skin component of the images. First, a coarse segmentation based on hue levels computed from the HSV color transformation classified pixels into skin and the rest of the image. A sample of the color in the areas surrounding these pixels was afterwards used in the learning steps of classification algorithms such as k-Means (unsupervised) and SVM (supervised). Once the skin color distributions were learned, in the classification step images were automatically segmented into skin-color regions and the rest. This process was accelerated by several shortcuts: classifying only in the color space and creating a look-up table that allowed a faster mapping between image pixels and classes. The best results were obtained using a supervised classification scheme.
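The coarse hue-based segmentation and the look-up-table shortcut can be illustrated with the following sketch. The hue band and the 5-bit color quantization are illustrative assumptions; the project's refined stage used trained k-Means/SVM classifiers instead of the fixed band:

```python
import numpy as np

def rgb_to_hue(img):
    """HSV hue channel (in degrees) of an RGB image array of shape (..., 3)."""
    img = img.astype(np.float32) / 255.0
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    mx, mn = img.max(axis=-1), img.min(axis=-1)
    diff = mx - mn + 1e-8
    hue = np.zeros_like(mx)
    m = mx == r
    hue[m] = (60.0 * (g - b) / diff % 360.0)[m]
    m = mx == g
    hue[m] = (60.0 * (b - r) / diff + 120.0)[m]
    m = mx == b
    hue[m] = (60.0 * (r - g) / diff + 240.0)[m]
    return hue

def coarse_skin_mask(img, lo=0.0, hi=50.0):
    """Coarse segmentation: keep pixels whose hue falls in a skin-like band."""
    hue = rgb_to_hue(img)
    return (hue >= lo) & (hue <= hi)

def build_skin_lut(classifier, bits=5):
    """Precompute the class of every quantized RGB color once, so per-frame
    segmentation becomes a table look-up instead of a per-pixel computation."""
    levels = 1 << bits
    step = 256 // levels
    grid = np.arange(levels) * step + step // 2      # representative colors
    r, g, b = np.meshgrid(grid, grid, grid, indexing="ij")
    colors = np.stack([r, g, b], axis=-1).reshape(1, -1, 3).astype(np.uint8)
    return classifier(colors)[0].reshape(levels, levels, levels)
```

The look-up-table trick is independent of the classifier plugged in, which is why it also served the supervised scheme that finally gave the best results.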
Several approaches were tested to detect movement in the video sequences. The most basic one consisted of frame differencing and motion integration. This approach generates an image with all the differential movements between two consecutive frames, which are temporally accumulated to obtain a more precise region. We also tried more sophisticated background/foreground algorithms, which consider the color distribution of the pixels involved in the movement. These algorithms also include temporal learning and forgetting components that allow the incorporation of new regions into the moving areas and the removal of those showing no movement after a while. The work focused on the integration of such approaches into the common framework and their adaptation to the type of video sequences in this project. Finally, appearance and motion cues are combined into a single probability map, which is later segmented to obtain the candidate regions for analysis. These regions are grouped according to their position and classified as belonging to a part of the body.
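A minimal sketch of the basic frame-differencing approach with temporal accumulation might look like this; the blending factor and threshold are assumed values, not the project's:

```python
import numpy as np

class MotionAccumulator:
    """Frame differencing with exponential temporal accumulation: new frame
    differences reinforce the motion map, and old motion slowly decays."""

    def __init__(self, alpha=0.5, threshold=15.0):
        self.alpha = alpha          # blending factor: higher = faster forgetting
        self.threshold = threshold  # minimum accumulated motion to report
        self.prev = None
        self.motion = None

    def update(self, frame):
        """Feed one grayscale frame; return the boolean moving-region mask."""
        frame = frame.astype(np.float32)
        if self.prev is None:
            self.prev = frame
            self.motion = np.zeros_like(frame)
        else:
            diff = np.abs(frame - self.prev)
            self.motion = self.alpha * diff + (1.0 - self.alpha) * self.motion
            self.prev = frame
        return self.motion > self.threshold
```

The exponential blend gives exactly the learning/forgetting behavior described above: a region that stops moving fades out of the mask after a few frames instead of disappearing instantly.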
The work in the first period was devoted to extracting the image position of all the body parts (head, left and right hands) simultaneously from recorded video sequences. In the second period we moved the algorithms to analyzing the video sequences directly from camera streams, bearing in mind that the ultimate goal was the control of a game. As a consequence of several testing sessions with patients in real conditions and suggestions from the caregivers, we decided to simplify the process and obtain only the motion of a single moving region in the image. The main reasons for this decision are twofold: reducing the computational burden of the whole image analysis and increasing the robustness of the process. The first reason is due to the constraint that these algorithms must work on a conventional PC and will probably run in parallel with other processes. The second is due to the fact that the user will only use one part of the body to control the game, and any other motion, voluntary or involuntary, might compromise the robustness of the process.
The second type of video stream we considered in this project is that of RGB-D cameras, that is, the Kinect and its SDK. These cameras provide depth information, which helps greatly in segmenting the user from the rest of the scene. They also provide a skeleton stream with information, both 2D and 3D, on a series of joint points that correspond to certain parts of the body. The main problems with using the Kinect are that it demands a great deal of computational power from the computer, that seamlessly controlling the camera within the framework is not straightforward (nor is the extraction and processing of the information obtained from the camera), and that using the SDK implies some commitment to the platform employed (Windows).
The work done in this stage consisted first in creating a class that allowed transparent control of the camera in the same terms as the previous RGB cameras, which facilitated the inclusion of Kinects in the Game-Abling framework. The retrieval of the streams provided by this camera (color, depth and skeleton) was also tackled and integrated into the general framework. Finally, some filtering was performed to obtain the corresponding position for controlling games. The main difficulties were in the integration of these processes within the complete framework, performed in WP3, which required much testing to allow the functioning of different threads controlling different devices.
Optimization. Powerful feature extraction methods are computationally expensive and cannot be used directly in real-time applications such as Game-Abling. We efficiently implemented one of these powerful feature extraction methods to examine the image patches. Next, we needed to train a classification model for detecting the facial components and rejecting the non-facial parts. For this purpose, we trained different models, such as support vector machines, Adaboost, Gentleboost, Logitboost and Random Forest, and tested their run-time efficiency and classification accuracy. Finally, we chose Adaboost as our final model because of its speed and accuracy.
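For illustration, the kind of boosted classifier finally selected can be sketched with a toy AdaBoost over decision stumps. This is a didactic stand-in, not the optimized facial-component detector developed in the project:

```python
import numpy as np

def train_adaboost(X, y, n_rounds=10):
    """Minimal AdaBoost over decision stumps; labels y must be in {-1, +1}."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)
    stumps = []
    for _ in range(n_rounds):
        best = None
        for f in range(d):                           # exhaustive stump search
            for thr in np.unique(X[:, f]):
                for sign in (1, -1):
                    pred = sign * np.where(X[:, f] >= thr, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, f, thr, sign, pred)
        err, f, thr, sign, pred = best
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-10))
        w = w * np.exp(-alpha * y * pred)            # mistakes gain weight
        w /= w.sum()
        stumps.append((alpha, f, thr, sign))
    return stumps

def adaboost_predict(stumps, X):
    """Sign of the weighted vote of all stumps."""
    score = np.zeros(X.shape[0])
    for alpha, f, thr, sign in stumps:
        score += alpha * sign * np.where(X[:, f] >= thr, 1, -1)
    return np.sign(score)
```

The appeal for real-time use is visible in the prediction path: each round contributes only a threshold comparison and a weighted vote, which is far cheaper than evaluating an SVM kernel per patch.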
Besides this algorithm, we also modified the face tracking algorithm provided by the Microsoft Kinect SDK and developed our second face tracking method. For example, the tracking algorithm of the Kinect SDK does not provide the 2D/3D motion and the face location; we computed this information with additional processing and made both algorithms consistent. The first algorithm, developed entirely at the URV, is cross-platform, fast to initialize and independent of face detection. It also provides more information about the motions. These algorithms provide extra information about 2D and 3D motions beyond what was stated in the Description of Work.
Testing. For evaluation purposes, three games (apart from the games developed for the project) were developed at the URV and used by patients with different severity levels to play using head motions. The technical details of the implementation have been explained in deliverable D2.3.
The results have been documented in D2.2 as part of the backing for the GO decision. Apart from the work done on vision algorithms, at the beginning of the project the ethical issues concerning the processing of video sequences were addressed as part of the ethical aspects required by this project (D2.4).
WP3 “User Input Modules”: A series of libraries have been developed to access different input devices, mainly game controller devices and the microphone. These libraries allow an application to read inputs from the list of devices specified by the customer partners: Nintendo Balance Board, usual game controllers (joysticks, gamepads, switch buttons), Nintendo Wiimote controller, keyboard and mouse, and microphone. The inputs from these devices can now be read, adapted when necessary and used by games to perform different game actions. This list of devices has already been integrated by means of static libraries into the game development framework (WP4). As regards the audio devices, a simple method to capture and measure the intensity of the voice, or of blows into the microphone, allows this device to be used as an input to control some game actions. Additional features have been developed, such as audio effects and an easy library to play MIDI musical notes with a selectable instrument, for use in music games.
In the second period, the final part of the integration process, the inclusion of color and Kinect cameras, was accomplished. Two different modules for integrating Microsoft Kinect and color cameras were developed. The system is able to detect any device connected to the computer and select it to control the games.
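The simple intensity-based voice/blow detection described for the microphone input can be sketched as follows; the RMS measure and the threshold value are illustrative assumptions:

```python
import math

def voice_intensity(samples):
    """Root-mean-square intensity of a mono audio buffer of normalized floats."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def blow_detected(samples, threshold=0.1):
    """Treat any buffer whose RMS intensity exceeds the threshold as a
    voice/blow event that can trigger a game action."""
    return voice_intensity(samples) > threshold
```

Because only the intensity matters, not the content of the speech, the same mechanism works for patients who can vocalize but not articulate words.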
WP4 “Game Development Framework”: The main work performed in WP4 consists of the development of the common game framework in the programming language C++ using the SFML programming framework. A configurable launcher has been created for two different kinds of games, the XY and the "Spot the difference" games. This launcher is capable of loading the game description and assets, loading different input modules, displaying a control panel and finally allowing the user to play the game. The dynamic loading of dynamic libraries (DLLs) has been implemented to allow flexibility in the development and use of the game platform, along with the exposure of each DLL's configuration options to the control panel.
The work in the second year focused on the development of a software framework that assists in the development of configurable games and the integration of general purpose and specialized input devices in order to support a wide range of patients/disabilities. The approach followed was to abstract user input and to decouple devices from games, which facilitates the adaptation of games to future use cases. Moreover, a modular architecture was designed to allow for the independent development and deployment of base games and input modules. This is important as it helped the development throughout the project lifetime, and it also facilitates future developments, allows the introduction of a licensed development scheme and the commercialization of modules, and helps the deployment process when different platforms are involved and when upgrading an existing installation. Finally, WP4 also provided a central application that coordinates all existing modules and presents the user with a single environment through which to configure and control game sessions.
WP5 “Game Authoring Tool”: The main work completed in WP5 consists of the development of the authoring tool for the XY game as a web application, capable of being executed both online and offline. Using HTML5 and JavaScript, in combination with a number of JavaScript libraries, a user friendly method was implemented to allow novice users to create interactive games in the context of the XY game. The authoring tool allows the loading of user-created assets, such as avatars, sounds and backgrounds, the setting of their position in the window and the configuration of the game logic. Finally, the authoring tool saves a configuration file, which is ready to be loaded by the common game framework developed in WP4.
The main objective was to provide an intuitive, easy-to-use authoring environment, requiring no special expertise, that combines configurable games and the various game elements into complete, ready-to-play games. The approach followed was the development of a web application, also capable of being executed offline and supporting all major browsers, with a step-by-step approach to developing the games. A common look and feel was adopted across all base games, although the gameplay of each base game is considerably different.
WP6 “Analysis Tool and Activity Database”: The database schema used to save the data has been designed and implemented in the MySQL database management system. A module was developed for the common game framework which populates this database with data, while the activity reporting tool reads it in order to create graphs related to the patient's performance.
The activity during the second year was dedicated to the development of a relational database capable of storing two main sets of information: the configuration of individual games/users and the actual data of the played games. The purpose of this database is to use the data to analyze the development of the patient. The analysis tool was developed in order to study the information stored in the database. This tool allows specialized personnel (therapists, caregivers and psychologists) to access, combine and visualize information stored in the database according to their own criteria, so they get the best possible profit from the activity database. It is a graphical web interface with diagrams and statistics which informs interested users of the evolution of each patient; based on this information, they are able to further customize the games for each patient.
The second main objective of WP6 was the development of an algorithm, and its respective software, to anonymize the data in the database, in order to allow external experts to study the data without revealing the identity of the patients. Moreover, one of the main tasks of the URV was the creation and maintenance of a large multimedia database containing specific data of the APPC patients. These data were used by our computer-vision methods in order to achieve the requirements specified by the project and also to improve the quality of the algorithms. The following paragraphs summarize the tasks and achievements of the project.
As mentioned, one of the main tasks of the URV was the creation and maintenance of a large multimedia database with specific data of the APPC patients. This database was built using data gathered from real patients at the APPC facilities. Data are securely stored in the URV facilities, more concretely in our laboratory, which only authorized personnel can enter with an electronic key with permissions. Here, we store both paper documents and multimedia files. The multimedia data are stored in a computer with no internet connection and a removable SED (self-encrypting device) which encrypts the stored data, making them impossible to use without the corresponding decryption key, held only by the responsible person. As Spanish law requires (i.e. the Spanish LOPD), we produced complete documentation regarding the manual and multimedia data that needed to be stored in our facilities. This documentation can be found in the Annex Section of D6.2.
Another contribution was related to the Game-Abling database: we helped in its design and implemented Role-Based Access Control (RBAC). Access is thus controlled depending on the roles that users have within the system. Moreover, we helped with and supervised the way the database should store information, recommending not to store sensitive data, such as the relation between patients and their level of palsy.
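As an illustration of the RBAC idea, permissions attach to roles and users acquire permissions only through their roles. This is a minimal sketch with invented role and permission names; the actual implementation lives in the Game-Abling database layer:

```python
class AccessControl:
    """Minimal role-based access control: users never hold permissions
    directly, only through the roles assigned to them."""

    def __init__(self):
        self.role_perms = {}   # role -> set of permissions
        self.user_roles = {}   # user -> set of roles

    def grant(self, role, perm):
        self.role_perms.setdefault(role, set()).add(perm)

    def assign(self, user, role):
        self.user_roles.setdefault(user, set()).add(role)

    def allowed(self, user, perm):
        return any(perm in self.role_perms.get(r, set())
                   for r in self.user_roles.get(user, ()))
```

Revoking a role from a user, or a permission from a role, then propagates automatically, which is why RBAC scales better than per-user permission lists.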
Regarding the private data release module, we studied the state of the art in Statistical Disclosure Control (SDC) in order to find suitable methods to protect the kind of data being stored. Although many candidates were possible, we focused on two well-known microaggregation algorithms, namely MDAV and VMDAV. With such methods, we can guarantee k-anonymity to the users involved, thus guaranteeing their privacy. We implemented and tested these algorithms on toy models with regard to information loss and privacy protection, using well-known state-of-the-art metrics (i.e. sum of squared errors and disclosure risk). Finally, we included options in the algorithms to deal with non-numerical data. The code can be modified to meet further requirements. Detailed explanations of our work on the aforementioned tasks are presented in the D6.2 document.
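The MDAV microaggregation idea can be sketched as follows: records are partitioned into groups of at least k, and each record is replaced by its group centroid, so no masked record can be told apart from at least k-1 others. This is a simplified sketch (the handling of the final group deviates from the standard algorithm, and all names are illustrative), not the project implementation from D6.2:

```python
import numpy as np

def mdav_groups(X, k):
    """Partition the row indices of X (n >= k records) into groups of at
    least k with a simplified MDAV heuristic; the last group may hold up
    to 3k-1 records (a simplification of the standard final step)."""
    remaining = list(range(len(X)))
    groups = []
    while len(remaining) >= 3 * k:
        pts = X[remaining]
        centroid = pts.mean(axis=0)
        # r: record farthest from the centroid; s: record farthest from r
        r = remaining[int(np.argmax(np.linalg.norm(pts - centroid, axis=1)))]
        s = remaining[int(np.argmax(np.linalg.norm(pts - X[r], axis=1)))]
        for anchor in (r, s):
            dists = {i: float(np.linalg.norm(X[i] - X[anchor])) for i in remaining}
            grp = sorted(remaining, key=dists.get)[:k]   # anchor plus k-1 nearest
            groups.append(grp)
            remaining = [i for i in remaining if i not in grp]
    groups.append(remaining)
    return groups

def microaggregate(X, k):
    """Replace every record by its group centroid, yielding k-anonymity
    for the masked attributes."""
    X = np.asarray(X, dtype=float)
    masked = X.copy()
    for grp in mdav_groups(X, k):
        masked[grp] = X[grp].mean(axis=0)
    return masked
```

The sum of squared errors between X and the masked output is the information-loss metric mentioned above; choosing k trades privacy against this loss.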
Finally, although our tasks finished at month 20 and were reported in D6.2, we continued our contributions to the project. On the one hand, we finalized code optimizations and adaptations of the obfuscation algorithms; the code can be adapted for categorical or ordinal attributes. On the other, we maintained the secure storage of the data in our facilities. Moreover, we used these data to train our models and refine the computer-vision algorithms. Finally, we made a remarkable contribution to the test and revision loop, which involved testing the algorithms at the APPC facilities and updating the secure database.
WP7 “Evaluation”: The Game Authoring Tool (GAT) and the rehabilitation games were tested with patients with Cerebral Palsy of different levels of motor impairment and age by specialists from the International Clinic of Rehabilitation (ICR), the Associació Provincial de Paràlisi Cerebral de Tarragona (APPC) and other partners. The GAT was evaluated with the following procedure. Therapists were taught how to use the GAT and supplied with the assets to develop games. They also learned how to use games with different gaming hardware: balance board, Kinect sensor, camera, special goniometer joystick, and others. They developed their own games, which were checked by a supervisor. In total, 12 games were selected for testing on patients with Cerebral Palsy. The therapists filled out a questionnaire that was analyzed, and conclusions about the GAT's usability were drawn. The games were evaluated on 32 patients with Cerebral Palsy with different levels of motor disability. Every patient participated in 6 to 8 gaming sessions of 15-20 min duration under the supervision of a therapist. During the training sessions different gaming hardware was used: balance board, Kinect sensor, camera, special goniometer joystick and keyboard or mouse. Parents were interviewed and filled out a questionnaire that was later analyzed. Parents and therapists also gave important suggestions for future developments and improvements.

Project Results:
The following is the list of the main S&T results according to the goals of the project.

Play and physical activity are vital to disabled children and young people's health and wellbeing.
However, their degree of participation in active leisure activities is often extremely limited, due to the limited range of adapted activities and the intensive support this population requires to participate in leisure activities. Although video games can extend their repertoire of leisure activities by encouraging players to be more physically active, many people with impairments are excluded from the video game world because of accessibility barriers.

The potential of video games to entertain disabled people while improving their physical activity has been demonstrated; however, the game industry's investment is committed to more profitable purposes. This creates barriers for independent game developers and inhibits the introduction of new game genres, such as games accessible to communities with special needs. There is a clear need to develop games accessible to disabled people in a way that is cost-effective for the game industry.

The Overall Objective of the GAME-ABLING project is to create, in a cost-effective manner, interactive games well adapted to people with any type and degree of disability, with the aim of improving their physical activity. To this end, GAME-ABLING proposes to develop:

a. A game authoring tool to allow the creation of video games in a cost-effective manner. A game authoring tool is specialized software used to simplify the tasks of creating a game interface, including the capabilities to create, edit, review, test and configure your own video game. The use of a game authoring tool will allow the cost-effective creation of accessible games, since with this tool it will be possible to design games without any programming skills.

b. Human Computer Interaction based on Computer Vision to allow game accessibility regardless of the type and degree of disability.

These two global objectives have been achieved during the course of this project.

The Scientific Objective of GAME-ABLING is to develop innovative Human Computer Interaction
(HCI) based on Computer Vision to allow people with any type and degree of disability to control the games. The HCI channels used will be the patient's voice (recorded by a wireless microphone) and the movement of one or multiple parts of the patient's body (recorded by a webcam or depth camera). Due to the small range of motion as well as the unstructured and sometimes unpredictable movements produced by disabled people, current vision-based HCI systems do not meet the necessary requirements, so a fine motion tracker will be developed. This new HCI will allow the creation of games accessible to anybody, since the motion tracker is specifically designed for disabled people. The main challenge in the HCI system is to ensure the usability and playability of the game controls.

This objective has been accomplished, though there were some adaptations of the original project goals. According to the requirements that the experts expressed based on their needs, some of the objectives were simplified. Specifically, it was agreed that in order to control the game it was not necessary to track the shape of the different body parts, like the hands, mainly because it was very hard for most of the patients to move their hands and fingers. Therefore most efforts were put into extracting the motion of the head and upper limbs as a single element, making no distinction between arm and hand. This way we have tried to adapt our system to the real requirements found among the patients in the partner institutions.

The Operational Objectives of GAME-ABLING project include:

1. To define the system specifications (WP1). The overall architecture of the GAME-ABLING platform will be defined, as well as the impairment modelling. According to the type and degree of disability, the most suitable games and input channels will be selected.

a. Milestone: MS1 at M3
b. Deliverables: D1.1 at M3 and D1.2 at M3

Some of the original goals were adapted to the actual situation of some of the patients involved in the project. The main change was the simplification of the requirements for game controls based on hand shapes. That was a consequence of the limited ability of most CP patients to move their hands and, especially, their fingers. It was also deemed unnecessary to track different parts of the body at the same time to control the games, which would result in a computational burden that would probably slow down the execution of the platform on most conventional computers. Therefore, some of the features have been simplified and adapted to the real functioning requirements.

2. To develop image analysis algorithms (WP2). Depending on the type and degree of disability, the gamer will use the movement of one or several parts of his or her body to interact with the game. Computer vision techniques will be used to track the head, eyes, arms, hands and legs, as well as to detect some basic gestures like hand shapes (open hand, closed fist or hand pointer shape), aperture of the mouth and opening/closing of the eyes. The tracking will be made from images recorded by a webcam or a depth camera, using available libraries such as OpenCV and OpenNI.

a. Milestones: MS2 at M9 and MS5 at M12
b. Deliverables: D2.1 at M9 and D2.2 at M18

This WP has attracted most of the effort in the project. We have focused on developing different algorithms for the detection and tracking of the parts of the body thought to be useful to control the games, namely the head and upper limbs. There were also developments in the field of facial gesture detection and recognition. From a practical point of view, we faced a major problem with facial expressions which made it inadvisable to use them to control games. The main difficulty lay in the atypical facial appearance of most patients with higher levels of disability, which made it hard to use available methods to detect their expressions, as these are not consistent with the common facial expressions found in able-bodied people. In the case of hand shapes, the problem was related to the difficulty most patients have in moving their hands. Therefore, it was not practical to use either facial expressions or hand shapes to control games in our context.

3. To design user input modules (WP3). Inputs to control games are multiple and will be selected depending on the abilities of each patient. GAME-ABLING will use two different input devices: a wireless microphone to record the patient's voice and a webcam to record the movement of one or several parts of the patient's body. In addition, the GAME-ABLING platform will be designed to support different types of hardware input devices such as mouse, keyboard, joystick, Wiimote, data gloves, goniometric devices, balance board, and RGB-Depth cameras (Kinect, Asus Xtion Pro Live or OpenNI compatible).

Different software libraries will be developed to provide basic access to these input devices and to translate their data into a uniform input format. Using a uniform input format, the control of games becomes more independent of the device that generated the input, so a videogame can work in the same way with different input devices. In addition, the inclusion of new input devices in the GAME-ABLING platform will be easier.

a. Milestone: MS3 at M9
b. Deliverables: D3.1 at M9 and D3.2 at M12

The development of libraries to control a large set of devices was accomplished, with an important effort devoted to the visual devices, RGB and RGB-D cameras. In the latter case, we focused only on Kinect cameras, as they are the most commonly available. Nevertheless, some of the computer vision algorithms are cross-platform and do not depend on the camera employed.
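The uniform input format described in WP3 can be sketched as a simple adapter pattern: each device adapter translates raw device data into a common event record, so the game logic never touches device-specific code. This is an illustrative sketch, not the project's actual API; the class and field names are assumptions.

```python
# Sketch of the "uniform input format": device adapters emit a common
# InputEvent record, keeping games independent of the input hardware.

from dataclasses import dataclass

@dataclass
class InputEvent:
    source: str      # e.g. "webcam", "wiimote", "keyboard"
    channel: str     # e.g. "cursor", "button"
    value: tuple     # normalized payload, e.g. (x, y) in [0, 1]

class KeyboardAdapter:
    """Maps arrow keys to cursor steps in the uniform format."""
    STEPS = {"LEFT": (-0.05, 0.0), "RIGHT": (0.05, 0.0),
             "UP": (0.0, -0.05), "DOWN": (0.0, 0.05)}

    def translate(self, key):
        return InputEvent("keyboard", "cursor", self.STEPS[key])

class WebcamAdapter:
    """Passes a normalized head position through as a cursor event."""
    def translate(self, head_xy):
        return InputEvent("webcam", "cursor", head_xy)

# A game consumes InputEvent objects without caring who produced them:
events = [KeyboardAdapter().translate("RIGHT"),
          WebcamAdapter().translate((0.4, 0.6))]
for ev in events:
    print(ev.source, ev.channel, ev.value)
```

Adding a new device then only requires a new adapter class; nothing in the games changes.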

4. To design the game development framework (WP4). The game development framework consists of a set of high-level libraries or modules focused on different aspects of the videogames (graphics, sound, user inputs, game statistics, logic and graphical user interface). The level of abstraction of this framework will be high enough to allow very easy and simple development of games in the authoring tool.

a. Milestones: MS4 at M9 and MS6 at M18
b. Deliverables: D4.1 at M19 and D4.2 at M18

The objectives were accomplished and a game framework was developed to configure and run any game created with the authoring tool. This framework is able to detect the presence of different devices to control the game and calibrate their response to adapt to each patient's needs. It also allows personal data to be incorporated and generated in combination with the analysis tool.

5. To design the authoring tool (WP5). The authoring tool is the user interface that caregivers, therapists or family members in charge of disabled people will use to create the games. The GAME-ABLING authoring tool will use an icon-based programming language to allow the creation of games in a simple, direct and intuitive manner. Each icon will represent a particular input/output event. Creating a game with this authoring tool will be as simple as dragging and dropping block icons, configuring the properties of each block and interconnecting the icons with arrows to associate user actions with game behaviors. The final diagram designed by the user will be scripted through the high-level libraries of the game development framework. Once a videogame is created, it can be run directly by clicking on the generated file.

a. Milestone: MS7 at M20
b. Deliverables: D5.1 at M20

The authoring tool developed during the project allows the creation of games in a number of categories that were agreed to be important for the therapies usually provided to CP patients. Instead of a general tool to create any type of game, it was judged more convenient to allow the creation of any number of games within a fixed set of game types of interest to the therapist community. The tool has achieved the goal of simplicity of use while still allowing enough flexibility to create many different games.
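The icon-graph idea behind the authoring tool, where each icon is a block and arrows connect input events to game actions, can be sketched as follows. This is a toy illustration under assumed names, not the tool's real internals.

```python
# Sketch of the authoring tool's diagram model: blocks are icons,
# arrows are connections, and "running" the diagram dispatches an
# input event along the arrows to the actions it is wired to.

class Block:
    def __init__(self, name, action=None):
        self.name = name
        self.action = action   # callable for action blocks, None for events
        self.targets = []      # arrows to downstream blocks

    def connect(self, other):
        """Draw an arrow from this block to another."""
        self.targets.append(other)

    def fire(self, payload, log):
        """Run this block and everything reachable through its arrows."""
        if self.action:
            log.append(self.action(payload))
        for target in self.targets:
            target.fire(payload, log)

# Diagram: a "voice command" event icon wired to a "play sound" action icon.
voice = Block("voice:go")
sound = Block("play_sound", action=lambda p: f"sound:{p}")
voice.connect(sound)

log = []
voice.fire("cheer.wav", log)
print(log)   # the single action reached by the arrow
```

In the real tool the user never sees code: dragging icons and drawing arrows builds exactly this kind of graph, which is then scripted through the game framework's libraries.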

6. To develop the analysis tool and activity database (WP6). Activity parameters will be stored in a database according to criteria set by specialized personnel (therapists, caregivers, physiologists) in order to study the evolution of children's behavior during gaming sessions and determine the physical and psychological benefits.

a. Milestone: MS8 at M20
b. Deliverables: D6.1 at M20 and D6.2 at M20

The final result of this WP has been the development of a tool that allows the information generated during gameplay to be managed. The data can be plotted to show patient trends, providing valuable information to caregivers and therapists.
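The activity-database idea can be sketched as storing per-session parameters as rows and deriving a simple trend for the therapist's follow-up. The schema and field names below are illustrative assumptions, not the project's actual database design.

```python
# Sketch of the WP6 activity database: one row per gaming session,
# plus a trend query a therapist could plot for follow-up.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE sessions (
    patient TEXT, session INTEGER, hits INTEGER, misses INTEGER)""")
rows = [("p01", 1, 4, 8), ("p01", 2, 6, 6), ("p01", 3, 9, 3)]
conn.executemany("INSERT INTO sessions VALUES (?, ?, ?, ?)", rows)

# Accuracy per session, in session order, as one would plot it:
trend = [hits / (hits + misses) for _, _, hits, misses in rows]
print(trend)

# The same figure computed directly in the database:
cur = conn.execute("""SELECT CAST(hits AS REAL) / (hits + misses)
                      FROM sessions WHERE patient = 'p01'
                      ORDER BY session""")
print([round(v, 2) for (v,) in cur])
```

A rising accuracy curve across sessions is the kind of tendency the tool plots for caregivers and therapists.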

7. To develop games for evaluation purposes (WP7). The objective of this work package is to evaluate the usability of the game authoring tool developed. Caregivers and therapists from SoM, ZOIS, APPC, EFPPA and ICR, without any programming skills, will use the authoring tool to create different types of games. They will fill in questionnaires about usability satisfaction. In addition, the games developed will be evaluated through game performance, researcher observations and player feedback (if the type and degree of disability allows it). Physical and psychological benefits of the games, such as self-esteem and self-confidence, will be determined.

a. Milestone: MS9 at M24
b. Deliverables: D7.1 at M24

This goal has been accomplished by means of the development of all the previous tools. The final result is a platform that allows games to be created and played, while obtaining information from the patients that is important from a medical point of view. The main purpose, that games can be easily created by non-programmers, has been completely accomplished.

8. To carry out dissemination and training activities (WP8). Different training sessions guided by CRIC, URV, ICCS and ICR will take place to ensure assimilation of the foreground by all SMEs, especially at the APPC. Also, non-confidential information about the project and its results will be disseminated beyond the consortium to a wide audience, to maximize the project's impact.

a. Milestones: MS10 at M24 and MS11 at M24
b. Deliverables: D8.1 at M3, D8.2 at M24

An important effort has been made to train the personnel at the APPC to understand the functioning of the different tools developed in this project. In particular, we focused on running the different games with a diversity of control devices and on the creation of several games, so that they can be autonomous in their use of the platform. Regarding dissemination, activities have been carried out to keep all stakeholders up to date, especially potential customers as well as the scientific community. The partners have also planned a future dissemination schedule to continue reporting on development progress and the release of the final commercial platform (expected Summer 2017).

9. To assure exploitation and foreground management (WP9). A variety of business models, including advertising-based, subscription-based, storefront and freemium models, will be analyzed as exploitation routes for the GAME-ABLING platform. A GAME-ABLING web portal will be created where users can download the authoring tool and run it locally to develop games, browse for a game and download it to play, or even upload and share their own games. In addition, the Plan for the Use and Dissemination of the Foreground (PUDF) will be defined.

a. Milestone: MS12 at M24
b. Deliverables: D9.1 at M12, D9.2 at M24 and D9.3 at M24

This goal has been accomplished and summarized in the Plan for the Use and Dissemination of the Foreground (PUDF). An in-depth survey of the different business models used in the gaming industry was carried out, all of them carefully compared, and the one that best fits the GAME-ABLING philosophy chosen. The SMEs have also planned different ways to improve the current platform and reach the market in the fastest, most competitive and advantageous way.

10. To carry out the Consortium Management (WP10). The objective is to optimize the application of technical resources, review and assess the work carried out and ensure all aspects of the EC requirements for communication and reporting are met.

a. Deliverables: D10.1 at M2

The objective has been accomplished thanks to all partners having allocated sufficient resources. The coordinator has monitored all partners' effort and ensured that the required milestones were met. Communication inside the consortium has been fluent and all partners have had an active role in the evolution of the project.

Potential Impact:
GAME-ABLING is a portal that includes a software tool allowing the creation of interactive video games customized to Cerebral Palsy patients. In order to adapt the technology to its users, games are controlled by a newly developed movement- and voice-recognition control system.
Cerebral Palsy is one of the most frequent conditions in childhood; nowadays there are 17 million people with CP in the world, with an incidence of 1 per 500 people.
Summary of GAME-ABLING benefits:
1. Benefit for patients: Persons with CP require treatment optimization to achieve more physically active lifestyles, development of coping skills, stress reduction, companionship, enjoyment, relaxation and a positive effect on life satisfaction. Playing videogames is a useful treatment to promote and maintain more active and healthful lifestyles in these patients. A platform like GAME-ABLING represents an additional opportunity to increase the wellness and fitness of these patients, who now have access to a wide range of video games adapted to their specific needs and requirements.
2. Benefit for therapists and caregivers: The GAME-ABLING platform provides feedback on the performance and evolution of each patient. These data are stored in an activity database so as to be available to caregivers and therapists. This information is highly valuable for following up patient progress and monitoring the benefits of using the games, and also for adapting games, difficulty, etc. to each patient on a case-by-case basis.
3. Benefits for the healthcare system: GAME-ABLING will contribute to reducing healthcare costs because therapists would not always need to be present to guide a patient, and could also monitor several patients at once. GAME-ABLING allows persons with CP to play at home with their caregivers, parents or tutors, or even online under physical therapist supervision, thus decreasing the need to travel to rehabilitation centers. Playing at home has unique advantages and eliminates the challenges of transportation and of accessibility of stores and buildings, which are important barriers to physical activity for persons with chronic physical disabilities.
4. Benefits for KYY, UBITECH, CREA, ZOIS and MAC:
They will get direct benefits from exploiting the GAME-ABLING platform. They have envisaged GAME-ABLING exploitation following a subscription business model, in which a customer pays a subscription of €10/month for access to the GAME-ABLING portal. All SME partners could benefit from subscription and advertising revenues.
Also, they will enhance their competitiveness in the healthcare industry, as these SMEs could become leaders in the market of ICT for home or social care.
The SME partners will create new client bases at international level thanks to the worldwide commercialization of GAME-ABLING.
Some particular benefits for individual partners are as follows. KYY and CREA gain international recognition for their work in addressing the social problem of cerebral palsy. UBITECH and MAC gain access to the market for disabled users.
Additionally, among the expected benefits for APPC and EFPPA is free access to the GAME-ABLING platform. With this, they can reduce their medical and physiotherapy costs: using GAME-ABLING, therapists can monitor several patients at once, or even need not be present to guide a patient.
Persons with cerebral palsy will be able to play at home with their caregivers, parents or tutors, improving their physical activity and reducing the medical costs associated with muscular problems.

In order to prepare the Consortium for exploitation once the technology is mature enough, a study of different business models has been done and compiled in the PUDF. From the 30 most used models related to video games, the 7 most relevant ones for GAME-ABLING were summarized and adapted to the partners' needs. As per the initial agreement between the consortium members, the SME partners jointly own the Foreground generated during the project in equal shares. The terms and conditions for managing and exploiting the joint Foreground have already been discussed, and a draft of the final decision is included in the Final PUDF.
Once the platform is available in its commercial version, the SME partners of the consortium (KYY, UBITECH, CREA, ZOIS and MAC) will all benefit evenly from subscription and advertising revenues, and will gain a clear competitive edge from early access to the market before their competitors. Meanwhile, and once the commercial version is available, partners APPC and EFPPA will get free access to the platform. Each of the partners has identified, estimated and quantified the potential benefits within a 5-year period. They are currently preparing a Fast Track to Innovation project in order to move forward with the technology development and approach a more product-like platform.

To protect the technical aspects of the technology, industrial secrecy has been used so far. The SMEs have agreed on a plan for further development to turn the current prototype into a commercial product, namely improvements in functionality and appearance, additional trials, etc.

A project logo was created and used in all dissemination material. A large number of dissemination activities were performed during the two-year project period, including: the official project webpage, presentations at meetings, fairs and workshops, a video presenting the technology, interviews on national TV and radio, etc.

List of Websites:

Related information


Anna Szathmary (Project Manager)
Tel.: +34932049922
Record number: 182668 / Last updated: 2016-05-11
Information source: SESAM