Community Research and Development Information Service - CORDIS

Final Report Summary - CLOUDSCREENS (Support of cloud services for multimedia delivery and consumption)

Clouds dominate the present and the future of multimedia. Cloud technology enables services ranging from web mail and online storage to social networks, mobile applications, and video on demand. During the past decade, innovative business models have evolved around cloud technology, providing software applications, platforms, data storage, and infrastructure as services. Bandwidth- and compute-intensive multimedia services, such as virtual reality and spatial audio, are examples that can effectively exploit the cloud infrastructure. In addition, the emerging smart environments, which rely on multi-modal sensing, perception, machine learning, and communication methods to transform the way we access and consume content, can be facilitated by cloud technology. In this context, clouds host the intelligence and support the advanced interfaces between end users and immersive multimedia applications. However, as is the case with many other digital applications today, cloud-based services are prone to external attacks that attempt to compromise data integrity or personal privacy. Maintaining end-to-end data security and personal privacy is therefore of the utmost importance and poses great challenges.

CLOUDSCREENS is an EU FP7-funded Initial Training Network that trains researchers in scientific and technical skills within a unique multi-disciplinary and inter-sectoral research training network. The main objective of CLOUDSCREENS is to conduct research on new topics that contribute to the creation of future multimedia services, covering both delivery and consumption, by leveraging clouds, device-based personal spaces, and smart environments while handling privacy and security. The project focuses on four interconnected topics, organised into four distinct work packages, each tackled by an Early Stage Researcher (ESR): enhancing service quality and capacity in multi-service environments through efficient high-volume multimedia delivery; creating personal spaces; building smart environments; and realising a new generation of user interfaces that take speech as input while preserving user privacy at a time of rising cybersecurity concerns. Research on these advanced multimedia services benefits from the vast computing and networking resources provided by the cloud, which enable their easy deployment.

The first research objective of the CLOUDSCREENS project (work package 1) is to design a mediation entity that manages the user-side multimedia resources and the network capabilities, acting as a mediator between the media application and the clouds. In particular, this mediation entity shall be capable of delivering high-volume, immersive video signals using perception-inspired techniques, thereby managing the balance between resources and capabilities. Work package 2 focuses on creating personal spaces in a multi-user environment that provide a highly personalised multimedia usage environment. Here, personal spaces with no physical boundaries are created by advancing spatial audio rendering technology, so that multiple users in the same environment can each enjoy their own personal audio zone. The third objective is to develop a smart service that, through negotiation with a managing entity inside the user environment, enhances the delivery of everyday multimedia services. For that purpose, the user environment is equipped with multiple multi-modal sensing technologies that automatically perceive the actions of its inhabitants. The fourth and final work package of the CLOUDSCREENS project focuses on designing a natural user interface that accepts natural speech as input, with a built-in mechanism for protecting user privacy against ever-increasing cybersecurity threats. It preserves user privacy by anonymising, through signal processing techniques, the user data acquired from multiple connected smart input devices and user interfaces (such as speech), without affecting the usefulness of the data for a vast range of Internet-based services.

A parallel goal has been to provide high-quality training to the ESRs through a diverse set of training events, including on-the-job research training, personal skills development, and career development. During their training period, the ESRs have been exposed to techniques for cloud service design for multimedia delivery and consumption. The training programme has been delivered by an international and inter-sectoral consortium. Exposure to both academic and industrial environments early in their research careers has helped the ESRs make informed career choices and has strengthened the relationship between industry and academia.

The project has resulted in several advances to the state of the art in cloud-assisted multimedia processing. These include the development of: (1) a novel saliency tracking method for immersive omnidirectional (360-degree) videos, useful in cloud-assisted VR content distribution; (2) a modular setup for generating multiple sound zones within a room, consisting of a loudspeaker array together with driving algorithms for self-calibration and filter calculation for sound rendering; (3) a secure speaker de-identification system for use in modern user devices with speech interfaces; and (4) an adaptive control system for personalised multimedia delivery exploiting multi-modal sensing technologies.
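To illustrate the filter-calculation step behind a sound-zone setup like the one in result (2), the sketch below uses pressure matching, a standard approach in the sound-zone literature: loudspeaker driving filters are obtained by regularised least squares so that one zone receives the target sound while another stays quiet. This is an illustrative stand-in, not the project's actual algorithm; the transfer functions (random here, measured via self-calibration in practice), zone sizes, and regularisation value are all assumed.

```python
import numpy as np

rng = np.random.default_rng(0)

L = 8    # number of loudspeakers
Mb = 16  # control microphones in the "bright" zone (should hear the audio)
Md = 16  # control microphones in the "dark" zone (should stay quiet)

# Acoustic transfer functions at a single frequency (complex-valued).
# Random stand-ins here; in a real system these are measured.
G_bright = rng.standard_normal((Mb, L)) + 1j * rng.standard_normal((Mb, L))
G_dark = rng.standard_normal((Md, L)) + 1j * rng.standard_normal((Md, L))

# Target pressures: unit amplitude in the bright zone, silence in the dark zone.
G = np.vstack([G_bright, G_dark])
p_target = np.concatenate([np.ones(Mb), np.zeros(Md)])

# Regularised least squares: w = (G^H G + lam*I)^-1 G^H p_target
lam = 1e-2
w = np.linalg.solve(G.conj().T @ G + lam * np.eye(L), G.conj().T @ p_target)

# Acoustic contrast between zones in dB (mean energy bright vs. dark).
e_bright = np.linalg.norm(G_bright @ w) ** 2 / Mb
e_dark = np.linalg.norm(G_dark @ w) ** 2 / Md
contrast_db = 10 * np.log10(e_bright / e_dark)
print(f"acoustic contrast: {contrast_db:.1f} dB")
```

In a full system this computation is repeated per frequency bin to build broadband rendering filters, and the regularisation weight trades contrast against loudspeaker effort.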

Reported by

LOUGHBOROUGH UNIVERSITY
United Kingdom

Subjects

Life Sciences