Personalised Content Creation for the Deaf Community in a Connected Digital Single Market

Deliverables

Initial requirements and specifications for pilot demonstrators

This report will specify the application requirements, development environments and specifications for the application to be used in the demonstration scenarios. The key scenarios identified will be included, together with use cases and evaluation tools for the application. High-level object-oriented design diagrams will be presented, along with the actors and entities involved.

Pilot 1 demonstrator architecture, integration plan, and evaluation methodologies

A plan for executing the T5.5 Pilot 1 large-scale demonstration of the remote sign-interpretation scenario will be presented, including a full description of the system architecture, the integration plan and the trials to be carried out. Evaluation methodologies and metrics will also be included, along with a rationale for choosing those methodologies.

Initial reference system architecture

An initial architecture will be proposed, including detailed block diagrams, interface descriptions, operating scenarios, and functionality. The proposed architecture will provide a framework for carrying out the tasks in the remaining work packages. Key technologies to be developed will be highlighted, and a high-level design approach and success measures will be described.

Initial Report on 3D model capture and rendering

This deliverable will report on the WP3 achievements on a yearly basis. It will describe the novel hybrid human body model representation and the algorithms for tracking, warping and rendering, as well as output samples and evaluation results.

Data Management Plan

This document will form the basis for collecting, managing and curating the data generated in CONTENT4ALL, to be released as open research data in D5.7. The data management plan will specifically outline how the data is to be used and the legal aspects of sharing, exploiting and commercializing products based on the shared information.

Project Web-site

An interactive website will be established (www.content4all.com). The website (including FTP services for consortium members) will describe the aims of the project and its partners, and will list the publications and standards contributions. It will also allow public deliverables to be downloaded. Links to social networking will also be included on the website. A secure area will be set up for confidential documentation for consortium members. The website will be promoted at all major events to provide a unified view of the project (T6.2).

Bi-annual project newsletters

Bi-annual project newsletters will provide insight into the project's activities. The newsletter will be created as a multimedia document, including promotional material tailored to different audiences: technical audiences and the general public. The latter can be reached through CONTENT4ALL-related information in TV programmes, e.g. BBC Click in the UK. The newsletter will also provide “position statements” from experts in different disciplines related to CONTENT4ALL (T6.2).

Project Leaflet

A leaflet will be produced to publicise the key objectives of the project and will be circulated at all major related events, conferences and forums. The leaflet will be maintained as a “live” document throughout the whole project and updated based on project achievements (T6.2).

Publications

Markerless Multiview Motion Capture with 3D Shape Model Adaptation

Author(s): P. Fechteler, A. Hilsmann, P. Eisert
Published in: Computer Graphics Forum, 2019, ISSN 0167-7055
DOI: 10.1111/cgf.13608

End User Video Quality Prediction and Coding Parameters Selection at the Encoder for Robust HEVC Video Transmission

Author(s): Gosala Kulupana, Dumidu S. Talagala, Hemantha Kodikara Arachchi, Anil Fernando
Published in: IEEE Transactions on Circuits and Systems for Video Technology, 2018, Page(s) 1-1, ISSN 1051-8215
DOI: 10.1109/tcsvt.2018.2879956

Content Adaptive Fast CU Size Selection for HEVC Intra-Prediction

Author(s): Buddhiprabha Erabadda, Thanuja Mallikarachchi, Gosala Kulupana, Anil Fernando
Published in: 2019 IEEE International Conference on Consumer Electronics (ICCE), 2019, Page(s) 1-2
DOI: 10.1109/icce.2019.8662119

Bit allocation and encoding parameter selection for rate-controlled error resilient HEVC video encoding

Author(s): Gosala Kulupana, Dumidu S. Talagala, Anil Fernando, Hemantha Kodikara Arachchi
Published in: 2018 IEEE International Conference on Consumer Electronics (ICCE), 2018, Page(s) 1-4
DOI: 10.1109/icce.2018.8326287

Machine Learning Approaches for Intra-Prediction in HEVC

Author(s): Buddhiprabha Erabadda, Thanuja Mallikarachchi, Gosala Kulupana, Anil Fernando
Published in: 2018 IEEE 7th Global Conference on Consumer Electronics (GCCE), 2018, Page(s) 206-209
DOI: 10.1109/gcce.2018.8574648

Decoding complexity-aware, rate, distortion optimized HEVC video encoding

Author(s): Thanuja Mallikarachchi, Dumidu S. Talagala, Hemantha Kodikara Arachchi, Anil Fernando
Published in: 2018 IEEE International Conference on Consumer Electronics (ICCE), 2018, Page(s) 1-4
DOI: 10.1109/icce.2018.8326110

Neural Sign Language Translation

Author(s): Necati Cihan Camgoz, Simon Hadfield, Oscar Koller, Hermann Ney, Richard Bowden
Published in: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018, Page(s) 7784-7793
DOI: 10.1109/cvpr.2018.00812

An adaptive video streaming framework for Scalable HEVC (SHVC) standard

Author(s): Sarat Rakngan, Thanuja Mallikarachchi, Anil Fernando
Published in: 2019 IEEE International Conference on Consumer Electronics (ICCE), 2019, Page(s) 1-2
DOI: 10.1109/icce.2019.8662075

QoE Modelling of High Dynamic Range Video

Author(s): Carl C. Udora, Junaid Mir, Chatura Galkandage, Anil Fernando
Published in: 2019 IEEE International Conference on Consumer Electronics (ICCE), 2019, Page(s) 1-2
DOI: 10.1109/icce.2019.8662122

Sign Language Production using Neural Machine Translation and Generative Adversarial Networks

Author(s): Stephanie Stoll, Necati Cihan Camgoz, Simon Hadfield, Richard Bowden
Published in: Proceedings of the British Machine Vision Conference (BMVC), 2018