Deliverables
Network compression leveraging (i) the Bayesian inference arguments that underlie the envisaged machine learning models of WP4 and WP5, and (ii) network distillation approaches.
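As an illustration only, and not the project's actual method, the distillation side of this deliverable can be sketched as the standard knowledge-distillation objective: a student network is trained to match the teacher's temperature-softened output distribution. All function names below are hypothetical; a minimal NumPy sketch, assuming logit vectors as inputs:

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-softened softmax; higher T yields softer targets."""
    z = np.asarray(logits, dtype=float) / T
    z -= z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) between temperature-softened distributions.

    This is the soft-target term of classic knowledge distillation; in
    practice it is combined with a cross-entropy term on hard labels.
    """
    p = softmax(teacher_logits, T)  # teacher's soft targets
    q = softmax(student_logits, T)  # student's predictions
    eps = 1e-12  # avoid log(0)
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))
```

The loss vanishes when the student reproduces the teacher's logits exactly and grows as the two distributions diverge, which is the signal the compressed network is trained on.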
Generative system translating language to SL gesture trajectories: the algorithms developed for translating language (speech/text) to SL trajectories, trained using the dataset of D5.1 and embedded into an AR environment.
System translating SL footage to language: the Deep Learning models aiD will devise to address the problem of generating text transcriptions and synthetic speech from SL video footage, trained using the dataset of D4.1.
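The deliverable description does not specify a decoding scheme for turning per-frame network predictions into a text transcription. One common choice in continuous sign language recognition is CTC-style greedy decoding, sketched here purely as an assumption; the function name and the blank-token convention are hypothetical:

```python
def ctc_greedy_decode(frame_ids, blank=0):
    """Collapse a sequence of per-frame label ids into an output sequence.

    Standard CTC greedy decoding: merge consecutive repeated labels,
    then drop the blank token that marks 'no new symbol' frames.
    """
    out = []
    prev = None
    for label in frame_ids:
        if label != prev and label != blank:
            out.append(label)
        prev = label
    return out
```

For example, frame-level predictions that dwell on the same gloss for several frames are emitted once, while a blank between two identical glosses lets the same gloss be emitted twice.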
A software implementation of the three envisaged demos/pilots.
Pilot Deployment and Evaluation: this deliverable deals with the deployment of the three pilots in a real-world setting and with their evaluation by (anonymous) volunteer users (deaf individuals) or professional SL interpreters.

Regarding the AR service: HFD and EUD will enrol volunteer SL users, reached through their membership, to act as evaluators of our demonstrator. The volunteers will spend some time using the developed AR news service and will provide feedback on both the quality of the AR-based SL footage (accuracy, consistency) and the real-time behaviour of the solution (e.g., computational lags that hinder real-time performance, computational requirements). To this end, HFD and EUD will develop appropriate questionnaires. The enrolled volunteers will perform the evaluation using portable devices provided by aiD, with the aiD software installed and running.

Regarding the automated Relay Service prototype: HFD will enrol volunteer SL users, reached through its membership, to act as evaluators of our demonstrator. The volunteers will be asked to use the Relay Service in specific mock-up scenarios, designed by ANT and implemented on ANT's premises. The volunteers will then provide feedback on the quality and timeliness of the service: accuracy and consistency in interpreting what they "say" in SL to the hearing operators, as well as the time the system needs to (correctly) perform the interpretation task. The evaluation will be performed using appropriate questionnaires developed by ANT.

Regarding the Interactive Digital Tutor prototype: HFD will take the lead in developing teaching materials for first-grade deaf children, drawing on its staff members who are special education teachers. These materials comprise simple reading material for first-graders and the corresponding foundational SL primitives needed for its translation.
Subsequently, HFD will use the outcomes of WP4-WP6 to develop an automated interactive tutoring system for this material. The main principle behind the envisaged pilot is that users will be able to ask the system to repeat a word, a phrase, or a larger excerpt they want to see translated into SL gestures again. In addition, users will be able to perform some SL gestures themselves (similar to those they have learned through the system) and see how these are interpreted. Evaluation will be performed by HFD and EUD staff members, who will provide feedback on the quality of the teaching system. We focus both on the capacity of its interactive functionality to improve learning outcomes for deaf first-graders, and on its translation accuracy and computational speed/responsiveness.
A public website and social media accounts; publicity material, including videos, describing the project rationale and ambition.
Publications
Author(s):
Sergis Nicolaou, Lambros Mavrides, Georgina Tryfou, Kyriakos Tolias, Konstantinos Panousis, Sotirios Chatzis, Sergios Theodoridis
Published in:
Proceedings of SPECOM 2021, 2021
Publisher:
Springer Nature
Author(s):
Konstantinos P. Panousis, Sotirios Chatzis, Sergios Theodoridis
Published in:
Proceedings of ISVC 2021, 2021
Publisher:
Springer
Author(s):
Konstantinos P. Panousis, Sotirios Chatzis, Sergios Theodoridis
Published in:
NeurIPS Workshops 2022, 2021
Publisher:
NeurIPS
DOI:
10.5281/zenodo.6000329
Author(s):
Andreas Voskou, Konstantinos P. Panousis, Harris Partaourides, Kyriakos Tolias, Sotirios Chatzis
Published in:
Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops, pp. 1966-1975, 2023
Publisher:
ICCVW2023 - ACVR
DOI:
10.48550/arxiv.2310.04753
Author(s):
Konstantinos Panousis, Sotirios Chatzis, Antonios Alexos, Sergios Theodoridis
Published in:
Proceedings of AISTATS 2021, 2021
Publisher:
PMLR
DOI:
10.5281/zenodo.5498188
Author(s):
Konstantinos P. Panousis, Anastasios Antoniadis, Sotirios Chatzis
Published in:
Proceedings of AAAI 2022, 2022
Publisher:
AAAI
DOI:
10.5281/zenodo.6000363
Author(s):
Konstantinos Panousis, Sotirios Chatzis
Published in:
Proceedings of NeurIPS 2023, 2023
Publisher:
NeurIPS
DOI:
10.48550/arxiv.2310.04929
Author(s):
Andreas Voskou, Konstantinos P. Panousis, Dimitrios Kosmopoulos, Dimitris N. Metaxas, Sotirios Chatzis
Published in:
Proceedings of ICCV 2021, 2021
Publisher:
ICCV
DOI:
10.5281/zenodo.5498338
Author(s):
Konstantinos Kalais, Sotirios Chatzis
Published in:
Proceedings of ICML 2022, 2022
Publisher:
ICML
DOI:
10.5281/zenodo.6580332