
Child-Robot Communication and Collaboration: Edutainment, Behavioural Modelling and Cognitive Development in Typically Developing and Autistic Spectrum Children

Deliverables

Free data deliverable: gesture, speech and behavioral data

The free data deliverable consists of: 1) annotated gestural video data, 2) transcribed spoken dialogue (audio) data, 3) behavioural data labelled with affect and cognitive state, and 4) audio-visual data annotated with actions and intent from the BabyRobot use cases. The amount of data to be transcribed will be defined by M12, based on the needs for training and evaluating statistical models for use cases 1, 2 and 3. Making audio-visual recordings of children public raises many privacy-related, legal and ethical issues; for this reason, only secondary/derived data (annotations, transcriptions, audio/visual features, etc.) will be made publicly available.

Open-source software for robot learning and behaviour based control

The open-source software for robot learning and behaviour-based control will include algorithms for imitation learning based on inverse reinforcement learning, structured classification algorithms, as well as a behaviour-based control module for the humanoid robot platforms, which will incorporate different levels of abstraction, including both symbolic and sub-symbolic layers.

Open-source software for multimodal, multiparty human-robot interaction

Open-source software for multimodal, multiparty human-robot interaction will include significant portions of the communication and collaboration modules (including the latest version of IRIS dialogue management software), as well as the majority of the codebase for KASPAR use-case 3.

Open-source software for socio-affective state monitoring and visual tracking

Open-source software for socio-affective state monitoring and visual tracking will include software for emotion recognition from speech, emotion recognition from text and visual tracking software from video.

Interim Progress Report on audio-visual processing and behavioural informatics

Interim progress report on the development of the core audio-visual processing technology for extracting information from the microphone and cameras on the robot. These technologies are intended to create robots that analyze and track human behaviour over time in its situational context, in order to establish common ground and intention-reading capabilities.

Initial Report on Dissemination, Exploitation and Intellectual Property

Initial report on: (i) dissemination activities towards industry and academia; (ii) exploitation actions; (iii) management of intellectual property issues.

Interim Report on multiparty child-robot collaboration and learning

Interim report on the design, development, and evaluation of the multiparty communication and collaboration capabilities of the KASPAR robot. These capabilities will be grounded on child-child and child-robot interactive scenarios that constitute the third use case.

Interim Report on human-robot interaction and communication

Interim report on the design, development, and evaluation of the communicative and interaction capabilities of the robot using both the gestural and spoken dialogue modalities. These capabilities will be grounded on child-robot interactive scenarios that constitute the first use case.

Interim Progress Report on core robotic functionality

Interim progress report on the methods and software modules needed to endow the humanoid robot platforms with core functionalities, focusing mainly on: (i) gestural kinematics, (ii) environment interaction skills, (iii) imitation learning, (iv) behaviour-based robot control architecture.

System Architecture, Use Case 1 Specification and Data Collection Protocols

Definition of the architecture and functionality of the communication and interaction modules of the robotic platform. Definition of the interaction scenarios for use case 1. Definition of the data collection protocols for use case 1.

Final Report on child-robot communication and learning

Final report on the design, development, and evaluation of the communication and learning capabilities of the ZENO robot. These capabilities will be grounded on child-robot interactive scenarios that constitute the second use case.

Initial Progress Report on Core Robotic Functionality

Initial progress report on the methods and software modules needed to endow the humanoid robot platforms with core functionalities, focusing mainly on: (i) gestural kinematics, (ii) environment interaction skills, (iii) imitation learning, (iv) behaviour-based robot control architecture.

Final Report on multiparty child-robot collaboration and learning

Final report on the design, development, and evaluation of the multiparty communication and collaboration capabilities of the KASPAR robot. These capabilities will be grounded on child-child and child-robot interactive scenarios that constitute the third use case.

Final Report on core robotic functionality

Final report on the methods and software modules needed to endow the humanoid robot platforms with core functionalities, focusing mainly on: (i) gestural kinematics, (ii) environment interaction skills, (iii) imitation learning, (iv) behaviour-based robot control architecture.

Final Report on human-robot interaction and communication

Final report on the design, development, and evaluation of the communicative and interaction capabilities of the robot using both the gestural and spoken dialogue modalities. These capabilities will be grounded on child-robot interactive scenarios that constitute the first use case.

Initial Report on Child-Robot Communication and Learning

Initial report on the design, development, and evaluation of the communication and learning capabilities of the ZENO robot. These capabilities will be grounded on child-robot interactive scenarios that constitute the second use case.

Final Report on audio-visual processing and behavioral informatics

Final report on the development of the core audio-visual processing technology for extracting information from the microphone and cameras on the robot. These technologies are intended to create robots that analyze and track human behaviour over time in its situational context, in order to establish common ground and intention-reading capabilities.

Initial Report on Human-Robot Interaction and Communication

Initial report on the design, development, and evaluation of the communicative and interaction capabilities of the robot using both the gestural and spoken dialogue modalities. These capabilities will be grounded on child-robot interactive scenarios that constitute the first use case.

Use Case 2 Specification and Data Collection Protocols

Definition of the communication, collaboration, and language learning tasks of use case 2. Definition of the data collection protocols for use case 2.

Initial Progress Report on Audio-Visual Processing and Behavioral Informatics

Initial progress report on the development of the core audio-visual processing technology for extracting information from the microphone and cameras on the robot. These technologies are intended to create robots that analyze and track human behaviour over time in its situational context, in order to establish common ground and intention-reading capabilities.

Interim Report on child-robot communication and learning

Interim report on the design, development, and evaluation of the communication and learning capabilities of the ZENO robot. These capabilities will be grounded on child-robot interactive scenarios that constitute the second use case.

Interim Report on Dissemination, Exploitation and Intellectual Property

Interim report on: (i) dissemination activities towards industry and academia; (ii) exploitation actions; (iii) management of intellectual property issues.

Final Report on Dissemination, Exploitation and Intellectual Property

Final report on: (i) dissemination activities towards industry and academia; (ii) exploitation actions; (iii) management of intellectual property issues.

Updated Dissemination and Exploitation Plan

Updated dissemination plan in order to establish the means and procedures to communicate the scientific and technical advancements of the project. Updated exploitation plan for all the technologies, data pool, robotic platform and services developed in the project.

Use Case 3 Specification and Initial Report on multiparty child-robot collaboration and learning

Specification of use case 3 using the KASPAR robot. Specification of data collection and analysis. Initial report on the design, development, and evaluation of the multiparty communication and collaboration capabilities of the KASPAR robot. These capabilities will be grounded on child-child and child-robot interactive scenarios that constitute the third use case.

Demonstration of communication skills learning for TD and ASD children use case 2 scenario

Demonstration of the use case 2 scenario on communication skills learning for TD and ASD children, where the ZENO robot can “play” with the child both in a structured way and in a creative sense, demonstrating that the robot and child can share intentions towards a common goal. The robot shall support the child in solving construction tasks with building bricks. The interaction will be videotaped and showcased at the project website.

Demonstration of communication skills learning use case 3 scenario

Demonstration of the use case 3 scenario on communication skills learning, where we study learning and communication of children with ASD in dyadic and triadic interaction games in which children play computer games with KASPAR and/or other people, using verbal and non-verbal modes of interaction. The interaction will be videotaped and showcased at the project website.

Demonstration of joint attention and common grounding use case 1 paradigm

Demonstration of the use case 1 paradigm on joint attention and common grounding, where a group of children and the FurHat robot interact about virtual and physical objects on an interactive table. This is a collaboration exercise that requires joint attention. The demonstration will be videotaped and showcased at the project website.

Web Site

Development of the project web site.

Publications

Tweester at SemEval-2017 Task 4: Fusion of Semantic-Affective and Pairwise Classification Models for Sentiment Analysis in Twitter

Authors: A. Kolovou, F. Kokkinos, A. Fergadis, P. Papalampidi, E. Iosif, N. Malandrakis, E. Palogiannidi, H. Papageorgiou, S. Narayanan and A. Potamianos
Published in: 11th International Workshop on Semantic Evaluation (SemEval), 2017
Publisher: Association for Computational Linguistics

Teaching a Robot how to Guide Attention in Child-Robot Learning Interactions

Authors: J. Hemminghaus, L. Hoffmann, and S. Kopp
Published in: 5th European and 8th Nordic Symposium on Multimodal Communication, 2017
Publisher: UNIBI

Bio-inspired Meta-learning for Active Exploration During Non-stationary Multi-armed Bandit Tasks

Authors: G. Velentzas, M. Khamassi and C. Tzafestas
Published in: Intelligent Systems Conference (IntelliSys), 2017
Publisher: -

Audio-based Distributional Semantic Models for Music Auto-tagging and Similarity Measurement

Authors: G. Karamanolakis, E. Iosif, A. Zlatintsi, A. Pikrakis and A. Potamianos
Published in: 25th EUSIPCO 2017, MultiLearn Workshop, 2017
Publisher: EURASIP

Active Exploration and Parameterized Reinforcement Learning Applied to a Simulated Human-Robot Interaction Task

Authors: M. Khamassi, G. Velentzas, T. Tsitsimis and C. Tzafestas
Published in: IEEE International Conference on Robotic Computing (Robotic Computing 2017), 2017
Publisher: IEEE

Photorealistic Adaptation and Interpolation of Facial Expressions Using HMMs and AAMs for Audio-Visual Speech Synthesis

Authors: P. Filntisis, A. Katsamanis and P. Maragos
Published in: Int'l Conf. Image Processing (ICIP-2017), 2017
Publisher: IEEE SigPort

HMM-based Pathological Gait Analyzer for a User-Adaptive Intelligent Robotic Walker

Authors: G. Chalvatzaki, X. S. Papageorgiou, C. S. Tzafestas and P. Maragos
Published in: 25th European Signal Processing Conference – Workshop “MultiLearn 2017 - Multimodal processing, modeling and learning for human-computer/robot interaction applications”, 2017
Publisher: EURASIP

Utilising humanoid robots to assist children with autism learn about Visual Perspective Taking

Authors: L. Wood, B. Robins, K. Dautenhahn, G. Lakatos, D. Syrdal and A. Zaraki
Published in: The UK-RAS Network Conference On Robotics And Autonomous Systems: Robots Working For And Among Us, 2017
Publisher: UK-RAS Network

Crowd-sourced design of artificial attentive listeners

Authors: C. Oertel, P. Jonell, D. Kontogiorgos, J. Mendelson, J. Beskow, and J. Gustafson
Published in: INTERSPEECH: Situated Interaction, 2017
Publisher: ISCA

Sensory-Aware Multimodal Fusion for Word Semantic Similarity Estimation

Authors: G. Paraskevopoulos, G. Karamanolakis, E. Iosif, A. Pikrakis and A. Potamianos
Published in: 25th European Signal Processing Conference – Workshop “MultiLearn 2017 - Multimodal processing, modeling and learning for human-computer/robot interaction applications”, 2017
Publisher: EURASIP

Design, Implementation and Experimental Evaluation of an IrisTK-Based Deliberative-Reactive Control Architecture for Autonomous Child-Robot Interaction in Real-World Settings

Authors: A. Zaraki, L. Wood, B. Robins, and K. Dautenhahn
Published in: The UK-RAS Network Conference On Robotics And Autonomous Systems: Robots Working For And Among Us, 2017
Publisher: UK-RAS Network

How to Manage Affective State in Child-Robot Tutoring Interactions?

Authors: T. Schodde, L. Hoffmann, and S. Kopp
Published in: IEEE International Conference on Companion Technology (ICCT 2017), 2017
Publisher: IEEE

Toward Autonomous Child-Robot Interaction: Development of an Interactive Architecture for the Humanoid Kaspar Robot

Authors: A. Zaraki, K. Dautenhahn, L. Wood, O. Novanda and B. Robins
Published in: 3rd Workshop on Child-Robot Interaction (CRI2017) at the International Conference on Human-Robot Interaction (ACM/IEEE HRI 2017), 2017
Publisher: -

Towards Adaptive Social Behavior Generation for Assistive Robots Using Reinforcement Learning

Authors: J. Hemminghaus and S. Kopp
Published in: International Conference on Human-Robot Interaction (ACM/IEEE HRI 2017), 2017
Publisher: ACM

Learning Nash Equilibrium for General-Sum Markov Games from Batch Data

Authors: J. Pérolat, F. Strub, B. Piot and O. Pietquin
Published in: 20th International Conference on Artificial Intelligence and Statistics (AISTATS), 2017
Publisher: PMLR

Lexical and Affective Models in Early Acquisition of Semantics

Authors: A. Kolovou, E. Iosif and A. Potamianos
Published in: 6th International Workshop on Child Computer Interaction (WOCCI), 2017
Publisher: ISCA

The Iterative Development of the Humanoid Robot Kaspar: An Assistive Robot for Children with Autism

Authors: B. Robins, K. Dautenhahn
Published in: Social Robotics: 9th International Conference (ICSR 2017) Proceedings, Vol. 10652, 2017, Page(s) 53
Publisher: Springer

Segment-based Speech Emotion Recognition Using Recurrent Neural Networks

Authors: E. Tzinis and A. Potamianos
Published in: International Conference on Affective Computing and Intelligent Interaction (ACII), 2017
Publisher: IEEE

Predicting and Regulating Participation Equality in Human-robot Conversations: Effects of Age and Gender

Authors: G. Skantze
Published in: International Conference on Human-Robot Interaction (ACM/IEEE HRI 2017), 2017
Publisher: ACM Digital Library

Developing Interaction Scenarios with a Humanoid Robot to Encourage Visual Perspective Taking Skills in Children with Autism – Preliminary Proof of Concept Tests

Authors: B. Robins, K. Dautenhahn, L. Wood and A. Zaraki
Published in: Social Robotics: 9th International Conference, ICSR 2017, 2017
Publisher: Springer, Cham

Towards a User-Adaptive Context-Aware Robotic Walker with a Pathological Gait Assessment System: First Experimental Study

Authors: G. Chalvatzaki, X. S. Papageorgiou, C. S. Tzafestas
Published in: International Conference on Intelligent Robotics, 2017
Publisher: IEEE

Morphological Perceptrons: Geometry and Training Algorithms

Authors: V. Charisopoulos and P. Maragos
Published in: International Symposium on Mathematical Morphology and Its Applications to Signal and Image Processing (ISMM), 2017
Publisher: Springer

Exploring ROI size in deep learning based lipreading

Authors: A. Koumparoulis, G. Potamianos, Y. Mroueh, and S. J. Rennie
Published in: Int. Conf. on Auditory-Visual Speech Processing (AVSP), 2017
Publisher: AVSP

Developing child-robot interaction scenarios with a humanoid robot to assist children with autism

Authors: L. Wood, K. Dautenhahn, B. Robins, and A. Zaraki
Published in: 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), 2017
Publisher: IEEE

Structural Attention Neural Networks for Improved Sentiment Analysis

Authors: F. Kokkinos and A. Potamianos
Published in: 15th Conference of the European Chapter of the Association for Computational Linguistics (EACL), 2017
Publisher: ACL Anthology

Engagement Detection for Children with Autism Spectrum Disorder

Authors: A. Chorianopoulou, E. Tzinis, E. Iosif, A. Papoulidi, C. Papailiou and A. Potamianos
Published in: IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2017
Publisher: IEEE

Dialogue Act Semantic Representation and Classification Using Recurrent Neural Networks

Authors: P. Papalampidi, E. Iosif and A. Potamianos
Published in: 21st Workshop on the Semantics and Pragmatics of Dialogue (SemDial), 2017
Publisher: ISCA

Multi3: Multi-Sensory Perception System for Multi-Modal Child Interaction with Multiple Robots

Authors: Antigoni Tsiami, Petros Koutras, Niki Efthymiou, Panagiotis Paraskevas Filntisis, Gerasimos Potamianos, Petros Maragos
Published in: 2018 IEEE International Conference on Robotics and Automation (ICRA), 2018, Page(s) 1-8, ISBN 978-1-5386-3081-5
Publisher: IEEE
DOI: 10.1109/icra.2018.8461210

Development of a Semi-Autonomous Robotic System to Assist Children with Autism in Developing Visual Perspective Taking Skills

Authors: A. Zaraki, L.J. Wood, B. Robins, K. Dautenhahn
Published in: 27th IEEE International Symposium on Robot and Human Interactive Communication (Ro-Man 2018), 2018
Publisher: IEEE

Actor-Critic Fictitious Play in Simultaneous Move Multistage Games

Authors: Pérolat, Julien; Piot, Bilal; Pietquin, Olivier
Published in: AISTATS 2018 - 21st International Conference on Artificial Intelligence and Statistics, April 2018, Playa Blanca, Lanzarote, Canary Islands, Spain, Issue 1, 2018
Publisher: AISTATS

A Novel Paradigm for Typically Developing and Autistic Children as Teachers to the Kaspar Robot Learner

Authors: A. Zaraki, M. Khamassi, L.J. Wood, G. Lakatos, C. Tzafestas, B. Robins, K. Dautenhahn
Published in: 3rd Workshop on Behavior Adaptation, Interaction and Learning for Assistive Robotics (BAILAR 2018), in conjunction with IEEE RO-MAN, 2018
Publisher: IEEE

Audio-visual speech activity detection in a two-speaker scenario incorporating depth information from a profile or frontal view

Authors: Spyridon Thermos, Gerasimos Potamianos
Published in: 2016 IEEE Spoken Language Technology Workshop (SLT), 2016, Page(s) 579-584, ISBN 978-1-5090-4903-5
Publisher: IEEE
DOI: 10.1109/slt.2016.7846321

NTUA-SLP at SemEval-2018 Task 1: Predicting Affective Content in Tweets with Deep Attentive RNNs and Transfer Learning

Authors: Baziotis, Christos; Athanasiou, Nikos; Chronopoulou, Alexandra; Kolovou, Athanasia; Paraskevopoulos, Georgios; Ellinas, Nikolaos; Narayanan, Shrikanth; Potamianos, Alexandros
Published in: Issue 6, 2018
Publisher: arXiv

A Hybrid Approach to Hand Detection and Type Classification in Upper-Body Videos

Authors: K. Papadimitriou and G. Potamianos
Published in: 7th European Workshop on Visual Information Processing (EUVIP), 2018
Publisher: IEEE

How an edutainment robot can help small children to study

Authors: Diana Trifon, Ana Maria Macovetchi, Mamoun Gharbi, Franziska Kirstein
Published in: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018
Publisher: IEEE

NTUA-SLP at SemEval-2018 Task 2: Predicting Emojis using RNNs with Context-aware Attention

Authors: Baziotis, Christos; Athanasiou, Nikos; Paraskevopoulos, Georgios; Ellinas, Nikolaos; Kolovou, Athanasia; Potamianos, Alexandros
Published in: Issue 1, 2018
Publisher: arXiv

NTUA-SLP at SemEval-2018 Task 3: Tracking Ironic Tweets using Ensembles of Word and Character Level Attentive RNNs

Authors: Baziotis, Christos; Athanasiou, Nikos; Papalampidi, Pinelopi; Kolovou, Athanasia; Paraskevopoulos, Georgios; Ellinas, Nikolaos; Potamianos, Alexandros
Published in: Issue 1, 2018
Publisher: arXiv

An extended framework for robot learning during child-robot interaction with human engagement as reward signal

Authors: Mehdi Khamassi, Georgia Chalvatzaki, Theodore Tsitsimis, Georgios Velentzas, Costas S. Tzafestas
Published in: 3rd Workshop on Behavior Adaptation, Interaction and Learning for Assistive Robotics (BAILAR 2018), 2018
Publisher: BAILAR

Multi-View Fusion for Action Recognition in Child-Robot Interaction

Authors: Niki Efthymiou, Petros Koutras, Panagiotis Paraskevas Filntisis, Gerasimos Potamianos, Petros Maragos
Published in: 2018 25th IEEE International Conference on Image Processing (ICIP), 2018, Page(s) 455-459, ISBN 978-1-4799-7061-2
Publisher: IEEE
DOI: 10.1109/icip.2018.8451146

Audio-Visual Temporal Saliency Modeling Validated by fMRI Data

Authors: P. Koutras, G. Panagiotaropoulou, A. Tsiami and P. Maragos
Published in: IEEE Conf. on Computer Vision and Pattern Recognition Workshops (CVPRW), 2018
Publisher: CVPR 2018 open access

Far-Field Audio-Visual Scene Perception of Multi-Party Human-Robot Interaction for Children and Adults

Authors: Antigoni Tsiami, Panagiotis Paraskevas Filntisis, Niki Efthymiou, Petros Koutras, Gerasimos Potamianos, Petros Maragos
Published in: 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2018, Page(s) 6568-6572, ISBN 978-1-5386-4658-8
Publisher: IEEE
DOI: 10.1109/icassp.2018.8462425

The Peculiarities of Robot Embodiment (EmCorp-Scale) - Development, Validation and Initial Test of the Embodiment and Corporeality of Artificial Agents Scale

Authors: Laura Hoffmann, Nikolai Bock, Astrid M. Rosenthal v.d. Pütten
Published in: Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction - HRI '18, 2018, Page(s) 370-378, ISBN 978-1-4503-4953-6
Publisher: ACM Press
DOI: 10.1145/3171221.3171242

Multimodal Visual Concept Learning with Weakly Supervised Techniques

Authors: G. Bouritsas, P. Koutras, A. Zlatintsi and P. Maragos
Published in: IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2018
Publisher: openaccess.thecvf.com

Deep view2view mapping for view-invariant lipreading

Authors: A. Koumparoulis and G. Potamianos
Published in: 2018
Publisher: IEEE

User-Adaptive Human-Robot Formation Control for an Intelligent Robotic Walker Using Augmented Human State Estimation and Pathological Gait Characterization

Authors: Georgia Chalvatzaki, Xanthi S. Papageorgiou, Petros Maragos, Costas S. Tzafestas
Published in: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018, Page(s) 6016-6022, ISBN 978-1-5386-8094-0
Publisher: IEEE
DOI: 10.1109/iros.2018.8594360

Integrating Recurrence Dynamics for Speech Emotion Recognition

Authors: Efthymios Tzinis, Georgios Paraskevopoulos, Christos Baziotis, Alexandros Potamianos
Published in: Interspeech 2018, 2018, Page(s) 927-931
Publisher: ISCA
DOI: 10.21437/interspeech.2018-1377

A Framework for Robot Learning During Child-Robot Interaction with Human Engagement as Reward Signal

Authors: M. Khamassi, G. Chalvatzaki, T. Tsitsimis, G. Velentzas, C. Tzafestas
Published in: 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), 2018, Page(s) 461-464, ISBN 978-1-5386-7980-7
Publisher: IEEE
DOI: 10.1109/roman.2018.8525598

Neural Activation Semantic Models: Computational lexical semantic models of localized neural activations

Authors: N. Athanasiou, E. Iosif and A. Potamianos
Published in: 27th International Conference on Computational Linguistics (COLING), 2018
Publisher: COLING

On data driven parametric backchannel synthesis for expressing attentiveness in conversational agents

Authors: Catharine Oertel, Joakim Gustafson, Alan W. Black
Published in: Proceedings of the Workshop on Multimodal Analyses enabling Artificial Agents in Human-Machine Interaction - MA3HMI '16, 2016, Page(s) 43-47, ISBN 978-1-4503-4562-0
Publisher: ACM Press
DOI: 10.1145/3011263.3011272

Object Assembly Guidance in Child-Robot Interaction using RGB-D based 3D Tracking

Authors: Jack Hadfield, Petros Koutras, Niki Efthymiou, Gerasimos Potamianos, Costas S. Tzafestas, Petros Maragos
Published in: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018, Page(s) 347-354, ISBN 978-1-5386-8094-0
Publisher: IEEE
DOI: 10.1109/iros.2018.8594187

Batch Policy Iteration for Continuous Domains

Authors: B. Piot, M. Geist and O. Pietquin
Published in: 13th European Workshop on Reinforcement Learning (EWRL), 3-4 December 2016, Barcelona, Spain, 2016
Publisher: EWRL

Towards Building an Attentive Artificial Listener: On the Perception of Attentiveness in Audio-Visual Feedback Tokens

Authors: C. Oertel, J. Lopes, Y. Yu, K. Funes, J. Gustafson, A. Black and J-M. Odobez
Published in: Proceedings of the 18th ACM International Conference on Multimodal Interaction (ICMI), 2016
Publisher: ACM

Softened Approximate Policy Iteration for Markov Games

Authors: Piot, Bilal; Pietquin, Olivier; Scherrer, Bruno; Geist, Matthieu; Pérolat, Julien
Published in: https://hal.inria.fr/hal-01393328, Issue 1, 2016
Publisher: JMLR

Experiences from Long-Term Implementation of Social Robots in Danish Educational Institutions

Authors: F. Kirstein and R. Risager
Published in: Co-Designing Children-Robot-Interaction Workshop at Robophilosophy, 2016
Publisher: IOS Press

FMRI-Based Perceptual Validation of A Computational Model For Visual and Auditory Saliency in Videos

Authors: G. Panagiotaropoulou, P. Koutras, A. Katsamanis, P. Maragos, A. Zlatintsi, A. Protopapas, E. Karavasilis and N. Smyrnis
Published in: International Conference on Image Processing (ICIP), 2016
Publisher: IEEE

Towards Building an Attentive Artificial Listener: On the Perception of Attentiveness in Feedback Utterances

Authors: C. Oertel, J. Gustafson and A. Black
Published in: Interspeech, 2016
Publisher: ISCA

Speech Emotion Recognition Using Affective Saliency

Authors: A. Chorianopoulou, P. Koutsakis and A. Potamianos
Published in: Interspeech, 2016
Publisher: ISCA

A Semantic-Affective Compositional Approach for the Affective Labelling of Adjective-Noun and Noun-Noun Pairs

Authors: E. Palogiannidi, E. Iosif, P. Koutsakis and A. Potamianos
Published in: 7th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, 2016
Publisher: Association for Computational Linguistics

Improved Dictionary Selection and Detection Schemes in Sparse-CNMF-based Overlapping Acoustic Event Detection

Authors: P. Giannoulis, G. Potamianos, P. Maragos, and A. Katsamanis
Published in: Detection and Classification of Acoustic Scenes and Events Workshop, 2016
Publisher: DCASE

Cognitively Motivated Distributional Representations of Meaning

Authors: E. Iosif, S. Georgiladakis and A. Potamianos
Published in: 10th Language Resources and Evaluation Conference, 2016
Publisher: LREC

Tweester: Sentiment Analysis in Twitter using Semantic-Affective Model Adaptation

Authors: E. Palogiannidi, A. Kolovou, F. Christopoulou, F. Kokkinos, E. Iosif, N. Malandrakis, H. Papageorgiou, S. Narayanan and A. Potamianos
Published in: International Workshop on Semantic Evaluation, 2016
Publisher: Association for Computational Linguistics

BabyRobot – Next Generation Social Robots: Enhancing Communication and Collaboration Development of TD and ASD Children by Developing and Commercially Exploiting the Next Generation of Human-Robot Interaction Technologies

Authors: A. Potamianos, C. Tzafestas, E. Iosif, F. Kirstein, P. Maragos, K. Dautenhahn, J. Gustafson, J-E. Østergaard, S. Kopp, P. Wik, O. Pietquin and S. Al Moubayed
Published in: 2nd Workshop on Evaluating Child-Robot Interaction (CRI) at Human-Robot Interaction, 2016
Publisher: CRI

Audio-based Distributional Representations of Meaning Using a Fusion of Feature Encodings

Authors: G. Karamanolakis, E. Iosif, A. Zlatintsi, A. Pikrakis and A. Potamianos
Published in: Interspeech, 2016
Publisher: ISCA

Affective Lexicon Creation for the Greek Language

Authors: E. Palogiannidi, P. Koutsakis, E. Iosif and A. Potamianos
Published in: 10th Language Resources and Evaluation Conference, 2016
Publisher: LREC

Crossmodal Network-Based Distributional Semantic Models

Authors: E. Iosif and A. Potamianos
Published in: 10th Language Resources and Evaluation Conference, 2016
Publisher: LREC

Exploring End-Users’ Needs Towards Personal Robots for Education

Authors: F. Kirstein, T. Rubæk and J-E. Østergaard
Published in: Personal Robot Interaction Workshop at IROS, 2016
Publisher: IROS

A Study of Value Iteration with Non-Stationary Strategies in General Sum Markov Games

Authors: J. Pérolat, B. Piot and O. Pietquin
Published in: NIPS Workshop on Learning, Inference and Control of Multi-Agent Systems, 2016
Publisher: NIPS

Graph-Driven Diffusion and Random Walk Schemes for Image Segmentation

Authors: Christos G. Bampis, Petros Maragos, Alan C. Bovik
Published in: IEEE Transactions on Image Processing, Issue 26/1, 2017, Page(s) 35-50, ISSN 1057-7149
Publisher: Institute of Electrical and Electronics Engineers
DOI: 10.1109/TIP.2016.2621663

Theoretical Analysis of Active Contours on Graphs

Authors: C. Sakaridis, K. Drakopoulos, P. Maragos
Published in: SIAM Journal on Imaging Sciences, 2017, ISSN 1936-4954
Publisher: Society for Industrial and Applied Mathematics

Video-realistic expressive audio-visual speech synthesis for the Greek language

Authors: P. P. Filntisis, A. Katsamanis, P. Tsiakoulis, and P. Maragos
Published in: Speech Communication, 2017, ISSN 0167-6393
Publisher: Elsevier BV

On the Joint Use of NMF and Classification for Overlapping Acoustic Event Detection

Authors: Panagiotis Giannoulis, Gerasimos Potamianos, Petros Maragos
Published in: Proceedings, Issue 2/6, 2018, Page(s) 90, ISSN 2504-3900
Publisher: MDPI AG
DOI: 10.3390/proceedings2020090

Dynamical systems on weighted lattices: general theory

Authors: P. Maragos
Published in: Mathematics of Control, Signals, and Systems, 2017, ISSN 0932-4194
Publisher: Springer Verlag

Online Wideband Spectrum Sensing Using Sparsity

Authors: Lampros Flokas, Petros Maragos
Published in: IEEE Journal of Selected Topics in Signal Processing, Issue 12/1, 2018, Page(s) 35-44, ISSN 1932-4553
Publisher: Institute of Electrical and Electronics Engineers
DOI: 10.1109/jstsp.2018.2797422

Adaptive reinforcement learning with active state-specific exploration for engagement maximization during simulated child-robot interaction

Authors: George Velentzas, Theodore Tsitsimis, Iñaki Rañó, Costas Tzafestas, Mehdi Khamassi
Published in: Paladyn, Journal of Behavioral Robotics, Issue 9/1, 2018, Page(s) 235-253, ISSN 2081-4836
Publisher: De Gruyter
DOI: 10.1515/pjbr-2018-0016

Augmented Human State Estimation Using Interacting Multiple Model Particle Filters With Probabilistic Data Association

Authors: Georgia Chalvatzaki, Xanthi S. Papageorgiou, Costas S. Tzafestas, Petros Maragos
Published in: IEEE Robotics and Automation Letters, Issue 3/3, 2018, Page(s) 1872-1879, ISSN 2377-3766
Publisher: IEEE
DOI: 10.1109/lra.2018.2800084

Robot Fast Adaptation to Changes in Human Engagement During Simulated Dynamic Social Interaction With Active Exploration in Parameterized Reinforcement Learning

Authors: Mehdi Khamassi, George Velentzas, Theodore Tsitsimis, Costas Tzafestas
Published in: IEEE Transactions on Cognitive and Developmental Systems, Issue 10/4, 2018, Page(s) 881-893, ISSN 2379-8920
Publisher: IEEE
DOI: 10.1109/tcds.2018.2843122

Multimodal gesture recognition

Authors: A. Katsamanis, V. Pitsikalis, S. Theodorakis, and P. Maragos
Published in: The Handbook of Multimodal-Multisensor Interfaces, Volume 1: Foundations, User Modeling, and Common Modality Combinations, 2017
Publisher: ACM Books / Morgan-Claypool Publishers

Audio and visual modality combination in speech processing applications

Authors: G. Potamianos, E. Marcheret, Y. Mroueh, V. Goel, A. Koumparoulis, A. Vartholomaios, and S. Thermos
Published in: The Handbook of Multimodal-Multisensor Interfaces, Volume 1: Foundations, User Modeling, and Common Modality Combinations, 2017
Publisher: ACM Books / Morgan-Claypool Publishers

Human-Centered Service Robotic Systems for Assisted Living

Authors: Xanthi S. Papageorgiou, Georgia Chalvatzaki, Athanasios C. Dometios, Costas S. Tzafestas
Published in: Advances in Service and Industrial Robotics - Proceedings of the 27th International Conference on Robotics in Alpe-Adria Danube Region (RAAD 2018), Issue 67, 2019, Page(s) 132-140, ISBN 978-3-030-00231-2
Publisher: Springer International Publishing
DOI: 10.1007/978-3-030-00232-9_14

Piloting Scenarios for Children with Autism to Learn About Visual Perspective Taking

Authors: Luke Jai Wood, Ben Robins, Gabriella Lakatos, Dag Sverre Syrdal, Abolfazl Zaraki, Kerstin Dautenhahn
Published in: Towards Autonomous Robotic Systems - 19th Annual Conference, TAROS 2018, Bristol, UK, July 25-27, 2018, Proceedings, Issue 10965, 2018, Page(s) 260-270, ISBN 978-3-319-96727-1
Publisher: Springer International Publishing
DOI: 10.1007/978-3-319-96728-8_22

Challenges in Synchronized Behavior Realization for Different Robotic Embodiments

Authors: I. de Kok, J. Hemminghaus, and S. Kopp
Published in: 2017
Publisher: UNIBI

A Deep Learning Approach for Multi-View Engagement Estimation of Children in a Child-Robot Joint Attention task

Authors: Jack Hadfield, Georgia Chalvatzaki, Petros Koutras, Mehdi Khamassi, Costas S. Tzafestas, and Petros Maragos
Published in: 2018
Publisher: arXiv

Pattern Search Multidimensional Scaling

Authors: Paraskevopoulos, Georgios; Tzinis, Efthymios; Vlatakis-Gkaragkounis, Emmanuel-Vasileios; Potamianos, Alexandros
Published in: Issue 1, 2018
Publisher: arXiv

Audio-based Distributional Semantic Models for Music Auto-tagging and Similarity Measurement

Authors: G. Karamanolakis, E. Iosif, A. Zlatintsi, A. Pikrakis and A. Potamianos
Published in: 2016
Publisher: arXiv

Active Exploration in Parameterized Reinforcement Learning

Authors: M. Khamassi and C. Tzafestas
Published in: 2016
Publisher: arXiv
