
Child-Robot Communication and Collaboration: Edutainment, Behavioural Modelling and Cognitive Development in Typically Developing and Autistic Spectrum Children

Final results

Free data deliverable: gesture, speech and behavioral data

The free data deliverable consists of: 1) annotated gestural video data, 2) transcribed spoken dialogue (audio) data, 3) behavioural data labelled with affect and cognitive state, and 4) audio-visual data annotated with actions and intent from the BabyRobot use cases. The amount of data to be transcribed will be defined by M12, based on the needs for training and evaluating statistical models for use cases 1, 2 and 3. Making audio-visual recordings of children public raises many privacy-related, legal and ethical issues; for this reason, only secondary/derived data (annotations, transcriptions, audio-visual features, etc.) will be made publicly available.
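
For illustration only, the sketch below shows one way such secondary/derived annotations might be organised and parsed in Python; the file schema, column names and affect labels are invented for the example and are not part of the deliverable specification.

```python
# Hypothetical example of the kind of derived (secondary) data released:
# per-segment affect labels with time stamps and transcripts. The column
# names and label values below are invented for illustration.
import csv, io
from collections import Counter

SAMPLE = """session_id,start_s,end_s,affect,transcript
uc1_s01,0.0,2.4,engaged,put the blue brick here
uc1_s01,2.4,5.1,neutral,hmm
"""

def load_affect_annotations(stream):
    """Parse per-segment affect/cognitive-state annotations from CSV."""
    return [
        {
            "session_id": row["session_id"],
            "start_s": float(row["start_s"]),
            "end_s": float(row["end_s"]),
            "affect": row["affect"],
            "transcript": row["transcript"],
        }
        for row in csv.DictReader(stream)
    ]

segments = load_affect_annotations(io.StringIO(SAMPLE))
print(Counter(s["affect"] for s in segments))
```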

Open-source software for robot learning and behaviour-based control

Open-source software for robot learning and behaviour-based control will include algorithms for imitation learning based on inverse reinforcement learning, structured classification algorithms, and a behaviour-based control module for the humanoid robot platforms that incorporates different levels of abstraction, including both symbolic and sub-symbolic layers.
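
As a rough, hedged illustration of the layered control idea described above (not the project's actual software), the Python sketch below shows a symbolic layer that selects an abstract goal and sub-symbolic behaviours that turn observations into low-level commands; all names, behaviours and thresholds are invented.

```python
# Illustrative sketch of a two-layer behaviour-based controller:
# a symbolic layer picks an abstract goal, and sub-symbolic behaviours
# map raw observations to low-level commands. Names are hypothetical.
from dataclasses import dataclass

@dataclass
class Observation:
    child_visible: bool
    child_distance_m: float

def look_for_child(obs):          # sub-symbolic behaviour
    return {"head_pan_rad": 0.3}  # sweep the head

def approach_child(obs):          # sub-symbolic behaviour
    speed = min(0.2, 0.1 * obs.child_distance_m)
    return {"forward_m_s": speed}

def idle(obs):                    # sub-symbolic behaviour
    return {"forward_m_s": 0.0}

def symbolic_layer(obs):
    """Map the observation to an abstract goal (symbolic decision)."""
    if not obs.child_visible:
        return "search"
    return "engage" if obs.child_distance_m > 1.0 else "interact"

BEHAVIOURS = {"search": look_for_child, "engage": approach_child, "interact": idle}

def control_step(obs):
    goal = symbolic_layer(obs)    # symbolic layer chooses the goal
    return BEHAVIOURS[goal](obs)  # sub-symbolic layer produces the command

print(control_step(Observation(child_visible=True, child_distance_m=2.5)))
```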

Open-source software for multimodal, multiparty human-robot interaction

Open-source software for multimodal, multiparty human-robot interaction will include significant portions of the communication and collaboration modules (including the latest version of the IRIS dialogue management software), as well as the majority of the codebase for the KASPAR use case (use case 3).
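
The sketch below is a purely hypothetical illustration of multiparty turn handling, not the IRIS/IrisTK API or the released codebase: speech events carry a speaker identity, and the manager uses per-participant state to decide whom to address next. All class and method names are invented.

```python
# Hypothetical sketch of multiparty turn handling: events carry a speaker id,
# and the manager tracks per-participant state to choose whom to address next.
# This is NOT the IrisTK/IRIS API; names are illustrative only.
from collections import defaultdict

class MultipartyDialogueManager:
    def __init__(self, participants):
        self.turns_taken = defaultdict(int)
        self.participants = list(participants)

    def on_speech(self, speaker, utterance):
        """Handle a recognised utterance attributed to one participant."""
        self.turns_taken[speaker] += 1
        addressee = self.least_active()   # encourage balanced participation
        return f"{addressee}, what do you think about '{utterance}'?"

    def least_active(self):
        return min(self.participants, key=lambda p: self.turns_taken[p])

dm = MultipartyDialogueManager(["child_A", "child_B"])
print(dm.on_speech("child_A", "the red brick goes on top"))
```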

Open-source software for socio-affective state monitoring and visual tracking

Open-source software for socio-affective state monitoring and visual tracking will include software for emotion recognition from speech, emotion recognition from text, and visual tracking from video.
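
As a hedged sketch of the general pattern such a module follows (not the released software), the example below trains a classifier over utterance-level acoustic features and affect labels; the features and labels are random placeholders standing in for real data, and scikit-learn is used purely for illustration.

```python
# Illustrative pattern for speech emotion recognition (not the project's code):
# utterance-level acoustic features -> supervised classifier over affect labels.
# Features/labels below are random placeholders standing in for real data.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 13))   # e.g. 13 MFCC-like features per utterance
y = rng.choice(["neutral", "happy", "frustrated"], size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
model.fit(X_tr, y_tr)
print("held-out accuracy:", model.score(X_te, y_te))
```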

Interim Progress Report on audio-visual processing and behavioural informatics

Interim progress report on the development of the core audio-visual processing technology for extracting information from the microphone and cameras on the robot. These technologies are intended to enable robots that analyse and track human behaviour over time in the situated context of their surroundings, in order to establish common ground and intention-reading capabilities.
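
Purely as a simplified, hypothetical illustration of temporal audio-visual fusion (not the actual processing pipeline), the sketch below smooths per-frame audio and visual cues into an engagement-style score over time; the fusion weights and smoothing factor are invented.

```python
# Hypothetical sketch: fuse per-frame audio (speech activity) and visual
# (gaze-toward-robot) cues with an exponential moving average to obtain a
# smoothed engagement score over time. All weights/thresholds are invented.
def fuse_engagement(audio_speech_prob, visual_gaze_prob, alpha=0.2):
    """Return a list of smoothed engagement scores, one per frame."""
    score, history = 0.0, []
    for a, v in zip(audio_speech_prob, visual_gaze_prob):
        frame = 0.4 * a + 0.6 * v          # simple weighted fusion
        score = (1 - alpha) * score + alpha * frame
        history.append(score)
    return history

audio = [0.1, 0.8, 0.9, 0.2, 0.0]
gaze  = [0.9, 0.9, 0.7, 0.3, 0.1]
print([round(s, 2) for s in fuse_engagement(audio, gaze)])
```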

Initial Report on Dissemination, Exploitation and Intellectual Property

Initial report on: (i) dissemination activities towards industry and academia, (ii) exploitation actions, and (iii) the management of intellectual property issues.

Interim Report on multiparty child-robot collaboration and learning

Interim report on the design, development, and evaluation of the multiparty communication and collaboration capabilities of the KASPAR robot. These capabilities will be grounded on child-child and child-robot interactive scenarios that constitute the third use case.

Interim Report on human-robot interaction and communication

Interim report on the design, development, and evaluation of the communicative and interaction capabilities of the robot using both the gestural and spoken dialogue modalities. These capabilities will be grounded on child-robot interactive scenarios that constitute the first use case.

Interim Progress Report on core robotic functionality

Interim progress report on the methods and software modules needed to endow the humanoid robot platforms with core functionalities, focusing mainly on: (i) gestural kinematics, (ii) environment interaction skills, (iii) imitation learning, (iv) behaviour-based robot control architecture.

System Architecture, Use Case 1 Specification and Data Collection Protocols

Definition of the architecture and functionality of the communication and interaction modules of the robotic platform. Definition of the interaction scenarios for use case 1. Definition of the data collection protocols for use case 1.

Final Report on child-robot communication and learning

Final report on the design, development, and evaluation of the communication and learning capabilities of the ZENO robot. These capabilities will be grounded on child-robot interactive scenarios that constitute the second use case.

Initial Progress Report on Core Robotic Functionality

Initial progress report on the methods and software modules needed to endow the humanoid robot platforms with core functionalities, focusing mainly on: (i) gestural kinematics, (ii) environment interaction skills, (iii) imitation learning, (iv) behaviour-based robot control architecture.

Final Report on multiparty child-robot collaboration and learning

Final report on the design, development, and evaluation of the multiparty communication and collaboration capabilities of the KASPAR robot. These capabilities will be grounded on child-child and child-robot interactive scenarios that constitute the third use case.

Final Report on core robotic functionality

Final report on the methods and software modules needed to endow the humanoid robot platforms with core functionalities, focusing mainly on: (i) gestural kinematics, (ii) environment interaction skills, (iii) imitation learning, (iv) behaviour-based robot control architecture.

Final Report on human-robot interaction and communication

Final report on the design, development, and evaluation of the communicative and interaction capabilities of the robot using both the gestural and spoken dialogue modalities. These capabilities will be grounded on child-robot interactive scenarios that constitute the first use case.

Initial Report on Child-Robot Communication and Learning

Initial report on the design, development, and evaluation of the communication and learning capabilities of the ZENO robot. These capabilities will be grounded on child-robot interactive scenarios that constitute the second use case.

Final Report on audio-visual processing and behavioral informatics

Final report on the development of the core audio-visual processing technology for extracting information from the microphone and cameras on the robot. These technologies are intended to enable robots that analyse and track human behaviour over time in the situated context of their surroundings, in order to establish common ground and intention-reading capabilities.

Initial Report on Human-Robot Interaction and Communication

Initial report on the design, development, and evaluation of the communicative and interaction capabilities of the robot using both the gestural and spoken dialogue modalities. These capabilities will be grounded on child-robot interactive scenarios that constitute the first use case.

Use Case 2 Specification and Data Collection Protocols

Definition of the communication, collaboration, and language learning tasks of use case 2. Definition of the data collection protocols for use case 2.

Initial Progress Report on Audio-Visual Processing and Behavioral Informatics

Initial progress report on the development of the core audio-visual processing technology for extracting information from the microphone and cameras on the robot. These technologies are intended to enable robots that analyse and track human behaviour over time in the situated context of their surroundings, in order to establish common ground and intention-reading capabilities.

Interim Report on child-robot communication and learning

Interim report on the design, development, and evaluation of the communication and learning capabilities of the ZENO robot. These capabilities will be grounded on child-robot interactive scenarios that constitute the second use case.

Interim Report on Dissemination, Exploitation and Intellectual Property

Interim report on: (i) dissemination activities towards industry and academia, (ii) exploitation actions, and (iii) the management of intellectual property issues.

Final Report on Dissemination, Exploitation and Intellectual Property

Final report on: (i) dissemination activities towards industry and academia, (ii) exploitation actions, and (iii) the management of intellectual property issues.

Updated Dissemination and Exploitation Plan

Updated dissemination plan in order to establish the means and procedures to communicate the scientific and technical advancements of the project. Updated exploitation plan for all the technologies, data pool, robotic platform and services developed in the project.

Use Case 3 Specification and Initial Report on multiparty child-robot collaboration and learning

Specification of use case 3 using the KASPAR robot. Specification of data collection and analysis. Initial report on the design, development, and evaluation of the multiparty communication and collaboration capabilities of the KASPAR robot. These capabilities will be grounded on child-child and child-robot interactive scenarios that constitute the third use case.

Demonstration of communication skills learning for TD and ASD children use case 2 scenario

Demonstration of communication skills learning for TD and ASD children in the use case 2 scenario, where the ZENO robot can “play” with the child both in a structured way and, more naturally, in a creative sense, demonstrating that the robot and child can share intentions towards a common goal. The robot supports the child in solving tasks that involve building specific things with bricks. The interaction will be videotaped and showcased at the project website.

Demonstration of communication skills learning use case 3 scenario

Demonstration of communication skills learning in the use case 3 scenario, where we study the learning and communication of children with ASD in dyadic and triadic interaction games, using verbal and non-verbal modes of interaction, in which children play computer games with KASPAR and/or other people. The interaction will be videotaped and showcased at the project website.

Demonstration of joint attention and common grounding use case 1 paradigm

Demonstration of the joint attention and common grounding use case 1 paradigm, where a group of children and the FurHat robot interact around virtual and physical objects on an interactive table. This is a collaboration exercise that requires joint attention. The demonstration will be videotaped and showcased at the project website.

Web Site

Development of the project web site.

Publications

Tweester at SemEval-2017 Task 4: Fusion of Semantic-Affective and Pairwise Classification Models for Sentiment Analysis in Twitter

Authors: A. Kolovou, F. Kokkinos, A. Fergadis, P. Papalampidi, E. Iosif, N. Malandrakis, E. Palogiannidi, H. Papageorgiou, S. Narayanan and A. Potamianos
Published in: 11th International Workshop on Semantic Evaluation (SemEval), 2017
Publisher: Association for Computational Linguistics

Teaching a Robot how to Guide Attention in Child-Robot Learning Interactions

Authors: J. Hemminghaus, L. Hoffmann, and S. Kopp
Published in: 5th European and 8th Nordic Symposium on Multimodal Communication, 2017
Publisher: UNIBI

Bio–inspired Meta–learning for Active Exploration During Non–stationary Multi–armed Bandit Tasks

Authors: G. Velentzas, M. Khamassi and C. Tzafestas
Published in: Intelligent Systems Conference (IntelliSys), 2017
Publisher: -

Audio-based Distributional Semantic Models for Music Auto-tagging and Similarity Measurement

Authors: G. Karamanolakis, E. Iosif, A. Zlatintsi, A. Pikrakis and A. Potamianos
Published in: 25th EUSIPCO 2017, MultiLearn Workshop, 2017
Publisher: EURASIP

Active Exploration and Parameterized Reinforcement Learning Applied to a Simulated Human-Robot Interaction Task

Authors: M. Khamassi, G. Velentzas, T. Tsitsimis and C. Tzafestas
Published in: IEEE International Conference on Robotic Computing (Robotic Computing 2017), 2017
Publisher: IEEE

Photorealistic Adaptation and Interpolation of Facial Expressions Using HMMs and AAMs for Audio-Visual Speech Synthesis

Authors: P. Filntisis, A. Katsamanis and P. Maragos
Published in: Int'l Conf. Image Processing (ICIP-2017), 2017
Publisher: IEEE SigPort

HMM-based Pathological Gait Analyzer for a User-Adaptive Intelligent Robotic Walker

Authors: G. Chalvatzaki, X. S. Papageorgiou, C. S. Tzafestas, and P. Maragos
Published in: 25th European Signal Processing Conference – Workshop "MultiLearn 2017 – Multimodal processing, modeling and learning for human-computer/robot interaction applications", 2017
Publisher: EURASIP

Utilising humanoid robots to assist children with autism learn about Visual Perspective Taking

Authors: L. Wood, B. Robins, K. Dautenhahn, G. Lakatos, D. Syrdal and A. Zaraki
Published in: The UK-RAS Network Conference On Robotics And Autonomous Systems: Robots Working For And Among Us, 2017
Publisher: UK-RAS Network

Crowd-sourced design of artificial attentive listeners

Authors: C. Oertel, P. Jonell, D. Kontogiorgos, J. Mendelson, J. Beskow, and J. Gustafson
Published in: INTERSPEECH: Situated Interaction, 2017
Publisher: ISCA

Sensory-Aware Multimodal Fusion for Word Semantic Similarity Estimation

Authors: G. Paraskevopoulos, G. Karamanolakis, E. Iosif, A. Pikrakis and A. Potamianos
Published in: 25th European Signal Processing Conference – Workshop "MultiLearn 2017 – Multimodal processing, modeling and learning for human-computer/robot interaction applications", 2017
Publisher: EURASIP

Design, Implementation and Experimental Evaluation of an IrisTK-Based Deliberative-Reactive Control Architecture for Autonomous Child-Robot Interaction in Real-World Settings

Authors: A. Zaraki, L. Wood, B. Robins, and K. Dautenhahn
Published in: The UK-RAS Network Conference On Robotics And Autonomous Systems: Robots Working For And Among Us, 2017
Publisher: UK-RAS Network

How to Manage Affective State in Child-Robot Tutoring Interactions?

Authors: T. Schodde, L. Hoffmann, and S. Kopp
Published in: IEEE International Conference on Companion Technology (ICCT 2017), 2017
Publisher: IEEE

Toward Autonomous Child-Robot Interaction: Development of an Interactive Architecture for the Humanoid Kaspar Robot

Authors: A. Zaraki, K. Dautenhahn, L. Wood, O. Novanda and B. Robins
Published in: 3rd Workshop on Child-Robot Interaction (CRI2017) at the International Conference on Human-Robot Interaction (ACM/IEEE HRI 2017), 2017
Publisher: -

Towards Adaptive Social Behavior Generation for Assistive Robots Using Reinforcement Learning

Authors: J. Hemminghaus and S. Kopp
Published in: International Conference on Human-Robot Interaction (ACM/IEEE HRI 2017), 2017
Publisher: ACM

Learning Nash Equilibrium for General-Sum Markov Games from Batch Data

Authors: J. Pérolat, F. Strub, B. Piot and O. Pietquin
Published in: 20th International Conference on Artificial Intelligence and Statistics (AISTATS), 2017
Publisher: MLR

Lexical and Affective Models in Early Acquisition of Semantics

Authors: A. Kolovou, E. Iosif and A. Potamianos
Published in: 6th International Workshop on Child Computer Interaction (WOCCI), 2017
Publisher: ISCA

The Iterative Development of the Humanoid Robot Kaspar: An Assistive Robot for Children with Autism

Authors: B. Robins, K. Dautenhahn
Published in: Social Robotics: 9th International Conference (ICSR 2017), 2017, Proceedings Vol. 10652, p. 53
Publisher: Springer

Segment-based Speech Emotion Recognition Using Recurrent Neural Networks

Authors: E. Tzinis and A. Potamianos
Published in: International Conference on Affective Computing and Intelligent Interaction (ACII), 2017
Publisher: IEEE

Predicting and Regulating Participation Equality in Human-robot Conversations: Effects of Age and Gender

Authors: G. Skantze
Published in: International Conference on Human-Robot Interaction (ACM/IEEE HRI 2017), 2017
Publisher: ACM Digital Library

Developing Interaction Scenarios with a Humanoid Robot to Encourage Visual Perspective Taking Skills in Children with Autism–Preliminary Proof of Concept Tests

Authors: B. Robins, K. Dautenhahn, L. Wood and A. Zaraki
Published in: Social Robotics: 9th International Conference (ICSR 2017), 2017
Publisher: Springer, Cham

Towards a User-Adaptive Context-Aware Robotic Walker with a Pathological Gait Assessment System: First Experimental Study

Authors: G. Chalvatzaki, X. S. Papageorgiou, C. S. Tzafestas
Published in: International Conference on Intelligent Robotics, 2017
Publisher: IEEE

Morphological Perceptrons: Geometry and Training Algorithms

Authors: V. Charisopoulos and P. Maragos
Published in: International Symposium on Mathematical Morphology and Its Applications to Signal and Image Processing (ISMM), 2017
Publisher: Springer

Exploring ROI size in deep learning based lipreading

Authors: A. Koumparoulis, G. Potamianos, Y. Mroueh, and S. J. Rennie
Published in: Int. Conf. on Auditory-Visual Speech Processing (AVSP), 2017
Publisher: AVSP

Developing child-robot interaction scenarios with a humanoid robot to assist children with autism

Authors: L. Wood, K. Dautenhahn, B. Robins, and A. Zaraki
Published in: 26th IEEE International Symposium on Robot and Human Interactive Communication (Ro-Man), 2017
Publisher: IEEE

Structural Attention Neural Networks for Improved Sentiment Analysis

Authors: F. Kokkinos and A. Potamianos
Published in: 15th Conference of the European Chapter of the Association for Computational Linguistics (EACL), 2017
Publisher: ACL Anthology

Engagement Detection for Children with Autism Spectrum Disorder

Authors: A. Chorianopoulou, E. Tzinis, E. Iosif, A. Papoulidi, C. Papailiou and A. Potamianos
Published in: IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2017
Publisher: IEEE

Dialogue Act Semantic Representation and Classification Using Recurrent Neural Networks

Authors: P. Papalampidi, E. Iosif and A. Potamianos
Published in: 21st Workshop on the Semantics and Pragmatics of Dialogue (SemDial), 2017
Publisher: ISCA

Multi3: Multi-Sensory Perception System for Multi-Modal Child Interaction with Multiple Robots

Authors: Antigoni Tsiami, Petros Koutras, Niki Efthymiou, Panagiotis Paraskevas Filntisis, Gerasimos Potamianos, Petros Maragos
Published in: 2018 IEEE International Conference on Robotics and Automation (ICRA), 2018, Pages 1-8, ISBN 978-1-5386-3081-5
Publisher: IEEE
DOI: 10.1109/icra.2018.8461210

Development of a Semi-Autonomous Robotic System to Assist Children with Autism in Developing Visual Perspective Taking Skills

Authors: A. Zaraki, L.J. Wood, B. Robins, K. Dautenhahn
Published in: 27th IEEE International Symposium on Robot and Human Interactive Communication (Ro-Man 2018), 2018
Publisher: IEEE

Actor-Critic Fictitious Play in Simultaneous Move Multistage Games

Authors: Pérolat, Julien; Piot, Bilal; Pietquin, Olivier
Published in: AISTATS 2018 - 21st International Conference on Artificial Intelligence and Statistics, Apr 2018, Playa Blanca, Lanzarote, Canary Islands, Spain, Issue 1, 2018
Publisher: AISTATS

A Novel Paradigm for Typically Developing and Autistic Children as Teachers to the Kaspar Robot Learner

Authors: A. Zaraki, M. Khamassi, L.J. Wood, G. Lakatos, C. Tzafestas, B. Robins, K. Dautenhahn
Published in: 3rd Workshop on Behavior Adaptation, Interaction and Learning for Assistive Robotics in Conjunction with IEEE Ro-Man, 2018
Publisher: IEEE

Audio-visual speech activity detection in a two-speaker scenario incorporating depth information from a profile or frontal view

Authors: Spyridon Thermos, Gerasimos Potamianos
Published in: 2016 IEEE Spoken Language Technology Workshop (SLT), 2016, Pages 579-584, ISBN 978-1-5090-4903-5
Publisher: IEEE
DOI: 10.1109/slt.2016.7846321

A Novel Paradigm for Typically Developing and Autistic Children as Teachers to the Kaspar Robot Learner

Authors: Abolfazl Zaraki, Mehdi Khamassi, Luke Wood, Gabriella Lakatos, Costas Tzafestas, Ben Robins and Kerstin Dautenhahn
Published in: 3rd Workshop on Behavior Adaptation, Interaction and Learning for Assistive Robotics (BAILAR 2018), 2018
Publisher: BAILAR

NTUA-SLP at SemEval-2018 Task 1: Predicting Affective Content in Tweets with Deep Attentive RNNs and Transfer Learning

Authors: Baziotis, Christos; Athanasiou, Nikos; Chronopoulou, Alexandra; Kolovou, Athanasia; Paraskevopoulos, Georgios; Ellinas, Nikolaos; Narayanan, Shrikanth; Potamianos, Alexandros
Published in: Issue 6, 2018
Publisher: arXiv

A Hybrid Approach to Hand Detection and Type Classification in Upper-Body Videos

Authors: K. Papadimitriou and G. Potamianos
Published in: 7th European Workshop on Visual Information Processing (EUVIP), 2018
Publisher: IEEE

How an edutainment robot can help small children to study

Authors: Diana Trifon, Ana Maria Macovetchi, Mamoun Gharbi, Franziska Kirstein
Published in: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018
Publisher: IEEE

NTUA-SLP at SemEval-2018 Task 2: Predicting Emojis using RNNs with Context-aware Attention

Authors: Baziotis, Christos; Athanasiou, Nikos; Paraskevopoulos, Georgios; Ellinas, Nikolaos; Kolovou, Athanasia; Potamianos, Alexandros
Published in: Issue 1, 2018
Publisher: arXiv

NTUA-SLP at SemEval-2018 Task 3: Tracking Ironic Tweets using Ensembles of Word and Character Level Attentive RNNs

Authors: Baziotis, Christos; Athanasiou, Nikos; Papalampidi, Pinelopi; Kolovou, Athanasia; Paraskevopoulos, Georgios; Ellinas, Nikolaos; Potamianos, Alexandros
Published in: Issue 1, 2018
Publisher: arXiv

An extended framework for robot learning during child-robot interaction with human engagement as reward signal

Authors: Mehdi Khamassi, Georgia Chalvatzaki, Theodore Tsitsimis, Georgios Velentzas, Costas S. Tzafestas
Published in: 3rd Workshop on Behavior Adaptation, Interaction and Learning for Assistive Robotics (BAILAR 2018), 2018
Publisher: BAILAR

Multi-View Fusion for Action Recognition in Child-Robot Interaction

Authors: Niki Efthymiou, Petros Koutras, Panagiotis Paraskevas Filntisis, Gerasimos Potamianos, Petros Maragos
Published in: 2018 25th IEEE International Conference on Image Processing (ICIP), 2018, Pages 455-459, ISBN 978-1-4799-7061-2
Publisher: IEEE
DOI: 10.1109/icip.2018.8451146

Audio-Visual Temporal Saliency Modeling Validated by fMRI Data

Authors: P. Koutras, G. Panagiotaropoulou, A. Tsiami and P. Maragos
Published in: IEEE Conf. on Computer Vision and Pattern Recognition Workshops (CVPRW), 2018
Publisher: CVPR 2018 open access

Far-Field Audio-Visual Scene Perception of Multi-Party Human-Robot Interaction for Children and Adults

Authors: Antigoni Tsiami, Panagiotis Paraskevas Filntisis, Niki Efthymiou, Petros Koutras, Gerasimos Potamianos, Petros Maragos
Published in: 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2018, Pages 6568-6572, ISBN 978-1-5386-4658-8
Publisher: IEEE
DOI: 10.1109/icassp.2018.8462425

The Peculiarities of Robot Embodiment (EmCorp-Scale) - Development, Validation and Initial Test of the Embodiment and Corporeality of Artificial Agents Scale

Authors: Laura Hoffmann, Nikolai Bock, Astrid M. Rosenthal v.d. Pütten
Published in: Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction - HRI '18, 2018, Pages 370-378, ISBN 978-1-4503-4953-6
Publisher: ACM Press
DOI: 10.1145/3171221.3171242

Multimodal Visual Concept Learning with Weakly Supervised Techniques

Authors: G. Bouritsas, P. Koutras, A. Zlatintsi and P. Maragos
Published in: IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2018
Publisher: openaccess.thecvf.com

Deep view2view mapping for view-invariant lipreading

Authors: A. Koumparoulis and G. Potamianos
Published in: 2018
Publisher: IEEE

User-Adaptive Human-Robot Formation Control for an Intelligent Robotic Walker Using Augmented Human State Estimation and Pathological Gait Characterization

Authors: Georgia Chalvatzaki, Xanthi S. Papageorgiou, Petros Maragos, Costas S. Tzafestas
Published in: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018, Pages 6016-6022, ISBN 978-1-5386-8094-0
Publisher: IEEE
DOI: 10.1109/iros.2018.8594360

Integrating Recurrence Dynamics for Speech Emotion Recognition

Authors: Efthymios Tzinis, Georgios Paraskevopoulos, Christos Baziotis, Alexandros Potamianos
Published in: Interspeech 2018, 2018, Pages 927-931
Publisher: ISCA
DOI: 10.21437/interspeech.2018-1377

A Framework for Robot Learning During Child-Robot Interaction with Human Engagement as Reward Signal

Authors: M. Khamassi, G. Chalvatzaki, T. Tsitsimis, G. Velentzas, C. Tzafestas
Published in: 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), 2018, Pages 461-464, ISBN 978-1-5386-7980-7
Publisher: IEEE
DOI: 10.1109/roman.2018.8525598

Neural Activation Semantic Models: Computational lexical semantic models of localized neural activations

Authors: N. Athanasiou, E. Iosif and A. Potamianos
Published in: 27th International Conference on Computational Linguistics (COLING), 2018
Publisher: COLING

On data driven parametric backchannel synthesis for expressing attentiveness in conversational agents

Authors: Catharine Oertel, Joakim Gustafson, Alan W. Black
Published in: Proceedings of the Workshop on Multimodal Analyses enabling Artificial Agents in Human-Machine Interaction - MA3HMI '16, 2016, Pages 43-47, ISBN 978-1-4503-4562-0
Publisher: ACM Press
DOI: 10.1145/3011263.3011272

Object Assembly Guidance in Child-Robot Interaction using RGB-D based 3D Tracking

Authors: Jack Hadfield, Petros Koutras, Niki Efthymiou, Gerasimos Potamianos, Costas S. Tzafestas, Petros Maragos
Published in: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018, Pages 347-354, ISBN 978-1-5386-8094-0
Publisher: IEEE
DOI: 10.1109/iros.2018.8594187

Batch Policy Iteration for Continuous Domains

Authors: B. Piot, M. Geist and O. Pietquin
Published in: 13th European Workshop on Reinforcement Learning (EWRL), 3-4 December 2016, Barcelona, Spain, 2016
Publisher: EWRL

Towards Building an Attentive Artificial Listener: On the Perception of Attentiveness in Audio-Visual Feedback Tokens

Authors: C. Oertel, J. Lopes, Y. Yu, K. Funes, J. Gustafson, A. Black and J-M. Odobez
Published in: Proceedings of the 18th ACM International Conference on Multimodal Interaction (ICMI), 2016
Publisher: ACM

Softened Approximate Policy Iteration for Markov Games

Authors: Piot, Bilal; Pietquin, Olivier; Scherrer, Bruno; Geist, Matthieu; Pérolat, Julien
Published in: https://hal.inria.fr/hal-01393328, Issue 1, 2016
Publisher: JMLR

Experiences from Long-Term Implementation of Social Robots in Danish Educational Institutions

Authors: F. Kirstein and R. Risager
Published in: Co-Designing Children-Robot-Interaction Workshop at Robophilosophy, 2016
Publisher: IOS Press

FMRI-Based Perceptual Validation of A Computational Model For Visual and Auditory Saliency in Videos

Authors: G. Panagiotaropoulou, P. Koutras, A. Katsamanis, P. Maragos, A. Zlatintsi, A. Protopapas, E. Karavasilis and N. Smyrnis
Published in: International Conference on Image Processing (ICIP), 2016
Publisher: IEEE

Towards Building an Attentive Artificial Listener: On the Perception of Attentiveness in Feedback Utterances

Authors: C. Oertel, J. Gustafson and A. Black
Published in: Interspeech, 2016
Publisher: ISCA

Speech Emotion Recognition Using Affective Saliency

Authors: A. Chorianopoulou, P. Koutsakis and A. Potamianos
Published in: Interspeech, 2016
Publisher: ISCA

A Semantic-Affective Compositional Approach for the Affective Labelling of Adjective-Noun and Noun-Noun Pairs

Authors: E. Palogiannidi, E. Iosif, P. Koutsakis and A. Potamianos
Published in: 7th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, 2016
Publisher: Association for Computational Linguistics

Improved Dictionary Selection and Detection Schemes in Sparse-CNMF-based Overlapping Acoustic Event Detection

Authors: P. Giannoulis, G. Potamianos, P. Maragos, and A. Katsamanis
Published in: Detection and Classification of Acoustic Scenes and Events Workshop, 2016
Publisher: DCASE

Cognitively Motivated Distributional Representations of Meaning

Authors: E. Iosif, S. Georgiladakis and A. Potamianos
Published in: 10th Language Resources and Evaluation Conference, 2016
Publisher: LREC

Tweester: Sentiment Analysis in Twitter using Semantic-Affective Model Adaptation

Authors: E. Palogiannidi, A. Kolovou, F. Christopoulou, F. Kokkinos, E. Iosif, N. Malandrakis, H. Papageorgiou, S. Narayanan and A. Potamianos
Published in: International Workshop on Semantic Evaluation, 2016
Publisher: Association for Computational Linguistics

BabyRobot – Next Generation Social Robots: Enhancing Communication and Collaboration Development of TD and ASD Children by Developing and Commercially Exploiting the Next Generation of Human-Robot Interaction Technologies

Authors: A. Potamianos, C. Tzafestas, E. Iosif, F. Kirstein, P. Maragos, K. Dautenhahn, J. Gustafson, J-E. Østergaard, S. Kopp, P. Wik, O. Pietquin and S. Al Moubayed
Published in: 2nd Workshop on Evaluating Child-Robot Interaction (CRI) at Human-Robot Interaction, 2016
Publisher: CRI

Audio-based Distributional Representations of Meaning Using a Fusion of Feature Encodings

Authors: G. Karamanolakis, E. Iosif, A. Zlatintsi, A. Pikrakis and A. Potamianos
Published in: Interspeech, 2016
Publisher: ISCA

Affective Lexicon Creation for the Greek Language

Authors: E. Palogiannidi, P. Koutsakis, E. Iosif and A. Potamianos
Published in: 10th Language Resources and Evaluation Conference, 2016
Publisher: LREC

Crossmodal Network-Based Distributional Semantic Models

Authors: E. Iosif and A. Potamianos
Published in: 10th Language Resources and Evaluation Conference, 2016
Publisher: LREC

Exploring End-Users’ Needs Towards Personal Robots for Education

Authors: F. Kirstein, T. Rubæk and J-E. Østergaard
Published in: Personal Robot Interaction Workshop at IROS, 2016
Publisher: IROS

A Study of Value Iteration with Non-Stationary Strategies in General Sum Markov Games

Authors: J. Pérolat, B. Piot and O. Pietquin
Published in: NIPS Workshop on Learning, Inference and Control of Multi-Agent Systems, 2016
Publisher: NIPS

Graph-Driven Diffusion and Random Walk Schemes for Image Segmentation

Authors: Christos G. Bampis, Petros Maragos, Alan C. Bovik
Published in: IEEE Transactions on Image Processing, Issue 26/1, 2017, Pages 35-50, ISSN 1057-7149
Publisher: Institute of Electrical and Electronics Engineers
DOI: 10.1109/TIP.2016.2621663

Theoretical Analysis of Active Contours on Graphs

Authors: C. Sakaridis, K. Drakopoulos, P. Maragos
Published in: SIAM Journal on Imaging Sciences (Society for Industrial and Applied Mathematics), 2017, ISSN 1936-4954
Publisher: Society for Industrial and Applied Mathematics

Video-realistic expressive audio-visual speech synthesis for the Greek language

Authors: P. P. Filntisis, A. Katsamanis, P. Tsiakoulis, and P. Maragos
Published in: Speech Communication, 2017, ISSN 0167-6393
Publisher: Elsevier BV

On the Joint Use of NMF and Classification for Overlapping Acoustic Event Detection

Authors: Panagiotis Giannoulis, Gerasimos Potamianos, Petros Maragos
Published in: Proceedings, Issue 2/6, 2018, Page 90, ISSN 2504-3900
Publisher: MDPI AG
DOI: 10.3390/proceedings2020090

Dynamical systems on weighted lattices: general theory

Authors: P. Maragos
Published in: Mathematics of Control, Signals, and Systems, 2017, ISSN 0932-4194
Publisher: Springer Verlag

Online Wideband Spectrum Sensing Using Sparsity

Authors: Lampros Flokas, Petros Maragos
Published in: IEEE Journal of Selected Topics in Signal Processing, Issue 12/1, 2018, Pages 35-44, ISSN 1932-4553
Publisher: Institute of Electrical and Electronics Engineers
DOI: 10.1109/jstsp.2018.2797422

Adaptive reinforcement learning with active state-specific exploration for engagement maximization during simulated child-robot interaction

Authors: George Velentzas, Theodore Tsitsimis, Iñaki Rañó, Costas Tzafestas, Mehdi Khamassi
Published in: Paladyn, Journal of Behavioral Robotics, Issue 9/1, 2018, Pages 235-253, ISSN 2081-4836
Publisher: Paladyn, Journal of Behavioral Robotics
DOI: 10.1515/pjbr-2018-0016

Augmented Human State Estimation Using Interacting Multiple Model Particle Filters With Probabilistic Data Association

Authors: Georgia Chalvatzaki, Xanthi S. Papageorgiou, Costas S. Tzafestas, Petros Maragos
Published in: IEEE Robotics and Automation Letters, Issue 3/3, 2018, Pages 1872-1879, ISSN 2377-3766
Publisher: IEEE
DOI: 10.1109/lra.2018.2800084

Robot Fast Adaptation to Changes in Human Engagement During Simulated Dynamic Social Interaction With Active Exploration in Parameterized Reinforcement Learning

Authors: Mehdi Khamassi, George Velentzas, Theodore Tsitsimis, Costas Tzafestas
Published in: IEEE Transactions on Cognitive and Developmental Systems, Issue 10/4, 2018, Pages 881-893, ISSN 2379-8920
Publisher: IEEE
DOI: 10.1109/tcds.2018.2843122

Multimodal gesture recognition

Authors: A. Katsamanis, V. Pitsikalis, S. Theodorakis, and P. Maragos
Published in: The Handbook of Multimodal-Multisensor Interfaces, Volume 1: Foundations, User Modeling, and Common Modality Combinations, 2017
Publisher: ACM Books / Morgan-Claypool Publishers

Audio and visual modality combination in speech processing applications

Authors: G. Potamianos, E. Marcheret, Y. Mroueh, V. Goel, A. Koumparoulis, A. Vartholomaios, and S. Thermos
Published in: The Handbook of Multimodal-Multisensor Interfaces, Volume 1: Foundations, User Modeling, and Common Modality Combinations, 2017
Publisher: ACM Books / Morgan-Claypool Publishers

Human-Centered Service Robotic Systems for Assisted Living

Authors: Xanthi S. Papageorgiou, Georgia Chalvatzaki, Athanasios C. Dometios, Costas S. Tzafestas
Published in: Advances in Service and Industrial Robotics - Proceedings of the 27th International Conference on Robotics in Alpe-Adria Danube Region (RAAD 2018), Issue 67, 2019, Pages 132-140, ISBN 978-3-030-00231-2
Publisher: Springer International Publishing
DOI: 10.1007/978-3-030-00232-9_14

Piloting Scenarios for Children with Autism to Learn About Visual Perspective Taking

Authors: Luke Jai Wood, Ben Robins, Gabriella Lakatos, Dag Sverre Syrdal, Abolfazl Zaraki, Kerstin Dautenhahn
Published in: Towards Autonomous Robotic Systems - 19th Annual Conference, TAROS 2018, Bristol, UK, July 25-27, 2018, Proceedings, Issue 10965, 2018, Pages 260-270, ISBN 978-3-319-96727-1
Publisher: Springer International Publishing
DOI: 10.1007/978-3-319-96728-8_22

Challenges in Synchronized Behavior Realization for Different Robotic Embodiments

Authors: I. de Kok, J. Hemminghaus, and S. Kopp
Published in: 2017
Publisher: UNIBI

A Deep Learning Approach for Multi-View Engagement Estimation of Children in a Child-Robot Joint Attention task

Authors: Jack Hadfield, Georgia Chalvatzaki, Petros Koutras, Mehdi Khamassi, Costas S. Tzafestas, and Petros Maragos
Published in: 2018
Publisher: arXiv

Pattern Search Multidimensional Scaling

Authors: Paraskevopoulos, Georgios; Tzinis, Efthymios; Vlatakis-Gkaragkounis, Emmanuel-Vasileios; Potamianos, Alexandros
Published in: Issue 1, 2018
Publisher: arXiv

Audio-based Distributional Semantic Models for Music Auto-tagging and Similarity Measurement

Authors: G. Karamanolakis, E. Iosif, A. Zlatintsi, A. Pikrakis and A. Potamianos
Published in: 2016
Publisher: arXiv

Active Exploration in Parameterized Reinforcement Learning

Authors: M. Khamassi and C. Tzafestas
Published in: 2016
Publisher: arXiv
