Final Report Summary - ROBOTASK (Action words Learning in a Humanoid Robot by Discovering Tool Affordances via Statistical Inference)
The research activities carried out within the RoboTAsk project aimed at:
1) implementing a language model for endowing robots with the capability to ground sentences describing manipulation tasks in affordances. Affordances are all the motor programs (i.e. actions) that an acting organism can perform when interacting with objects in the environment to achieve desired effects (e.g. pushing an object to move it by a certain distance and/or along a certain direction).
To fulfill this objective we devised a setting in which a human partner interacts with the robot through speech across a number of scenarios involving objects and tools. The robot learns to translate the verbal requests of its human partner into affordances.
2) carrying out Human-Robot Interaction (HRI) studies to investigate whether a robot endowed with the capabilities described in 1) participates in a shared task as a plausible interaction partner.
To achieve this objective, the robot and the human partner were engaged in an interaction task with a shared goal. The robotic platform was tested on its ability to intervene, by taking action, in a sequence of actions performed by the human.
To fulfill objective 1) of the project we proposed a model that grounds language in object and tool affordances. The affordance model learns the effect of tapping objects (e.g. ball, toy car, cylinder) along different directions and of pulling out-of-reach objects with the appropriate tool (e.g. rake, hoe, stick). The robot can then leverage this knowledge to estimate an object's displacement (i.e. the effect of actions on objects) given new values of the angle used to perform the action (i.e. tapping/pulling). The approach we followed for grounding language goes beyond purely symbolic or purely embodied modeling. Indeed, it has recently been proposed that concepts are encoded in at least two general types of semantic representations: sensorimotor based and language based. We proposed an embodied statistical language model in which word meaning representations are based on both the sensorimotor and the linguistic knowledge of a robot. The model provides a “grounding layer” that produces the perceptual symbols used to ground words in sensorimotor knowledge, and a “semantic layer” that reasons about the perceptual symbols acquired in the grounding layer. To this end, we employed Dynamic and Static Bayesian Networks.
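To make the role of the probabilistic formulation concrete, the sketch below shows how an affordance model of this kind can be queried once its conditional probability tables are available: a forward query predicts the effect of an action, and an inverse query selects the action most likely to produce a desired effect. It is a minimal illustration under assumed names and numbers (the actions, effects and probabilities are invented for the example), not the networks actually learned in the project.

```python
# Minimal sketch (not the project's implementation) of forward and inverse
# queries on a discrete affordance model. All states and probabilities are
# illustrative assumptions, not values learned by the RoboTAsk system.

ACTIONS = ["tap_left", "tap_right", "pull"]
EFFECTS = ["moves_left", "moves_right", "moves_closer"]

# P(effect | action) for a hypothetical object (e.g. a ball); the real model
# would learn such tables from sensorimotor data and would also condition on
# object and tool features and on the angle of the action.
P_EFFECT_GIVEN_ACTION = {
    "tap_left":  {"moves_left": 0.85, "moves_right": 0.10, "moves_closer": 0.05},
    "tap_right": {"moves_left": 0.10, "moves_right": 0.85, "moves_closer": 0.05},
    "pull":      {"moves_left": 0.05, "moves_right": 0.05, "moves_closer": 0.90},
}

def predict_effect(action: str) -> dict:
    """Forward query: distribution over effects given an action."""
    return P_EFFECT_GIVEN_ACTION[action]

def select_action(desired_effect: str) -> str:
    """Inverse query: with a uniform prior over actions, pick the action
    most likely to produce the desired effect."""
    return max(ACTIONS, key=lambda a: P_EFFECT_GIVEN_ACTION[a][desired_effect])

if __name__ == "__main__":
    print(predict_effect("tap_left"))      # the object most likely moves left
    print(select_action("moves_closer"))   # -> "pull"
```

The same pattern extends to the layered model described above: the grounding layer maps words to perceptual symbols (actions, objects, effects), and the semantic layer answers queries of exactly this form over those symbols.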
To achieve objective 2) of the RoboTAsk project we proposed an affordance-based planner implemented through Hierarchical Task Networks. The proposed planner enables the robot to: (i) derive a high-level manipulation strategy for a joint task that requires a sequence of actions performed by both the robot and the human, and (ii) decide when to intervene in the sequence of actions performed by the human. The robot's intervention in the course of action is dictated by the anticipation of the needs of the human co-worker; the robot can proactively perform a supportive behavior to help its human partner. To build plans shared between the robot and the human, we exploit the knowledge represented by affordance models. Affordances are leveraged to tailor the plan to the environment in which the robot operates, selecting the best action to implement a step of the plan according to the object features and the human preferences. The proposed planner has two important features: (i) reaction to action failure, to dynamically adapt the plan during execution, and (ii) planning of concurrent actions, which increases the level of support that the robot can provide and improves the working conditions of the human.
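The sketch below illustrates two of the mechanisms named above, hierarchical decomposition with affordance-based action selection and replanning on failure, in a stripped-down form. The task names, candidate actions and scores are assumptions made for the example and do not reproduce the project's HTN planner.

```python
# Minimal sketch of an affordance-informed hierarchical planner; not the
# project's HTN implementation. Tasks, candidates and scores are invented.

# Methods decompose an abstract task into an ordered list of subtasks.
METHODS = {
    "assemble_part": ["fetch_tool", "hand_over_tool", "hold_part"],
}

# Candidate primitive actions per subtask, scored by the affordance model
# (fixed numbers here stand in for P(success | action, object, context)).
CANDIDATES = {
    "fetch_tool":     [("reach_with_rake", 0.9), ("reach_with_stick", 0.6)],
    "hand_over_tool": [("hand_over", 0.95)],
    "hold_part":      [("hold_steady", 0.9)],
}

def plan(task: str) -> list:
    """Decompose the task and pick the highest-scoring action per subtask."""
    subtasks = METHODS.get(task, [task])
    return [max(CANDIDATES[s], key=lambda c: c[1])[0] for s in subtasks]

def execute(task: str, execute_action) -> None:
    """Execute the plan, retrying a failed step with the next-best candidate."""
    for subtask in METHODS.get(task, [task]):
        for action, _score in sorted(CANDIDATES[subtask], key=lambda c: -c[1]):
            if execute_action(action):  # robot primitive returns True on success
                break
        else:
            raise RuntimeError(f"no feasible action for subtask {subtask!r}")

if __name__ == "__main__":
    print(plan("assemble_part"))
    execute("assemble_part", lambda a: a != "reach_with_rake")  # simulate one failure
```

In the full planner, human actions appear as steps of the same decomposition, which is what allows the robot to anticipate the partner's needs and schedule its own supportive actions concurrently.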
The primary contributions of the research activities related to the RoboTAsk project are:
• Language model for acquiring the meaning of word sequences, enabling a humanoid robot to map linguistic commands provided by a human into the appropriate behaviors. The model can handle verbal descriptions that follow different syntactic structures, adapting to speakers who use different word orders [2].
• Affordance model for capturing the effects of actions performed on objects and for semantically grounding word sequences. That is, the semantic roles of words are grounded in the affordance knowledge underlying the execution of manipulation tasks. The knowledge encoded in the affordance model is also used for goal understanding; indeed, the model makes it possible to select the actions that obtain desired effects on objects by computing a conditional probability [2],[3].
• Inference engine for reasoning about the acquired perceptual symbols and producing new knowledge. This enables a robot to solve inference queries that fill in missing information in the verbal description of a task provided by a human, and to perform the appropriate behavior (see the sketch after this list) [1],[2],[3].
• Task planner that enables a robot to assist its human partner while engaged in a task with a shared goal. The robot creates a shared plan of actions that includes both the robot's and the human's actions (i.e. multi-agent coordination to achieve the shared goal). The robot can employ the affordance engine to reason about its own action possibilities as well as those of the partner, and offer help depending on this evaluation [4].
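As a concrete illustration of the inference queries mentioned in the list above, the sketch below fills in a piece of information the speaker left out (the tool to use) by maximizing a conditional success probability. The tools, objects and probabilities are assumptions made for the example, not quantities learned by the project's inference engine.

```python
# Minimal sketch of a "fill in missing information" query; not the project's
# inference engine. Tools, objects and probabilities are invented values.

# P(desired effect achieved | tool, object), standing in for the affordance
# knowledge the real model acquires from interaction data.
P_SUCCESS = {
    ("rake",  "ball"):    0.90,
    ("hoe",   "ball"):    0.70,
    ("stick", "ball"):    0.30,
    ("rake",  "toy_car"): 0.60,
    ("hoe",   "toy_car"): 0.85,
    ("stick", "toy_car"): 0.40,
}

def fill_missing_tool(obj: str) -> str:
    """The command names the object and the desired effect ("pull the ball
    closer") but omits the tool; pick the tool most likely to succeed."""
    tools = {tool for tool, o in P_SUCCESS if o == obj}
    return max(tools, key=lambda tool: P_SUCCESS[(tool, obj)])

if __name__ == "__main__":
    print(fill_missing_tool("ball"))     # -> "rake"
    print(fill_missing_tool("toy_car"))  # -> "hoe"
```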
The proposed language system aims to advance theories of language learning in robots. Progress in the design of human-robot communication systems can lead to a new generation of interactive robots. Robots endowed with the capability to understand language and to adapt their behavior according to human requests can have an important impact on the robotics industry in existing and emerging markets.
The studies conducted to investigate the participation of a robot in a shared task with a human can be leveraged to build a new generation of industrial robots conceived for collaborating with human workers in manufacturing tasks that cannot be fully automated (e.g. manipulating deformable objects). In some cases, semi-automation is preferable to full automation. Indeed, combining industrial robot capabilities (e.g. performing tasks in an accurate, precise and fast way) with human perceptual, motor and cognitive skills can increase efficiency, quality and productivity. Human workers have knowledge about the tasks to perform and can think of more efficient ways to organize the work. Moreover, the collaboration of human workers with their robotic counterparts allows a flexible organization of the tasks to be executed and opens up the possibility of introducing improvements in the way the tasks are carried out.
References
[1] Stramandinoli Francesca, Tikhanoff Vadim, Pattacini Ugo, Nori Francesco. “A Bayesian approach towards affordance learning in artificial agents”. In 2015 Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob), pp. 298-299. IEEE, 2015.
[2] Stramandinoli Francesca, Tikhanoff Vadim, Pattacini Ugo, Nori Francesco. “Grounding Speech Utterances in Robotics Affordances: An Embodied Statistical Language Model”. In 2016 Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob). IEEE, 2016.
[3] Stramandinoli Francesca, Tikhanoff Vadim, Pattacini Ugo, Nori Francesco. “Heteroscedastic Regression and Active Learning for Modeling Affordances in Humanoids”. In preparation for submission to Transactions on Cognitive and Developmental Systems (TCDS). IEEE, 2017.
[4] Stramandinoli Francesca, Roncone Alessandro, Mangin Olivier, Nori Francesco, Scassellati Brian. “An Affordance-based Action Planner for On-line and Concurrent Human-Robot Collaborative Assembly”. Submitted to the 2017 IEEE International Conference on Robotics and Automation, 2017.
PROJECT WEBSITE: http://www.robotask.eu/
PROJECT LOGO: http://www.robotask.eu/style/logoRoboTask.png
PHOTOGRAPHS:
https://www.facebook.com/media/set/?set=a.10209522302875979.1073741837.1278978462&type=1&l=65e5bc0e12
VIDEOS:
https://youtu.be/d1A-kxW1Rsw
https://youtu.be/PWAnIAjeDGk
https://www.facebook.com/francesca.stramandinoli/posts/10210955883994611?pnref=story
https://www.facebook.com/Marie.Curie.Actions/videos/1254206831319514/?pnref=story