The success of the human species critically depends on our extraordinary ability to engage in joint action. Our perceptions, decisions and behaviour are tuned to those of others with whom we share beliefs, intentions and goals, and with whom we thus form a group. These insights motivate the JAST project to develop jointly-acting autonomous systems that communicate and work intelligently on mutual tasks in dynamic, unstructured environments. This goal reaches far beyond the study of individual cognitive systems and expands the concept of a group to include human plus artificial agent(s).
By combining a basic, gender-differentiated understanding of the cognition, neurobiology and dialogue strategies of joint action, JAST aims:
- To build two autonomous agents, each endowed with a vision system, a gripper and a speech recognition/production system that in cooperative configurations can carry out complex construction tasks,
- To implement perceptual modules for object recognition and recognition of gestures and actions of the partner (human or robot),
- To implement control schemes that generate motor behaviour on the basis of internal forward models for the co-ordinated action of multiple cognitive systems,
- To implement verbal and non-verbal communication structures,
- To develop autonomous systems with goal-directed and self-organizing learning processes, and
- To implement an error monitoring system capable of reacting intelligently to self- and other-generated errors.
Because the JAST consortium combines leading scientists from disciplines that normally do not even share a vocabulary, JAST will initiate a completely new way of thinking about human perception, decision making and behaviour.
Funding Scheme: IP - Integrated Project