Learning of sensor-based arm motions while executing high-level descriptions of tasks
Our work focuses on making an autonomous robot manipulator learn suitable collision-free motions from local sensory data while executing high-level descriptions of tasks. The robot arm must reach a sequence of targets at which it performs some manipulation. A sonar sensor skin covering the manipulator's links perceives obstacles in its surroundings. We use reinforcement learning for this purpose, and the neural controller acquires appropriate reaction strategies in acceptable time provided it has some prior knowledge. This knowledge is supplied in two main ways: an appropriate codification of the neural controller's signals (inputs, outputs, and reinforcement), and a decomposition of the learning task. The codification facilitates the generalization capabilities of the network, as it exploits inherent symmetries and is largely goal-independent. In addition, the task of reaching a given goal position is decomposed into two sequential subtasks: negotiating obstacles and moving to the goal. Experimental results show that the controller incrementally achieves good performance in reasonable time and exhibits high tolerance to failing sensors.
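The control scheme described above can be sketched minimally as follows. This is a hypothetical illustration, not the paper's actual neural controller: the subtask-switching rule, the sonar threshold, and the tabular Q-learning update standing in for the network are all assumptions introduced here for clarity.

```python
# Illustrative sketch of the two-subtask decomposition: the controller
# switches between "negotiate obstacles" and "move to goal" based on
# local sonar readings, and each subtask policy is improved by a
# reinforcement-learning update. All names and values are hypothetical.

OBSTACLE_THRESHOLD = 0.3  # sonar reading below this means an obstacle is near


def active_subtask(sonar_readings):
    """Select the current subtask from local sensory data."""
    if min(sonar_readings) < OBSTACLE_THRESHOLD:
        return "negotiate_obstacles"
    return "move_to_goal"


def q_update(q, state, action, reward, next_state, actions,
             alpha=0.1, gamma=0.9):
    """One tabular Q-learning step (a stand-in for the neural controller)."""
    best_next = max(q.get((next_state, a), 0.0) for a in actions)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)


# Example: one learning step within the "move to goal" subtask.
q = {}
q_update(q, state="near_goal", action="forward", reward=1.0,
         next_state="at_goal", actions=["forward", "turn_left", "turn_right"])
```

Keeping the state codification goal-independent (e.g., expressing sonar readings and target direction relative to the arm rather than in absolute coordinates) is what lets a strategy learned against one obstacle configuration generalize to others.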
Bibliographic Reference: Article: Autonomous Robots, 6 (1999)
Record Number: 199910706 / Last updated on: 1999-05-14