Rapid, safe and incremental learning of navigation strategies
A reinforcement connectionist learning architecture is proposed that allows an autonomous robot to acquire efficient navigation strategies in a few trials. Besides rapid learning, the architecture has three further appealing features. First, the robot improves its performance incrementally as it interacts with an initially unknown environment, and it ends up learning to avoid collisions even in situations in which its sensors cannot detect the obstacles. This is a definite advantage over non-learning reactive robots. Second, since it learns from basic reflexes, the robot is operational from the very beginning and the learning process is safe. Third, the robot exhibits high tolerance to noisy sensory data and good generalisation abilities. These features make this learning architecture well suited to real-world applications. Experimental results, obtained with a real mobile robot in an indoor environment, demonstrate the appropriateness of this approach for real autonomous robot control.
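The abstract's second point, learning from basic reflexes so that exploration stays safe, can be illustrated with a deliberately minimal sketch. The paper's actual controller is a reinforcement connectionist (neural) network; the toy below substitutes tabular Q-learning in an invented one-dimensional corridor, where a hand-coded reflex overrides the learner whenever the robot reads an obstacle as close. All names, parameters, and the environment itself are assumptions made for illustration, not the authors' method.

```python
import random

def reflex(distance_to_wall):
    """Hand-coded safety reflex (invented for this sketch):
    always step away when the wall reads close."""
    return +1 if distance_to_wall < 2 else None  # None = reflex not triggered

def train(episodes=200, steps=50, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = {}          # tabular stand-in for the paper's connectionist value function
    collisions = 0  # the reflex should keep this at zero throughout learning
    for _ in range(episodes):
        pos = 5     # start mid-corridor; wall below position 0, goal at 10
        for _ in range(steps):
            a = reflex(pos)                # reflex dominates near danger
            if a is None:                  # otherwise: epsilon-greedy learner
                if rng.random() < eps:
                    a = rng.choice((-1, +1))
                else:
                    a = max((-1, +1), key=lambda x: q.get((pos, x), 0.0))
            nxt = pos + a
            if nxt < 0:
                collisions += 1
            reward = 10.0 if nxt >= 10 else -0.1  # goal bonus, small step cost
            nxt = max(0, min(10, nxt))
            best = max(q.get((nxt, x), 0.0) for x in (-1, +1))
            q[(pos, a)] = q.get((pos, a), 0.0) + alpha * (
                reward + gamma * best - q.get((pos, a), 0.0))
            if nxt >= 10:
                break
            pos = nxt
    return q, collisions

q, collisions = train()
policy = {p: max((-1, +1), key=lambda x: q.get((p, x), 0.0)) for p in range(11)}
```

Because the reflex fires whenever the learner would otherwise be free to step into the wall, the robot is operational (and collision-free) from the first trial, while the learned values gradually take over navigation in safe regions, loosely mirroring the incremental, safe learning the abstract describes.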
Bibliographic Reference: Article in IEEE Transactions on Systems, Man, and Cybernetics
Record Number: 199511141 / Last updated on: 1995-08-23