Minimally disturbing learning
Methods for avoiding catastrophic forgetfulness in feedforward neural networks, without sacrificing the benefits of distributed representations, are investigated. The problem is formalised as the minimisation of the error over the previously learned input-output patterns, subject to the constraint that the new pattern be encoded exactly. This constrained optimisation problem is then transformed into an unconstrained one, and the new formulation leads naturally to an algorithm for solving it, called Minimally Disturbing Learning (MDL). Experimental comparisons of MDL with back-propagation are provided which, besides showing the advantages of MDL, reveal how forgetfulness in back-propagation depends on the learning rate.
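The constrained formulation described in the abstract can be illustrated with a minimal sketch. The example below assumes a single linear layer and uses a Lagrange-multiplier transformation solved as a KKT linear system; the data, variable names, and this particular solution method are illustrative assumptions, not the paper's actual MDL algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 20 previously learned input-output patterns in R^5,
# with a single linear output unit (illustrative, not from the paper).
X_old = rng.normal(size=(20, 5))
w_true = rng.normal(size=5)
y_old = X_old @ w_true

# One new pattern that must be encoded exactly (hypothetical target).
x_new = rng.normal(size=5)
y_new = 3.0

# Constrained problem:
#   minimise ||X_old w - y_old||^2   subject to   x_new . w = y_new.
# Introducing a Lagrange multiplier lam turns this into an unconstrained
# stationarity condition, which is the linear KKT system solved below.
A = 2.0 * X_old.T @ X_old
kkt = np.block([[A, x_new[:, None]],
                [x_new[None, :], np.zeros((1, 1))]])
rhs = np.concatenate([2.0 * X_old.T @ y_old, [y_new]])
sol = np.linalg.solve(kkt, rhs)
w, lam = sol[:5], sol[5]

# The new pattern is encoded exactly, while the error over the old
# patterns is kept as small as the constraint allows.
print("constraint residual:", abs(x_new @ w - y_new))
print("old-pattern error:", np.linalg.norm(X_old @ w - y_old))
```

The solution satisfies the new-pattern constraint to numerical precision while minimising disturbance of the previously learned mapping, which is the trade-off the abstract formalises.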
Bibliographic Reference: Paper presented: International Workshop on Artificial Neural Networks, Granada (ES); Sept. 17-19, 1991
Availability: Available from (1) as Paper EN 36425 ORA
Record Number: 199111465 / Last updated on: 1994-12-02
Original language: en
Available languages: en