Document Type

Article

Publication Date

2004

Abstract

The governor architecture is a new method for avoiding catastrophic forgetting in neural networks that is particularly useful in online robot learning. It uses a categorizer to identify events and excise long sequences of repetitive data that cause catastrophic forgetting in neural networks trained on robot-based tasks. We examine the performance of several variations of the governor architecture on a number of related localization tasks using a simulated robot. The results show that governed networks perform far better than ungoverned networks. Governed networks are able to reliably and robustly prevent catastrophic forgetting in robot learning tasks.
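
To make the idea concrete, the following is a minimal sketch, not the paper's implementation: it assumes a categorizer that maps each incoming sample to an event label, and a governor that admits a sample for training only for the first few samples of each run of identical events, excising the long repetitive stretches described in the abstract. The class and function names (Governor, admit, categorize) and the max_repeats parameter are illustrative assumptions, not terms from the source.

```python
class Governor:
    """Admit training samples only while the categorized event is novel,
    excising long runs of repetitive data that can cause catastrophic
    forgetting during online learning (illustrative sketch)."""

    def __init__(self, categorize, max_repeats=1):
        self.categorize = categorize      # assumed interface: sample -> event label
        self.max_repeats = max_repeats    # samples of the same event to let through
        self.last_event = None
        self.repeat_count = 0

    def admit(self, sample):
        """Return True if the sample should be passed to the learner."""
        event = self.categorize(sample)
        if event == self.last_event:
            self.repeat_count += 1
        else:
            self.last_event = event
            self.repeat_count = 1
        return self.repeat_count <= self.max_repeats


# Hypothetical usage: suppress repeated readings from a simulated sensor stream.
def categorize(reading):
    return "corridor" if reading < 0.5 else "doorway"

governor = Governor(categorize)
stream = [0.1, 0.2, 0.15, 0.12, 0.8, 0.7, 0.1]
training_data = [r for r in stream if governor.admit(r)]
print(training_data)  # only the first reading of each repetitive run is kept
```

The design point this sketch illustrates is that the governor sits between the data stream and the learner, so the network never sees the long homogeneous sequences that would otherwise overwrite previously learned behavior.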
