Recurrent Neural Networks and Related Machines That Learn Algorithms

Periodic Reporting for period 2 - AlgoRNN (Recurrent Neural Networks and Related Machines That Learn Algorithms)

Reporting period: 2019-04-01 to 2020-09-30

Artificial recurrent neural networks (RNNs) permeate the modern world on billions of devices such as smartphones. They are used for speech recognition, translation, robotics, healthcare, and more. However, on many real-world tasks, RNNs do not yet live up to their full potential: although universal in theory, in practice they fail to learn important types of algorithms. This ERC project will go beyond what is possible with today's best RNNs by creating general practical program learners through novel RNN-like systems that address some of the biggest open RNN problems and hottest RNN research topics: (1) How can RNNs learn to control (through internal spotlights of attention) separate large short-term memory structures, such as sub-networks with fast weights, to improve performance on the many memory-intensive tasks that are currently hard for RNNs to learn, such as answering detailed questions about recently observed videos? (2) How can such RNN-like systems meta-learn entire learning algorithms that outperform the original learning algorithms? (3) How can efficient transfer learning be achieved from one RNN-learned set of problem-solving programs to new RNN programs solving new tasks? In other words, how can one RNN-like system actively learn to exploit algorithmic information contained in the programs running on another?
We have made substantial progress on (1), (2), and (3). For more details, please see the section “Project Achievements”.
In the future, we expect to: (1) Generalize our novel Tensor-Product RNNs with fast weights [1] to self-supervision and meta-learning (learning to learn). One goal is to leverage an association mechanism to perform some level of systematic reasoning. (2) Generalize our novel method for meta-learning general reinforcement learning algorithms (MetaGenRL) [7] to support more advanced learning algorithms and to replace human-engineered algorithms. We will also develop stable methods for learning policies that combine the information collected by many different learning agents. (3) Analyze how to make neural networks learn modular structure that supports compositionality, as required for recursive function calls, and investigate what kind of inductive bias is necessary to obtain such compositional solutions. All of this targets more human-like high-level reasoning.
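The fast-weight idea underlying goal (1) can be illustrated with a minimal outer-product associative memory: a slowly changing network writes key-value associations into a rapidly changing weight matrix and reads them back with queries. The sketch below is an illustrative toy, not the project's actual Tensor-Product RNN; the function names, the decay factor, and the random stand-in vectors are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # feature dimension (illustrative choice)

# Fast weight matrix: a short-term associative memory that is rewritten
# at every time step. In fast-weight RNNs, a "slow" network would emit
# the keys, values, and queries; here random vectors stand in for them.
W_fast = np.zeros((d, d))

def write(W, key, value, decay=0.9):
    """Store a key-value association as an outer product, with decay
    so that older associations gradually fade."""
    return decay * W + np.outer(value, key)

def read(W, query):
    """Retrieve the value associated with a query via a matrix-vector product."""
    return W @ query

key = rng.normal(size=d)
key /= np.linalg.norm(key)   # unit-norm key gives exact retrieval below
value = rng.normal(size=d)

W_fast = write(W_fast, key, value)
retrieved = read(W_fast, key)  # query with the same key

# Starting from an empty memory and a unit-norm key, retrieval
# reproduces the stored value exactly: (v k^T) k = v (k . k) = v.
assert np.allclose(retrieved, value)
```

With several stored pairs, retrieval returns a superposition weighted by key overlaps, which is why attention-like control over what is written and read matters for the memory-intensive tasks described above.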