CORDIS - EU research results

Recurrent Neural Networks and Related Machines That Learn Algorithms

CORDIS provides links to the public deliverables and publications of HORIZON projects.

Links to deliverables and publications of FP7 projects, as well as links to some specific result types such as datasets and software, are retrieved dynamically from OpenAIRE.

Publications

The Neural Data Router: Adaptive Control Flow in Transformers Improves Systematic Generalization

Authors: R. Csordás, K. Irie, J. Schmidhuber
Published in: ICLR 2022, 2022
Publisher: ICLR 2022

Linear Transformers Are Secretly Fast Weight Programmers

Authors: I. Schlag*, K. Irie*, J. Schmidhuber
Published in: ICML 2021, Proceedings of the 38th International Conference on Machine Learning, 2021
Publisher: ICML 2021

Parameter-based value functions

Authors: F. Faccio, L. Kirsch, J. Schmidhuber
Published in: NeurIPS 2020 Workshop on Offline Reinforcement Learning, 2020
Publisher: NeurIPS 2020 Workshop on Offline Reinforcement Learning

Exploring the Promise and Limits of Real-Time Recurrent Learning

Authors: K. Irie, A. Gopalakrishnan, J. Schmidhuber
Published in: ICLR 2024, 2024
Publisher: ICLR 2024
DOI: 10.48550/arxiv.2305.19044

An Investigation into the Open World Survival Game Crafter

Authors: A. Stanic, Y. Tang, D. Ha, J. Schmidhuber
Published in: ICML 2022, 2022
Publisher: ICML 2022

Going Beyond Linear Transformers with Recurrent Fast Weight Programmers

Authors: K. Irie*, I. Schlag*, R. Csordás, J. Schmidhuber
Published in: NeurIPS 2021, 2021
Publisher: NeurIPS 2021

Images as Weight Matrices: Sequential Image Generation Through Synaptic Learning Rules

Authors: K. Irie, J. Schmidhuber
Published in: ICLR 2023, 2023
Publisher: ICLR 2023
DOI: 10.48550/arxiv.2210.06184

The Benefits of Model-Based Generalization in Reinforcement Learning

Authors: K. Young, A. Ramesh, L. Kirsch, J. Schmidhuber
Published in: ICML 2023, 2023
Publisher: ICML 2023
DOI: 10.48550/arxiv.2211.02222

General Policy Evaluation and Improvement by Learning to Identify Few But Crucial States

Authors: F. Faccio, A. Ramesh, V. Herrmann, J. Harb, J. Schmidhuber
Published in: ICML 2022 Workshop on Decision Awareness in Reinforcement Learning, 2022
Publisher: ICML 2022 Workshop on Decision Awareness in Reinforcement Learning

Learning to identify critical states for reinforcement learning from videos

Authors: H. Liu, M. Zhuge, B. Li, Y. Wang, F. Faccio, B. Ghanem, J. Schmidhuber
Published in: ICCV 2023, 2023
Publisher: ICCV 2023

Goal-Conditioned Generators of Deep Policies

Authors: F. Faccio*, V. Herrmann*, A. Ramesh, L. Kirsch, J. Schmidhuber
Published in: RLDM 2022, 2022
Publisher: RLDM 2022

Topological Neural Discrete Representation Learning à la Kohonen

Authors: K. Irie, R. Csordás, J. Schmidhuber
Published in: ICML 2023 Workshop on Sampling and Optimization in Discrete Space, 2023
Publisher: ICML 2023 Workshop on Sampling and Optimization in Discrete Space
DOI: 10.48550/arxiv.2302.07950

General Policy Evaluation and Improvement by Learning to Identify Few But Crucial States

Authors: F. Faccio, A. Ramesh, V. Herrmann, J. Harb, J. Schmidhuber
Published in: RLDM 2022, 2022
Publisher: RLDM 2022

Continually Adapting Optimizers Improve Meta-Generalization

Authors: W. Wang, L. Kirsch, F. Faccio, M. Zhuge, J. Schmidhuber
Published in: NeurIPS 2023 Workshop on Optimization for Machine Learning, 2023
Publisher: NeurIPS 2023 Workshop on Optimization for Machine Learning

Towards general-purpose in-context learning agents

Authors: L. Kirsch, J. Harrison, D. Freeman, J. Sohl-Dickstein, J. Schmidhuber
Published in: Foundation Models for Decision Making Workshop at NeurIPS, 2023
Publisher: Foundation Models for Decision Making Workshop at NeurIPS

Neural Differential Equations for Learning to Program Neural Nets Through Continuous Learning Rules

Authors: K. Irie, F. Faccio, J. Schmidhuber
Published in: NeurIPS 2022, 2022
Publisher: NeurIPS 2022

Goal-Conditioned Generators of Deep Policies

Authors: F. Faccio*, V. Herrmann*, A. Ramesh, L. Kirsch, J. Schmidhuber
Published in: AAAI 2023, 2023
Publisher: AAAI 2023

Goal-Conditioned Generators of Deep Policies

Authors: F. Faccio*, V. Herrmann*, A. Ramesh, L. Kirsch, J. Schmidhuber
Published in: ICML 2022 Workshop on Dynamic Neural Networks, 2022
Publisher: ICML 2022 Workshop on Dynamic Neural Networks

Training and Generating Neural Networks in Compressed Weight Space

Authors: K. Irie, J. Schmidhuber
Published in: ICLR 2021 Workshop on Neural Compression, 2021
Publisher: ICLR 2021 Workshop on Neural Compression

The Dual Form of Neural Networks Revisited: Connecting Test Time Predictions to Training Patterns via Spotlights of Attention

Authors: K. Irie, R. Csordás, J. Schmidhuber
Published in: ICML 2022, 2022
Publisher: ICML 2022
DOI: 10.48550/arxiv.2202.05798

Learning useful representations of recurrent neural network weight matrices

Authors: V. Herrmann, F. Faccio, J. Schmidhuber
Published in: NeurIPS 2023 Workshop on Self-Supervised Learning - Theory and Practice, 2023
Publisher: NeurIPS 2023 Workshop on Self-Supervised Learning - Theory and Practice
DOI: 10.48550/arxiv.2403.11998

Approximating Two-Layer Feedforward Networks for Efficient Transformers

Authors: R. Csordás, K. Irie, J. Schmidhuber
Published in: EMNLP-Findings 2023, 2023
Publisher: EMNLP-Findings 2023

Accelerating Neural Self-Improvement via Bootstrapping

Authors: K. Irie, J. Schmidhuber
Published in: ICLR 2023 Workshop on Mathematical and Empirical Understanding of Foundation Models, 2023
Publisher: ICLR 2023 Workshop on Mathematical and Empirical Understanding of Foundation Models
DOI: 10.48550/arxiv.2305.01547

On Narrative Information and the Distillation of Stories

Authors: D. R. Ashley, V. Herrmann, Z. Friggstad, J. Schmidhuber
Published in: NeurIPS 2022 InfoCog Workshop, 2022
Publisher: NeurIPS 2022 InfoCog Workshop
DOI: 10.48550/arxiv.2211.12423

A Modern Self-Referential Weight Matrix That Learns to Modify Itself

Authors: K. Irie, I. Schlag, R. Csordás, J. Schmidhuber
Published in: NeurIPS 2021 Workshop on Deep Reinforcement Learning, 2021
Publisher: NeurIPS 2021 Workshop on Deep Reinforcement Learning

Unsupervised Musical Object Discovery from Audio

Authors: J. Gha, V. Herrmann, B. Grewe, J. Schmidhuber, A. Gopalakrishnan
Published in: NeurIPS 2023 Workshop on Machine Learning for Audio, 2023
Publisher: NeurIPS 2023 Workshop on Machine Learning for Audio

Reward-Weighted Regression Converges to a Global Optimum

Authors: M. Štrupl*, F. Faccio*, D. R. Ashley, R. K. Srivastava, J. Schmidhuber
Published in: AAAI 2022, 2022
Publisher: AAAI 2022

Learning Associative Inference Using Fast Weight Memory

Authors: I. Schlag, T. Munkhdalai, J. Schmidhuber
Published in: ICLR 2021, 2021
Publisher: ICLR 2021

Practical Computational Power of Linear Transformers and Their Recurrent and Self-Referential Extensions

Authors: K. Irie, R. Csordás, J. Schmidhuber
Published in: EMNLP 2023, 2023
Publisher: EMNLP 2023
DOI: 10.48550/arxiv.2310.16076

Block-Recurrent Transformers

Authors: D. Hutchins, I. Schlag, Y. Wu, E. Dyer, B. Neyshabur
Published in: NeurIPS 2022, 2022
Publisher: NeurIPS 2022

The Devil is in the Detail: Simple Tricks Improve Systematic Generalization of Transformers

Authors: R. Csordás, K. Irie, J. Schmidhuber
Published in: EMNLP 2021, Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 2021, pages 619–634
Publisher: EMNLP 2021

Bayesian Brains and the Rényi Divergence

Authors: N. Sajid*, F. Faccio*, L. Da Costa, T. Parr, J. Schmidhuber, K. Friston
Published in: CNS*2021, 2021
Publisher: CNS*2021

Sequence Compression Speeds Up Credit Assignment in Reinforcement Learning

Authors: A. Ramesh, K. Young, L. Kirsch, J. Schmidhuber
Published in: ICLR 2024 Generative Models for Decision Making, 2024
Publisher: ICLR 2024 Generative Models for Decision Making

Goal-Conditioned Generators of Deep Policies

Authors: F. Faccio*, V. Herrmann*, A. Ramesh, L. Kirsch, J. Schmidhuber
Published in: EWRL 2022, 2022
Publisher: EWRL 2022

Improving Stateful Premise Selection with Transformers

Authors: K. Prorokovic, M. Wand, J. Schmidhuber
Published in: CICM 2021, 2021
Publisher: CICM 2021

Improving Baselines in the Wild

Authors: K. Irie, I. Schlag, R. Csordás, J. Schmidhuber
Published in: NeurIPS 2021 Workshop on Distribution Shifts, 2021
Publisher: NeurIPS 2021 Workshop on Distribution Shifts

Unsupervised Object Keypoint Learning using Local Spatial Predictability

Authors: A. Gopalakrishnan, S. van Steenkiste, J. Schmidhuber
Published in: ICML 2020 Workshop on Object-Oriented Learning: Perception, Representation and Reasoning, 2020
Publisher: ICML

Parameter-based value functions

Authors: F. Faccio, L. Kirsch, J. Schmidhuber
Published in: ICLR 2021, 2021
Publisher: ICLR 2021

Self-referential meta learning

Authors: L. Kirsch, J. Schmidhuber
Published in: ICML 2022 Workshop on Decision Awareness in Reinforcement Learning, 2022
Publisher: ICML 2022 Workshop on Decision Awareness in Reinforcement Learning

Mindstorms in Natural Language-Based Societies of Mind

Authors: M. Zhuge, H. Liu, F. Faccio, D. R. Ashley, R. Csordás, A. Gopalakrishnan, A. Hamdi, H. A. A. K. Hammoud, V. Herrmann, K. Irie, L. Kirsch, B. Li, G. Li, S. Liu, J. Mai, P. Piękos, A. Ramesh, I. Schlag, W. Shi, A. Stanić, W. Wang, Y. Wang, M. Xu, D.-P. Fan, B. Ghanem
Published in: NeurIPS 2023 Workshop on Robustness of Few-shot and Zero-shot Learning in Foundation Models, 2023
Publisher: NeurIPS 2023 Workshop on Robustness of Few-shot and Zero-shot Learning in Foundation Models
DOI: 10.48550/arxiv.2305.17066

On the Distillation of Stories for Transferring Narrative Arcs in Collections of Independent Media

Authors: D. R. Ashley*, V. Herrmann*, Z. Friggstad, J. Schmidhuber
Published in: NeurIPS 2023 Workshop on ML for Creativity and Design, 2023
Publisher: NeurIPS 2023 Workshop on ML for Creativity and Design

Reward-Weighted Regression Converges to a Global Optimum

Authors: M. Štrupl*, F. Faccio*, D. R. Ashley, R. K. Srivastava, J. Schmidhuber
Published in: ICML 2021 Workshop on Reinforcement Learning Theory, 2021
Publisher: ICML 2021 Workshop on Reinforcement Learning Theory

A Modern Self-Referential Weight Matrix That Learns to Modify Itself

Authors: K. Irie, I. Schlag, R. Csordás, J. Schmidhuber
Published in: ICML 2022, 2022
Publisher: ICML 2022
DOI: 10.48550/arxiv.2202.05780

Policy Optimization via Importance Sampling

Authors: A. M. Metelli, M. Papini, F. Faccio, M. Restelli
Published in: NeurIPS 2018, 2018
Publisher: NeurIPS 2018

Improving Differentiable Neural Computers Through Memory Masking, De-allocation, and Link Distribution Sharpness Control

Authors: R. Csordás, J. Schmidhuber
Published in: ICLR 2019, 2019
Publisher: ICLR 2019

Modular Networks: Learning to Decompose Neural Computation

Authors: L. Kirsch, J. Kunze, D. Barber
Published in: NeurIPS 2018, 2018
Publisher: NeurIPS 2018

Learning to Reason with Third Order Tensor Products

Authors: I. Schlag, J. Schmidhuber
Published in: NeurIPS 2018, 2018
Publisher: NeurIPS 2018

Recurrent World Models Facilitate Policy Evolution

Authors: D. Ha, J. Schmidhuber
Published in: NeurIPS 2018, 2018
Publisher: NeurIPS 2018

Enhancing the Transformer with Explicit Relational Encoding for Math Problem Solving

Authors: I. Schlag, P. Smolensky, R. Fernandez, N. Jojic, J. Schmidhuber, J. Gao
Published in: Under review, 2020
Publisher: Not disclosed yet

Improving Generalization in Meta Reinforcement Learning using Neural Objectives

Authors: L. Kirsch, S. van Steenkiste, J. Schmidhuber
Published in: ICLR 2020, 2020
Publisher: ICLR 2020

Learning Adaptive Control Flow in Transformers for Improved Systematic Generalization

Authors: R. Csordás, K. Irie, J. Schmidhuber
Published in: NeurIPS 2021 Workshop on Advances in Programming Languages and Neurosymbolic Systems, 2021
Publisher: NeurIPS 2021 Workshop on Advances in Programming Languages and Neurosymbolic Systems

Learning one abstract bit at a time through self-invented experiments encoded as neural networks

Authors: V. Herrmann, L. Kirsch, J. Schmidhuber
Published in: International Workshop on Active Inference 2023, 2023
Publisher: International Workshop on Active Inference 2023

Augmenting Classic Algorithms with Neural Components for Strong Generalisation on Ambiguous and High-Dimensional Data

Authors: I. Schlag, J. Schmidhuber
Published in: NeurIPS 2021 Workshop on Advances in Programming Languages and Neurosymbolic Systems, 2021
Publisher: NeurIPS 2021 Workshop on Advances in Programming Languages and Neurosymbolic Systems

Learning useful representations of recurrent neural network weight matrices

Authors: V. Herrmann, F. Faccio, J. Schmidhuber
Published in: NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations, 2023
Publisher: NeurIPS 2023 Workshop on Symmetry and Geometry in Neural Representations

CTL++: Evaluating Generalization on Never-Seen Compositional Patterns of Known Functions, and Compatibility of Neural Representations

Authors: R. Csordás, K. Irie, J. Schmidhuber
Published in: EMNLP 2022, 2022
Publisher: EMNLP 2022
DOI: 10.48550/arxiv.2210.06350

Self-referential meta learning

Authors: L. Kirsch, J. Schmidhuber
Published in: First Conference on Automated Machine Learning (Late-Breaking Workshop) 2022, 2022
Publisher: First Conference on Automated Machine Learning (Late-Breaking Workshop) 2022

Exploring through Random Curiosity with General Value Functions

Authors: A. Ramesh, L. Kirsch, S. van Steenkiste, J. Schmidhuber
Published in: NeurIPS 2022, 2022
Publisher: NeurIPS 2022

Parameter-based value functions

Authors: F. Faccio, L. Kirsch, J. Schmidhuber
Published in: NeurIPS 2020 Workshop on Deep Reinforcement Learning, 2021
Publisher: NeurIPS 2020 Workshop on Deep Reinforcement Learning

General Policy Evaluation and Improvement by Learning to Identify Few But Crucial States

Authors: F. Faccio, A. Ramesh, V. Herrmann, J. Harb, J. Schmidhuber
Published in: EWRL 2022, 2022
Publisher: EWRL 2022

Unsupervised Object Keypoint Learning using Local Spatial Predictability

Authors: A. Gopalakrishnan, S. van Steenkiste, J. Schmidhuber
Published in: ICLR 2021, 2021
Publisher: ICLR 2021

Are Neural Nets Modular? Inspecting Functional Modularity Through Differentiable Weight Masks

Authors: R. Csordás, S. van Steenkiste, J. Schmidhuber
Published in: ICLR 2021, 2021
Publisher: ICLR 2021

Contrastive Training of Complex-Valued Autoencoders for Object Discovery

Authors: A. Stanić*, A. Gopalakrishnan*, K. Irie, J. Schmidhuber
Published in: NeurIPS 2023, 2023
Publisher: NeurIPS 2023

Exploring through Random Curiosity with General Value Functions

Authors: A. Ramesh, L. Kirsch, S. van Steenkiste, J. Schmidhuber
Published in: NeurIPS 2021 Workshop on Deep Reinforcement Learning, 2021
Publisher: NeurIPS 2021 Workshop on Deep Reinforcement Learning

Meta learning backpropagation and improving it

Authors: L. Kirsch, J. Schmidhuber
Published in: NeurIPS 2021, 2021
Publisher: NeurIPS 2021

Continually Adapting Optimizers Improve Meta-Generalization

Authors: W. Wang, L. Kirsch, F. Faccio, M. Zhuge, J. Schmidhuber
Published in: NeurIPS 2023 Workshop on Distribution Shifts, 2023
Publisher: NeurIPS 2023 Workshop on Distribution Shifts

Learning to Control Rapidly Changing Synaptic Connections: An Alternative Type of Memory in Sequence Processing Artificial Neural Networks

Authors: K. Irie, J. Schmidhuber
Published in: NeurIPS 2022 Workshop on Memory in Artificial and Real Intelligence (MemARI), 2022
Publisher: NeurIPS 2022 Workshop on Memory in Artificial and Real Intelligence (MemARI)
DOI: 10.48550/arxiv.2211.09440

Goal-Conditioned Generators of Deep Policies

Authors: F. Faccio*, V. Herrmann*, A. Ramesh, L. Kirsch, J. Schmidhuber
Published in: ICML 2022 Workshop on Decision Awareness in Reinforcement Learning, 2022
Publisher: ICML 2022 Workshop on Decision Awareness in Reinforcement Learning

The Languini Kitchen: Enabling Language Modelling Research at Different Scales of Compute

Authors: A. Stanić, D. Ashley, O. Serikov, L. Kirsch, F. Faccio, J. Schmidhuber, T. Hofmann, I. Schlag
Published in: Not disclosed yet, 2023
Publisher: Not disclosed yet
DOI: 10.48550/arxiv.2309.11197

SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention

Authors: R. Csordás, P. Piękos, K. Irie, J. Schmidhuber
Published in: Not disclosed yet, 2023
Publisher: Not disclosed yet

Automatic Embedding of Stories Into Collections of Independent Media

Authors: D. R. Ashley, V. Herrmann, Z. Friggstad, K. W. Mathewson, J. Schmidhuber
Published in: Not disclosed yet, 2021
Publisher: Not disclosed yet
DOI: 10.48550/arxiv.2111.02216

Automating Continual Learning

Authors: K. Irie, R. Csordás, J. Schmidhuber
Published in: Not disclosed yet, 2023
Publisher: Not disclosed yet
DOI: 10.48550/arxiv.2312.00276

On the Study of Catastrophic Forgetting in Artificial Neural Networks and the Choice of Optimizer

Authors: D. R. Ashley, S. Ghiassian, R. S. Sutton
Published in: Not disclosed yet, 2021
Publisher: Not disclosed yet

All You Need Is Supervised Learning: From Imitation Learning to Meta-RL With Upside Down RL

Authors: K. Arulkumaran, D. R. Ashley, J. Schmidhuber, R. K. Srivastava
Published in: RLDM 2022, 2022
Publisher: RLDM 2022

Learning Relative Return Policies With Upside-Down Reinforcement Learning

Authors: D. R. Ashley, K. Arulkumaran, J. Schmidhuber, R. K. Srivastava
Published in: RLDM 2022, 2022
Publisher: RLDM 2022

Upside-Down Reinforcement Learning Can Diverge in Stochastic Environments With Episodic Resets

Authors: M. Štrupl, F. Faccio, D. R. Ashley, J. Schmidhuber, R. K. Srivastava
Published in: RLDM 2022, 2022
Publisher: RLDM 2022
DOI: 10.48550/arxiv.2205.06595

Bayesian Brains and the Rényi Divergence

Authors: N. Sajid*, F. Faccio*, L. Da Costa, T. Parr, J. Schmidhuber, K. Friston
Published in: Neural Computation, Issue 34, 829–855 (2022), 2021, ISSN 1530-888X
Publisher: Neural Computation, Cambridge: MIT Press

Learning to Generalize with Object-centric Agents in the Open World Survival Game Crafter

Authors: A. Stanic, Y. Tang, D. Ha, J. Schmidhuber
Published in: IEEE Transactions on Games, 2023, ISSN 2475-1510
Publisher: IEEE
DOI: 10.1109/tg.2023.3276849

Recurrent Neural-Linear Posterior Sampling for Nonstationary Contextual Bandits

Authors: A. Ramesh, P. Rauber, M. Conserva, J. Schmidhuber
Published in: Neural Computation, 2023, ISSN 0899-7667
Publisher: MIT Press

Generative Adversarial Networks are Special Cases of Artificial Curiosity (1990) and also Closely Related to Predictability Minimization (1991)

Authors: J. Schmidhuber
Published in: Neural Networks, 2020, ISSN 0893-6080
Publisher: Pergamon Press Ltd.
