Taming the chaos gently: a predictive alignment learning rule in recurrent neural networks

Abstract: Recurrent neural circuits often face inherent complexities in learning and generating their desired outputs, especially when they initially exhibit chaotic spontaneous activity. While the celebrated FORCE learning rule can train chaotic recurrent networks to produce coherent patterns by suppressing chaos, it requires non-local plasticity rules and quick plasticity, raising the question of how synapses adapt on local, biologically plausible timescales to handle potential chaotic dynamics. We propose a novel framework called “predictive alignment”, which tames the chaotic recurrent dynamics to generate a variety of patterned activities via a biologically plausible plasticity rule. Unlike most recurrent learning rules, predictive alignment does not aim to directly minimize output error to train recurrent connections, but rather it tries to efficiently suppress chaos by aligning recurrent prediction with chaotic activity. We show that the proposed learning rule can perform supervised learning of multiple target signals, including complex low-dimensional attractors, delay matching tasks that require short-term temporal memory, and finally even dynamic movie clips with high-dimensional pixels. Our findings shed light on how predictions in recurrent circuits can support learning.
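To make the setting in the abstract concrete, the following is a minimal NumPy sketch of that kind of system: a rate network whose fixed random weights (gain g > 1) generate chaotic spontaneous activity, plus plastic recurrent weights trained online by a purely local, Hebbian-like delta rule that nudges each unit's plastic recurrent input toward a per-unit teaching current. The update rule, the teaching-current construction, and all constants and variable names are illustrative assumptions, not the predictive-alignment rule published in the paper.

```python
# Illustrative toy model only (not the authors' published rule): a chaotic
# rate network with fixed random weights J plus plastic weights W that are
# updated with a local delta rule driven by each unit's own error signal.
import numpy as np

rng = np.random.default_rng(0)
N, T = 300, 2000                 # network size, number of time steps
dt, tau = 1e-3, 10e-3            # integration step, membrane time constant
g = 1.5                          # gain > 1: chaotic spontaneous activity
J = g * rng.standard_normal((N, N)) / np.sqrt(N)   # fixed chaotic weights
W = np.zeros((N, N))             # plastic recurrent weights (learned)
w_out = rng.standard_normal(N) / np.sqrt(N)        # fixed linear readout
eta = 5e-4                       # local learning rate

t_axis = np.arange(T) * dt
target = np.sin(2 * np.pi * 2.0 * t_axis)          # 2 Hz target signal
u_fb = rng.standard_normal(N)                      # random feedback vector
teach = np.outer(u_fb, target)   # per-unit teaching current, shape (N, T)

x = 0.5 * rng.standard_normal(N)
for t in range(T):
    r = np.tanh(x)
    plastic_in = W @ r
    # local error: each unit compares its own teaching current with its
    # own plastic recurrent input; no global error is broadcast
    err = teach[:, t] - plastic_in
    W += eta * np.outer(err, r)                    # Hebbian-like update
    x += dt / tau * (-x + J @ r + plastic_in)      # Euler integration

print("final readout:", float(w_out @ np.tanh(x)))
```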

Bibliographic Details
Main Authors: Toshitake Asabuki (RIKEN Center for Brain Science, RIKEN ECL Research Unit), Claudia Clopath (Department of Bioengineering, Imperial College London)
Format: Article
Language: English
Published: Nature Portfolio, 2025-07-01
Series: Nature Communications
ISSN: 2041-1723
Online Access: https://doi.org/10.1038/s41467-025-61309-9
Collection: DOAJ
Institution: Kabale University
Record ID: doaj-art-951d6baf49ee4bcc94a2209b3a5c5891