Memory consolidation from a reinforcement learning perspective
Memory consolidation refers to the process of converting temporary memories into long-lasting ones. It is widely accepted that new experiences are initially stored in the hippocampus as rapid associative memories, which then undergo a consolidation process to establish more permanent traces in other regions of the brain.
Saved in:
| Main Authors: | Jong Won Lee, Min Whan Jung |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Frontiers Media S.A., 2025-01-01 |
| Series: | Frontiers in Computational Neuroscience |
| Subjects: | simulation-selection model; offline learning; value; dyna; imagination; CA3 |
| Online Access: | https://www.frontiersin.org/articles/10.3389/fncom.2024.1538741/full |
author | Jong Won Lee; Min Whan Jung |
author_sort | Jong Won Lee |
collection | DOAJ |
description | Memory consolidation refers to the process of converting temporary memories into long-lasting ones. It is widely accepted that new experiences are initially stored in the hippocampus as rapid associative memories, which then undergo a consolidation process to establish more permanent traces in other regions of the brain. Over the past two decades, studies in humans and animals have demonstrated that the hippocampus is crucial not only for memory but also for imagination and future planning, with the CA3 region playing a pivotal role in generating novel activity patterns. Additionally, a growing body of evidence indicates the involvement of the hippocampus, especially the CA1 region, in valuation processes. Based on these findings, we propose that the CA3 region of the hippocampus generates diverse activity patterns, while the CA1 region evaluates and reinforces those patterns most likely to maximize rewards. This framework closely parallels Dyna, a reinforcement learning algorithm introduced by Sutton in 1991. In Dyna, an agent performs offline simulations to supplement trial-and-error value learning, greatly accelerating the learning process. We suggest that memory consolidation might be viewed as a process of deriving optimal strategies through simulations grounded in limited experiences, rather than merely strengthening incidental memories. From this perspective, memory consolidation functions as a form of offline reinforcement learning, aimed at enhancing adaptive decision-making. |
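The Dyna scheme the abstract refers to (each real transition updates a value function directly and also trains an internal model, which is then replayed offline as simulated experience) can be illustrated with a minimal tabular Dyna-Q sketch. The `Chain` toy environment and all names below are illustrative, not taken from the paper:

```python
import random
from collections import defaultdict

class Chain:
    """Toy corridor: move right from state 0 to state 3 to earn a reward of 1."""
    actions = ("left", "right")

    def reset(self):
        self.s = 0
        return self.s

    def step(self, a):
        self.s = min(self.s + 1, 3) if a == "right" else max(self.s - 1, 0)
        done = self.s == 3
        return self.s, (1.0 if done else 0.0), done

def dyna_q(env, episodes=30, n_planning=10, alpha=0.1, gamma=0.95, eps=0.1):
    """Tabular Dyna-Q: real experience drives both a direct Q update and a
    learned model; the model is replayed n_planning times per step (offline
    simulation) to accelerate value learning."""
    Q = defaultdict(float)   # Q[(state, action)] -> estimated value
    model = {}               # model[(state, action)] -> (reward, next_state, done)
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            # epsilon-greedy action selection
            if random.random() < eps:
                a = random.choice(env.actions)
            else:
                a = max(env.actions, key=lambda act: Q[(s, act)])
            s2, r, done = env.step(a)
            # (a) direct RL update from the real transition
            best_next = 0.0 if done else max(Q[(s2, act)] for act in env.actions)
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            # (b) model learning: remember what this transition did
            model[(s, a)] = (r, s2, done)
            # (c) planning: replay random remembered transitions offline
            for _ in range(n_planning):
                (ps, pa), (pr, ps2, pdone) = random.choice(list(model.items()))
                p_best = 0.0 if pdone else max(Q[(ps2, act)] for act in env.actions)
                Q[(ps, pa)] += alpha * (pr + gamma * p_best - Q[(ps, pa)])
            s = s2
    return Q
```

In the paper's proposed mapping, step (c) loosely corresponds to consolidation: CA3-like pattern generation supplies simulated transitions, and CA1-like valuation decides which are reinforced.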
format | Article |
id | doaj-art-538047fba5074e0880c5dc7fb47c0138 |
institution | Kabale University |
issn | 1662-5188 |
language | English |
publishDate | 2025-01-01 |
publisher | Frontiers Media S.A. |
record_format | Article |
series | Frontiers in Computational Neuroscience |
affiliations | Jong Won Lee: Center for Synaptic Brain Dysfunctions, Institute for Basic Science, Daejeon, Republic of Korea; Min Whan Jung: Center for Synaptic Brain Dysfunctions, Institute for Basic Science, Daejeon, Republic of Korea, and Department of Biological Sciences, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea |
title | Memory consolidation from a reinforcement learning perspective |
topic | simulation-selection model; offline learning; value; dyna; imagination; CA3 |
url | https://www.frontiersin.org/articles/10.3389/fncom.2024.1538741/full |