Incremental accumulation of linguistic context in artificial and biological neural networks
Abstract: Large Language Models (LLMs) have shown success in predicting neural signals associated with narrative processing, but their approach to integrating context over large timescales differs fundamentally from that of the human brain. In this study, we show how the brain, unlike LLMs that proce...
| Main Authors: | Refael Tikochinski, Ariel Goldstein, Yoav Meiri, Uri Hasson, Roi Reichart |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Nature Portfolio, 2025-01-01 |
| Series: | Nature Communications |
| Online Access: | https://doi.org/10.1038/s41467-025-56162-9 |
Similar Items

- Estimation of Maximum Daily Fresh Snow Accumulation Using an Artificial Neural Network Model
  by: Gun Lee, et al.
  Published: (2019-01-01)
- Learning Automata Based Incremental Learning Method for Deep Neural Networks
  by: Haonan Guo, et al.
  Published: (2019-01-01)
- Compensating Sparse-view Inline Computed Tomography Artifacts with Neural Representation and Incremental Forward-Backward Network Architecture
  by: Manuel Buchfink, et al.
  Published: (2025-02-01)
- Addressing Distribution Discrepancies in Pulsar Candidate Identification via Bayesian-neural-network-based Multimodal Incremental Learning
  by: Yi Liu, et al.
  Published: (2025-01-01)
- An Incremental Regularization Kernel Randomized Neural Network for Electrical Energy Output Prediction in Combined Cycle Power Plant
  by: Xinlei Li, et al.
  Published: (2024-01-01)