Leveraging multimodal large language model for multimodal sequential recommendation

Bibliographic Details
Main Authors: Zhaoliang Wang, Baisong Liu, Weiming Huang, Tingting Hao, Huiqian Zhou, Yuxin Guo
Format: Article
Language: English
Published: Nature Portfolio 2025-08-01
Series: Scientific Reports
Subjects:
Online Access: https://doi.org/10.1038/s41598-025-14251-1
Description
Summary: Multimodal large language models (MLLMs) have demonstrated remarkable superiority across vision-language tasks, owing to their cross-modal comprehension capabilities and extensive world knowledge, and they offer a promising research paradigm for addressing the insufficient exploitation of information in conventional multimodal recommendation systems. Despite significant advances, existing recommendation approaches based on large language models still exhibit notable limitations in multimodal feature recognition and dynamic preference modeling, particularly when handling sequential data. Most of them rely predominantly on unimodal user-item interaction information and fail to adequately capture cross-modal preference differences and the dynamic evolution of user interests within multimodal interaction sequences. These shortcomings have prevented current research from fully unlocking the value of MLLMs in recommendation systems. To address these challenges, we present MLLM-SRec, a sequential recommendation architecture built upon MLLMs. First, a multimodal feature fusion mechanism based on MLLMs generates unified semantic representations of items, aligning vision and text semantically and mitigating cross-modal discrepancies and visual noise. Second, a temporal-aware user behavior comprehension module captures how user preferences evolve over time. Finally, by jointly modeling dynamic user preferences, user profiles, and the multimodal information of the target item, supervised fine-tuning is combined with multi-step Chain-of-Thought prompt optimization to transfer knowledge from the pre-trained multimodal model to the recommendation task, alleviating the under-utilization of multimodal interaction data in generative recommendation. Experimental results on four benchmark datasets show that our method achieves significant improvements over state-of-the-art baselines, substantially improving recommendation accuracy while exhibiting superior robustness and adaptability in multimodal sequential recommendation scenarios. These findings provide new methodological insights for multimodal sequential recommendation research and validate the potential of MLLMs for sequential recommendation tasks. Our code and data are available at https://github.com/MLLM-SRec.
ISSN: 2045-2322
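
To make the workflow sketched in the abstract more concrete, the following minimal Python sketch illustrates how an MLLM-based item fusion step and a multi-step, Chain-of-Thought-style recommendation prompt could be composed. It is not the authors' released implementation (see the repository linked above); `Item`, `query_mllm`, and the prompt wording are hypothetical placeholders standing in for a real multimodal LLM call and the paper's actual prompt templates.

```python
# Illustrative two-stage prompting sketch (not the MLLM-SRec implementation):
# (1) fuse an item's image and text metadata into one unified semantic description,
# (2) build a temporal, chain-of-thought style recommendation prompt from the
#     user's interaction sequence and a candidate item.
# `query_mllm` is a hypothetical placeholder for any multimodal LLM API.

from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Item:
    title: str
    description: str
    image_path: str  # path to the product image that would be fed to the MLLM


def query_mllm(prompt: str, image_paths: Optional[List[str]] = None) -> str:
    """Placeholder for a multimodal LLM call; here it only echoes prompt metadata."""
    return f"[MLLM output for prompt of {len(prompt)} chars, {len(image_paths or [])} image(s)]"


def fuse_item(item: Item) -> str:
    """Stage 1: ask the MLLM for a unified text representation of a single item."""
    prompt = (
        "Summarize this product in one sentence, combining what the image shows "
        f"with the text metadata. Title: {item.title}. Description: {item.description}."
    )
    return query_mllm(prompt, image_paths=[item.image_path])


def build_recommendation_prompt(history: List[Item], candidate: Item) -> str:
    """Stage 2: compose a temporal, multi-step reasoning prompt over the sequence."""
    fused_history = [f"{i + 1}. {fuse_item(it)}" for i, it in enumerate(history)]
    return (
        "The user interacted with these items in chronological order:\n"
        + "\n".join(fused_history)
        + "\n\nStep 1: describe how the user's interests evolved over time."
        + "\nStep 2: decide whether the candidate item below matches the current interest."
        + f"\nCandidate: {fuse_item(candidate)}"
        + "\nAnswer yes or no with a brief justification."
    )


if __name__ == "__main__":
    history = [
        Item("Trail running shoes", "Lightweight, grippy sole", "img/shoes.jpg"),
        Item("Hydration vest", "2L capacity, reflective strips", "img/vest.jpg"),
    ]
    candidate = Item("GPS sports watch", "Tracks pace and heart rate", "img/watch.jpg")
    print(build_recommendation_prompt(history, candidate))
```

Running the script simply prints the assembled prompt; in the setting described by the abstract, such a prompt would instead be passed to the fine-tuned multimodal model, whose output scores or selects the candidate item.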