Leveraging multimodal large language model for multimodal sequential recommendation

Abstract: Multimodal large language models (MLLMs) have demonstrated remarkable superiority in various vision-language tasks due to their unparalleled cross-modal comprehension capabilities and extensive world knowledge, offering promising research paradigms to address the insufficient information ex...


Bibliographic Details
Main Authors: Zhaoliang Wang, Baisong Liu, Weiming Huang, Tingting Hao, Huiqian Zhou, Yuxin Guo
Format: Article
Language: English
Published: Nature Portfolio 2025-08-01
Series: Scientific Reports
Online Access: https://doi.org/10.1038/s41598-025-14251-1

Similar Items