Leveraging multimodal large language model for multimodal sequential recommendation
Abstract: Multimodal large language models (MLLMs) have demonstrated remarkable superiority in various vision-language tasks due to their unparalleled cross-modal comprehension capabilities and extensive world knowledge, offering promising research paradigms to address the insufficient information ex...
| Main Authors: | Zhaoliang Wang, Baisong Liu, Weiming Huang, Tingting Hao, Huiqian Zhou, Yuxin Guo |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Nature Portfolio, 2025-08-01 |
| Series: | Scientific Reports |
| Subjects: | |
| Online Access: | https://doi.org/10.1038/s41598-025-14251-1 |
Similar Items
- DALLRec: an effective data augmentation framework with fine-tuning large language model for recommendation
  by: Hongzan Mao, et al.
  Published: (2025-08-01)
- Emotion and sentiment enriched decision transformer for personalized recommendations
  by: Sana Abakarim, et al.
  Published: (2025-07-01)
- Cross-Domain Recommendation With Personalized Rating Pattern Compatibility as Transfer Rate Between Domains
  by: Natthapol Maneechote, et al.
  Published: (2025-01-01)
- Use of the ESP block as a component of blended anesthesia in abdominal hysterectomy surgeries
  by: А.В. Рижковський
  Published: (2024-12-01)
- Learning to represent causality in recommender systems driven by large language models (LLMs)
  by: Serge Stéphane Aman, et al.
  Published: (2025-08-01)