Cross-modal gated feature enhancement for multimodal emotion recognition in conversations
Abstract
Emotion recognition in conversations (ERC), which involves identifying the emotional state of each utterance within a dialogue, plays a vital role in developing empathetic artificial intelligence systems. In practical applications, such as video-based recruitment interviews, customer servic...
| Main Authors: | Shiyun Zhao, Jinchang Ren, Xiaojuan Zhou |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Nature Portfolio, 2025-08-01 |
| Series: | Scientific Reports |
| Online Access: | https://doi.org/10.1038/s41598-025-11989-6 |
Similar Items
- Modality-Guided Refinement Learning for Multimodal Emotion Recognition
  by: Sunyoung Cho
  Published: (2025-01-01)
- A Benchmark Dataset and a Framework for Urdu Multimodal Named Entity Recognition
  by: Hussain Ahmad, et al.
  Published: (2025-01-01)
- Graph convolutional network model with a feature compensation module and dual-channel second-order pooling module for multimodal emotion recognition in conversation
  by: Xiaocong Tan, et al.
  Published: (2025-07-01)
- Research on Emotion Classification Based on Multi-modal Fusion
  by: Zhihua Xiang, et al.
  Published: (2024-02-01)
- Attention-Based Multi-Learning Approach for Speech Emotion Recognition With Dilated Convolution
  by: Samuel Kakuba, et al.
  Published: (2023)