Domain adaptation spatial feature perception neural network for cross-subject EEG emotion recognition


Bibliographic Details
Main Authors: Wei Lu, Xiaobo Zhang, Lingnan Xia, Hua Ma, Tien-Ping Tan
Format: Article
Language: English
Published: Frontiers Media S.A. 2024-12-01
Series: Frontiers in Human Neuroscience
Subjects:
Online Access: https://www.frontiersin.org/articles/10.3389/fnhum.2024.1471634/full
Description
Summary: Emotion recognition is a critical research topic within affective computing, with potential applications across many domains. EEG-based emotion recognition built on deep learning frameworks has been applied effectively and achieves strong performance. However, existing deep learning models struggle to capture the spatial activity features and the spatial topology features of EEG signals simultaneously. To address this challenge, a domain adaptation spatial feature perception network, named DSP-EmotionNet, is proposed for cross-subject EEG emotion recognition. First, a spatial activity topological feature extractor module, named SATFEM, is designed to capture the spatial activity and spatial topology features of EEG signals. Then, DSP-EmotionNet is built with SATFEM as its feature extractor, substantially improving accuracy on cross-subject EEG emotion recognition. The proposed model surpasses state-of-the-art methods on cross-subject EEG emotion recognition, achieving an average recognition accuracy of 82.5% on the SEED dataset and 65.9% on the SEED-IV dataset.
ISSN: 1662-5161
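
The summary only describes the architecture at a high level. The sketch below is a minimal, hypothetical PyTorch illustration of how a DSP-EmotionNet-style model could be organized: a SATFEM-style extractor with one convolutional branch over a 2-D electrode grid (spatial activity) and one graph-convolution branch over an electrode adjacency (spatial topology), followed by an emotion classifier and a gradient-reversal domain classifier for cross-subject adaptation. All class names, layer sizes, input shapes, and the choice of gradient-reversal adaptation are illustrative assumptions, not details taken from the article.

# Hypothetical sketch of a DSP-EmotionNet-style model; module names and
# layer sizes are illustrative assumptions, not the authors' design.
import torch
import torch.nn as nn

class GradientReversal(torch.autograd.Function):
    """Flips gradients during backprop (standard DANN-style domain adaptation)."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class SATFEMSketch(nn.Module):
    """Illustrative feature extractor combining two spatial views of EEG:
    a CNN branch over a 2-D electrode map (spatial activity) and a
    graph-convolution branch over an electrode adjacency (spatial topology)."""
    def __init__(self, n_channels=62, n_bands=5, adj=None):
        super().__init__()
        # Activity branch: treat band-power maps (bands x 9 x 9 grid) as an image.
        self.cnn = nn.Sequential(
            nn.Conv2d(n_bands, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Topology branch: one graph-convolution step X' = A X W over electrodes.
        self.register_buffer("adj", adj if adj is not None
                             else torch.eye(n_channels))
        self.gcn_weight = nn.Linear(n_bands, 16)
        self.out_dim = 16 + 16

    def forward(self, x_grid, x_nodes):
        # x_grid: (batch, bands, 9, 9); x_nodes: (batch, channels, bands)
        h_act = self.cnn(x_grid)                                              # (batch, 16)
        h_top = torch.relu(self.gcn_weight(self.adj @ x_nodes)).mean(dim=1)   # (batch, 16)
        return torch.cat([h_act, h_top], dim=1)

class DSPEmotionNetSketch(nn.Module):
    """Feature extractor + emotion classifier + gradient-reversed domain classifier."""
    def __init__(self, n_emotions=3):
        super().__init__()
        self.features = SATFEMSketch()
        self.emotion_head = nn.Linear(self.features.out_dim, n_emotions)
        self.domain_head = nn.Linear(self.features.out_dim, 2)  # source vs. target subject

    def forward(self, x_grid, x_nodes, lam=1.0):
        h = self.features(x_grid, x_nodes)
        return self.emotion_head(h), self.domain_head(GradientReversal.apply(h, lam))

# Example forward pass with random tensors shaped like SEED-style band-power features.
model = DSPEmotionNetSketch()
emotion_logits, domain_logits = model(torch.randn(4, 5, 9, 9), torch.randn(4, 62, 5))

In this kind of setup, the emotion head is trained on labeled source-subject data while the gradient-reversed domain head pushes the shared features toward being subject-invariant, which is one common way to realize the cross-subject domain adaptation the summary refers to.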