STAFNet: an adaptive multi-feature learning network via spatiotemporal fusion for EEG-based emotion recognition

Bibliographic Details
Main Authors: Fo Hu, Kailun He, Mengyuan Qian, Xiaofeng Liu, Zukang Qiao, Lekai Zhang, Junlong Xiong
Format: Article
Language: English
Published: Frontiers Media S.A. 2024-12-01
Series: Frontiers in Neuroscience
Subjects: EEG, emotion recognition, deep learning, spatiotemporal fusion, adaptive adjacency matrix
Online Access: https://www.frontiersin.org/articles/10.3389/fnins.2024.1519970/full
author Fo Hu
Kailun He
Mengyuan Qian
Xiaofeng Liu
Zukang Qiao
Lekai Zhang
Junlong Xiong
collection DOAJ
description Introduction: Emotion recognition using electroencephalography (EEG) is a key aspect of brain-computer interface research. Achieving precision requires effectively extracting and integrating both spatial and temporal features. However, many studies focus on a single dimension, neglecting the interplay and complementarity of multi-feature information, and the importance of fully integrating spatial and temporal dynamics to enhance performance. Methods: We propose the Spatiotemporal Adaptive Fusion Network (STAFNet), a novel framework combining adaptive graph convolution and temporal transformers to enhance the accuracy and robustness of EEG-based emotion recognition. The model includes an adaptive graph convolutional module to capture brain connectivity patterns through spatial dynamic evolution and a multi-structured transformer fusion module to integrate latent correlations between spatial and temporal features for emotion classification. Results: Extensive experiments were conducted on the SEED and SEED-IV datasets to evaluate the performance of STAFNet. The model achieved accuracies of 97.89% and 93.64%, respectively, outperforming state-of-the-art methods. Interpretability analyses, including confusion matrices and t-SNE visualizations, were employed to examine the influence of different emotions on the model's recognition performance. Furthermore, an investigation of varying GCN layer depths demonstrated that STAFNet effectively mitigates the over-smoothing issue in deeper GCN architectures. Discussion: In summary, the findings validate the effectiveness of STAFNet in EEG-based emotion recognition. The results emphasize the critical role of spatiotemporal feature extraction and introduce an innovative framework for feature fusion, advancing the state of the art in emotion recognition.
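Note: the abstract describes an adaptive graph convolutional module built on a learnable (adaptive) adjacency matrix over EEG channels. As a rough illustration only, the PyTorch-style sketch below shows one common way such a layer can be written; the class name, tensor shapes, and softmax normalization are assumptions for this sketch and are not taken from the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveGraphConv(nn.Module):
    """Hypothetical adaptive graph convolution: the adjacency matrix is a learnable parameter."""
    def __init__(self, num_channels: int, in_features: int, out_features: int):
        super().__init__()
        # Learnable adjacency over EEG channels (e.g., 62 for SEED), refined during training.
        self.adj = nn.Parameter(torch.eye(num_channels) + 0.01 * torch.randn(num_channels, num_channels))
        self.lin = nn.Linear(in_features, out_features)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, features)
        a = F.softmax(F.relu(self.adj), dim=-1)    # non-negative, row-normalized adjacency
        x = torch.einsum("ij,bjf->bif", a, x)      # propagate features along learned edges
        return F.relu(self.lin(x))                 # per-channel feature transform

# Usage sketch: 62 EEG channels, 5 band features per channel, 32 output features.
layer = AdaptiveGraphConv(num_channels=62, in_features=5, out_features=32)
out = layer(torch.randn(8, 62, 5))                 # -> torch.Size([8, 62, 32])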
format Article
id doaj-art-7ca728a2e0794dde896c5eb4083e3c6f
institution Kabale University
issn 1662-453X
language English
publishDate 2024-12-01
publisher Frontiers Media S.A.
record_format Article
series Frontiers in Neuroscience
spelling Frontiers Media S.A., Frontiers in Neuroscience, vol. 18, article 1519970, 2024-12-01, doi: 10.3389/fnins.2024.1519970
Author affiliations:
Fo Hu, Kailun He, Mengyuan Qian, Xiaofeng Liu: College of Information Engineering, Zhejiang University of Technology, Hangzhou, China
Zukang Qiao, Junlong Xiong: Department of Tuina, The First Affiliated Hospital of Zhejiang Chinese Medical University (Zhejiang Provincial Hospital of Chinese Medicine), Hangzhou, China
Lekai Zhang: The School of Design and Architecture, Zhejiang University of Technology, Hangzhou, China
title STAFNet: an adaptive multi-feature learning network via spatiotemporal fusion for EEG-based emotion recognition
topic EEG
emotion recognition
deep learning
spatiotemporal fusion
adaptive adjacency matrix
url https://www.frontiersin.org/articles/10.3389/fnins.2024.1519970/full