Semantics-Assisted Training Graph Convolution Network for Skeleton-Based Action Recognition
| Main Authors: | , , , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2025-03-01 |
| Series: | Sensors |
| Subjects: | |
| Online Access: | https://www.mdpi.com/1424-8220/25/6/1841 |
| Summary: | Skeleton-based action recognition networks often focus on extracting features such as joints from samples while neglecting the semantic relationships inherent in actions, which also carry valuable information. To address this underutilization of semantic information, this paper proposes a semantics-assisted training graph convolution network (SAT-GCN). The features output by the skeleton encoder are divided into four parts and contrasted with the text features generated by a text encoder, and the resulting contrastive loss is used to guide the overall network training. This approach effectively improves recognition accuracy while reducing the number of model parameters. In addition, angle features are incorporated into the skeleton model input to aid in classifying similar actions. Finally, a multi-feature skeleton encoder is designed to separately extract joint, bone, and angle features, which are then integrated through feature fusion. The fused features pass through three graph convolution blocks before being fed into fully connected layers for classification. Extensive experiments were conducted on three large-scale datasets, NTU RGB+D 60, NTU RGB+D 120, and NW-UCLA, to validate the performance of the proposed model. The results show that SAT-GCN outperforms other methods in terms of both accuracy and parameter count. |
| ISSN: | 1424-8220 |
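
The summary describes the central training mechanism: the skeleton encoder's output is split into four parts, each part is contrasted with text-encoder features, and the resulting contrastive loss guides training alongside the usual classification objective. The sketch below illustrates that idea only; the function name, the per-class text embeddings, the cross-entropy-style contrastive term, the temperature, and the weighting coefficient `lambda_sem` are assumptions made for illustration, not the paper's released code.

```python
# Minimal sketch of a semantics-assisted contrastive objective, assuming a
# (4*D)-dimensional skeleton feature, frozen per-class text embeddings of
# dimension D, and an InfoNCE/cross-entropy-style contrastive term.
import torch
import torch.nn.functional as F


def semantics_assisted_loss(skel_feats: torch.Tensor,
                            text_feats: torch.Tensor,
                            labels: torch.Tensor,
                            temperature: float = 0.07) -> torch.Tensor:
    """Contrast four slices of the skeleton feature with class text embeddings.

    skel_feats: (N, 4*D) features from the skeleton encoder for N samples.
    text_feats: (C, D)   one embedding per action class from a text encoder.
    labels:     (N,)     ground-truth class indices.
    """
    text_feats = F.normalize(text_feats, dim=-1)
    loss = skel_feats.new_zeros(())
    for part in torch.chunk(skel_feats, 4, dim=-1):     # split into four parts
        part = F.normalize(part, dim=-1)
        logits = part @ text_feats.t() / temperature    # (N, C) similarity scores
        loss = loss + F.cross_entropy(logits, labels)   # pull toward own class text
    return loss / 4.0


if __name__ == "__main__":
    torch.manual_seed(0)
    num_classes, feat_dim = 60, 64                      # e.g. 60 classes as in NTU RGB+D 60
    skel_feats = torch.randn(8, 4 * feat_dim)           # stand-in skeleton encoder output
    text_feats = torch.randn(num_classes, feat_dim)     # stand-in text encoder output
    labels = torch.randint(0, num_classes, (8,))
    cls_logits = torch.randn(8, num_classes, requires_grad=True)

    # Total training loss: classification loss plus the contrastive guidance term,
    # weighted by a hypothetical coefficient lambda_sem.
    lambda_sem = 0.5
    total = F.cross_entropy(cls_logits, labels) + \
        lambda_sem * semantics_assisted_loss(skel_feats, text_feats, labels)
    print(float(total))
```

In this reading, the contrastive term acts only as auxiliary supervision during training, so the text encoder adds no parameters or compute at inference time, which is consistent with the summary's claim of improved accuracy with a reduced parameter count.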