Skeletal Keypoint-Based Transformer Model for Human Action Recognition in Aerial Videos

Bibliographic Details
Main Authors: Shahab Uddin, Tahir Nawaz, James Ferryman, Nasir Rashid, Md. Asaduzzaman, Raheel Nawaz
Format: Article
Language: English
Published: IEEE 2024-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/10400454/
Description
Summary: Several efforts have been made to develop effective and robust vision-based solutions for human action recognition in aerial videos. Generally, existing methods rely on the extraction of either spatial features (patch-based methods) or skeletal keypoints (pose-based methods) that are fed to a classifier. Unlike patch-based methods, pose-based methods are generally regarded as more robust to background changes and more computationally efficient. Moreover, at the classification stage, the use of deep networks has generated significant interest within the community; however, the need remains to develop accurate and computationally efficient deep learning-based solutions. To this end, this paper proposes a lightweight Transformer network-based method for human action recognition in aerial videos using skeletal keypoints extracted with YOLOv8. The effectiveness of the proposed method is shown on a well-known public dataset containing 13 action classes, achieving encouraging performance in terms of both accuracy and computational cost as compared to several existing related methods.
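The abstract outlines a two-stage pose-based pipeline: YOLOv8 extracts skeletal keypoints per frame, and a lightweight Transformer classifies the resulting keypoint sequence into one of 13 action classes. The following Python sketch illustrates that general idea only; it is not the authors' implementation. The model dimensions, sequence length (SEQ_LEN), helper names (KeypointTransformer, extract_keypoints), and the choice of the yolov8n-pose.pt checkpoint are illustrative assumptions, not values from the paper.

# A minimal sketch of a pose-based action recognition pipeline, assuming
# the ultralytics YOLOv8 pose API and PyTorch. Not the authors' code.
import torch
import torch.nn as nn
from ultralytics import YOLO  # assumes the ultralytics package is installed

NUM_KEYPOINTS = 17   # COCO-style keypoints produced by YOLOv8 pose models
NUM_CLASSES = 13     # action classes, per the dataset cited in the abstract
SEQ_LEN = 32         # assumed number of frames sampled per clip

class KeypointTransformer(nn.Module):
    """Classifies a sequence of 2D skeletal keypoints with a small Transformer."""
    def __init__(self, d_model=64, nhead=4, num_layers=2):
        super().__init__()
        # Each frame's (17 x 2) keypoints are flattened and linearly embedded.
        self.embed = nn.Linear(NUM_KEYPOINTS * 2, d_model)
        self.pos = nn.Parameter(torch.zeros(1, SEQ_LEN, d_model))
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.head = nn.Linear(d_model, NUM_CLASSES)

    def forward(self, kpts):              # kpts: (batch, SEQ_LEN, 17, 2)
        x = self.embed(kpts.flatten(2))   # -> (batch, SEQ_LEN, d_model)
        x = self.encoder(x + self.pos)    # self-attention over the frame axis
        return self.head(x.mean(dim=1))   # mean-pool over time, then classify

def extract_keypoints(frames):
    """Runs a YOLOv8 pose model on a list of frames (numpy BGR images) and
    returns the first detected person's keypoints per frame."""
    pose_model = YOLO("yolov8n-pose.pt")  # smallest YOLOv8 pose checkpoint
    seq = []
    for frame in frames:
        result = pose_model(frame, verbose=False)[0]
        if result.keypoints is not None and len(result.keypoints.xy) > 0:
            seq.append(result.keypoints.xy[0])         # (17, 2) tensor
        else:
            seq.append(torch.zeros(NUM_KEYPOINTS, 2))  # no person detected
    return torch.stack(seq)                            # (num_frames, 17, 2)

# Usage: classify a random keypoint sequence (stands in for real video input).
model = KeypointTransformer()
dummy = torch.randn(1, SEQ_LEN, NUM_KEYPOINTS, 2)
logits = model(dummy)
print(logits.argmax(dim=1))  # predicted action class index

Mean-pooling the frame embeddings keeps the classifier head small, which is one plausible way to pursue the computational efficiency the abstract emphasizes; the paper itself may use a different temporal aggregation.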
ISSN: 2169-3536