Fall Detection Algorithm Using Enhanced HRNet Combined with YOLO

Bibliographic Details
Main Authors: Huan Shi, Xiaopeng Wang, Jia Shi
Format: Article
Language: English
Published: MDPI AG 2025-07-01
Series: Sensors
Online Access: https://www.mdpi.com/1424-8220/25/13/4128
Description
Summary: To address the insufficient feature extraction, single fall-judgment criterion, and poor real-time performance of traditional fall detection algorithms in occluded scenes, a top-down fall detection algorithm based on an improved YOLOv8 combined with BAM-HRNet is proposed. First, the ShuffleNetV2 network is used to lighten the YOLOv8 backbone, and a mixed attention mechanism is connected stage-wise at the neck so that the network better captures human body position information. Second, an HRNet integrated with a channel attention mechanism effectively extracts the positions of skeletal key points. Then, by analyzing these key-point positions, the descent speed of the center of mass, the angular velocity between the trunk and the ground, and the human body height-to-width ratio are jointly used as the criteria for identifying fall behavior. In addition, when a suspected fall is detected, the system automatically activates a voice-inquiry mechanism to improve the accuracy of the fall judgment. The results show that the object detection module achieves an accuracy of 64.1% and 61.7% on the COCO and Pascal VOC datasets, respectively, while the key-point detection module reaches 73.49% and 70.11% on the COCO and OCHuman datasets, respectively. On the fall detection datasets, the proposed algorithm exceeds 95% accuracy at a frame rate of 18.1 fps. Compared with traditional algorithms, it better distinguishes normal behavior from falls.
ISSN:1424-8220
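
The abstract's joint decision rule (centroid descent speed, trunk-ground angular velocity, and height-to-width ratio computed from skeletal key points) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the keypoint names, normalized coordinates, and all thresholds are placeholder assumptions.

```python
# Hedged sketch of a three-criterion fall discriminant over 2D skeletal
# keypoints. Keypoint names, thresholds, and units are illustrative
# assumptions, not values taken from the paper.
import math


def center_of_mass(kpts):
    """Approximate the body's center of mass as the midpoint of the hips."""
    (lx, ly), (rx, ry) = kpts["left_hip"], kpts["right_hip"]
    return ((lx + rx) / 2.0, (ly + ry) / 2.0)


def trunk_angle(kpts):
    """Angle in degrees between the trunk (mid-shoulder to mid-hip)
    and the horizontal ground line (image y grows downward)."""
    sx = (kpts["left_shoulder"][0] + kpts["right_shoulder"][0]) / 2.0
    sy = (kpts["left_shoulder"][1] + kpts["right_shoulder"][1]) / 2.0
    hx, hy = center_of_mass(kpts)
    return math.degrees(math.atan2(abs(sy - hy), abs(sx - hx) or 1e-6))


def aspect_ratio(kpts):
    """Height-to-width ratio of the keypoint bounding box; a lying
    pose yields a ratio well below 1."""
    xs = [p[0] for p in kpts.values()]
    ys = [p[1] for p in kpts.values()]
    return (max(ys) - min(ys)) / max(max(xs) - min(xs), 1e-6)


def is_suspected_fall(prev_kpts, curr_kpts, dt,
                      v_thresh=0.9, w_thresh=45.0, ratio_thresh=1.0):
    """Flag a suspected fall only when all three criteria agree:
    fast downward centroid motion, fast trunk rotation toward the
    ground, and a wide (lying) pose. Thresholds are placeholders."""
    _, y0 = center_of_mass(prev_kpts)
    _, y1 = center_of_mass(curr_kpts)
    descent_speed = (y1 - y0) / dt  # downward is positive in image coords
    angular_speed = abs(trunk_angle(curr_kpts) - trunk_angle(prev_kpts)) / dt
    return (descent_speed > v_thresh
            and angular_speed > w_thresh
            and aspect_ratio(curr_kpts) < ratio_thresh)
```

In a full pipeline this predicate would run on consecutive pose-estimation frames, and a positive result would trigger the voice-inquiry step described in the abstract rather than an immediate alarm.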