A pedestrian group crossing intention prediction model integrating spatiotemporal features

Bibliographic Details
Main Authors: Hai Zou, Yongqing Guo, Fulu Wei, Dong Guo, Qingyin Li, Jahongir Pirov
Format: Article
Language: English
Published: Nature Portfolio 2025-07-01
Series: Scientific Reports
Online Access: https://doi.org/10.1038/s41598-025-05128-4
Description
Summary: Pedestrians, as vulnerable road users (VRUs), lack effective protective measures in traffic accidents, making them highly susceptible to injury. Accurate prediction of pedestrian behavior is therefore crucial for road safety, traffic management systems, advanced driver-assistance systems (ADAS), and autonomous vehicle development. Against this backdrop, this paper proposes a pedestrian group crossing intention prediction model that integrates spatiotemporal features to improve the accuracy of pedestrian behavior prediction in autonomous driving scenarios. The integrated features comprise pedestrian pose key points, 2D positional trajectories, and group information. Experimental results on the JAADbeh and JAADall datasets demonstrate that the proposed model achieves superior accuracy, precision, and F1-score. Notably, on the larger and more complex JAADall dataset, the model reaches an accuracy of 0.82, underscoring its robustness. Furthermore, the findings show that incorporating pedestrian group information improves prediction accuracy, particularly in scenarios with pedestrian groups, where this feature markedly enhances detection performance. This study provides a reliable pedestrian intention prediction framework for autonomous driving and intelligent transportation systems, and lays a foundation for future exploration of diverse non-visual features and complex scenarios.
ISSN: 2045-2322
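
Note: The abstract indicates that the model fuses three feature streams: pose key points, 2D positional trajectories, and group information. The paper's actual architecture is available via the DOI above; purely as a rough, non-authoritative sketch of what such spatiotemporal fusion can look like, the short PyTorch example below encodes the two sequence modalities with GRUs, embeds a static group descriptor with a small MLP, and concatenates the results for a binary crossing/not-crossing classifier. All dimensions, the GRU encoders, the group descriptor, and the class name CrossingIntentionNet are illustrative assumptions, not the authors' design.

import torch
import torch.nn as nn

class CrossingIntentionNet(nn.Module):
    """Hypothetical fusion sketch: separate encoders for pose, trajectory,
    and group features, followed by a binary intention classifier."""
    def __init__(self, pose_dim=34, traj_dim=2, group_dim=4, hidden=64):
        super().__init__()
        # Temporal encoders for the two sequence modalities (assumed GRUs)
        self.pose_rnn = nn.GRU(pose_dim, hidden, batch_first=True)
        self.traj_rnn = nn.GRU(traj_dim, hidden, batch_first=True)
        # Static group descriptor, e.g. group size and mean spacing (assumed)
        self.group_mlp = nn.Sequential(nn.Linear(group_dim, hidden), nn.ReLU())
        self.head = nn.Linear(3 * hidden, 1)  # binary: crossing vs. not

    def forward(self, pose_seq, traj_seq, group_feat):
        _, h_pose = self.pose_rnn(pose_seq)   # final hidden state (1, B, hidden)
        _, h_traj = self.traj_rnn(traj_seq)
        h_group = self.group_mlp(group_feat)  # (B, hidden)
        fused = torch.cat([h_pose[-1], h_traj[-1], h_group], dim=-1)
        return torch.sigmoid(self.head(fused))  # crossing probability

# Toy forward pass: batch of 8 pedestrians, 16 observed frames,
# 17 COCO-style keypoints (x, y) flattened to 34 values per frame.
model = CrossingIntentionNet()
p = model(torch.randn(8, 16, 34), torch.randn(8, 16, 2), torch.randn(8, 4))
print(p.shape)  # torch.Size([8, 1])

In a pipeline of the kind the abstract describes, the pose sequence would typically come from a 2D pose estimator (17 COCO-style keypoints give the 34 values per frame assumed here) and the trajectory from the pedestrian's position over the observed frames; the group descriptor stands in for whatever group information the paper's model actually uses.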