Neonatal pose estimation in the unaltered clinical environment with fusion of RGB, depth and IR images
| Main Authors: | , , , , , , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Nature Portfolio, 2025-08-01 |
| Series: | npj Digital Medicine |
| Online Access: | https://doi.org/10.1038/s41746-025-01929-z |
| Summary: | Abstract Visual monitoring of pre-term infants in intensive care is critical to ensuring proper development and treatment. Camera systems have been explored for this purpose, with human pose estimation having applications in monitoring position, motion, behaviour and vital signs. Validation across the full range of clinical visual scenarios is necessary to prove real-life utility. We conducted a clinical study to collect RGB, depth and infra-red video from 24 participants with no modifications to clinical care. We propose and train image-fusion pose estimation algorithms for locating the torso key-points. Our best-performing approach, a late-fusion method, achieves an average precision score of 0.811. Chest covering and side lying decrease the object key-point similarity score by 0.15 and 0.1 respectively, while these scenarios occur 50% and 44% of the time. The baby’s positioning and covering support their development and comfort, and these scenarios should therefore be considered when validating visual monitoring algorithms. |
| ISSN: | 2398-6352 |
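The abstract reports results in terms of object key-point similarity (OKS), the standard COCO-style metric behind average-precision scores for pose estimation. As a rough illustration of how such a score is computed, the sketch below implements the COCO OKS formula; the per-keypoint constants `k` and the object scale `s` here are placeholder values, not the ones used in the paper.

```python
import numpy as np

def oks(pred, gt, visibility, scale, k):
    """COCO-style object keypoint similarity.

    pred, gt   : (N, 2) arrays of predicted / ground-truth keypoint (x, y)
    visibility : (N,) array; keypoints with value > 0 count toward the score
    scale      : object scale s (e.g. sqrt of the bounding-box area)
    k          : (N,) per-keypoint falloff constants
    """
    d2 = np.sum((pred - gt) ** 2, axis=1)          # squared distances
    denom = 2.0 * (scale ** 2) * (k ** 2) + np.finfo(float).eps
    sim = np.exp(-d2 / denom)                       # per-keypoint similarity
    vis = visibility > 0
    return float(np.sum(sim[vis]) / max(vis.sum(), 1))

# Illustrative torso keypoints (hypothetical coordinates):
gt = np.array([[10.0, 10.0], [20.0, 10.0], [10.0, 30.0], [20.0, 30.0]])
k = np.full(4, 0.1)        # assumed falloff constants
v = np.full(4, 2)          # all keypoints visible
perfect = oks(gt, gt, v, scale=15.0, k=k)   # exact prediction -> 1.0
```

A prediction that matches the ground truth exactly yields an OKS of 1.0, and the score decays toward 0 as keypoints drift; covered or side-lying scenarios in the study lower this score by 0.15 and 0.1 on average.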