Laparoscopic Suture Gestures Recognition via Machine Learning: A Method for Validation of Kinematic Features Selection

Bibliographic Details
Main Authors: Juan M. Herrera-Lopez, Alvaro Galan-Cuenca, Antonio J. Reina, Isabel Garcia-Morales, Victor F. Munoz
Format: Article
Language: English
Published: IEEE 2024-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/10799090/
Description
Summary: In minimally invasive surgery, robotics integration has been crucial, with a current focus on developing collaborative algorithms to reduce surgeons’ workload. Effective human-robot collaboration requires robots to perceive surgeons’ gestures during interventions in order to provide appropriate assistance. Research on this task has used both image data, mainly with Deep Learning and Convolutional Neural Networks, and kinematic data extracted from the surgeons’ instruments, processing kinematic sequences with Markov models, Recurrent Neural Networks, and even unsupervised learning techniques. However, most studies that build recognition models from kinematic data do not examine the significance of each kinematic variable for the recognition task, information that would support informed decisions when training simpler models and when choosing the sensor systems of deployment platforms. To that end, this work models the laparoscopic suturing manoeuvre as a set of simpler gestures to be recognized and, applying the ReliefF algorithm to the kinematic data of the JIGSAWS dataset, presents a significance study of the different kinematic variables. To validate this study, three classification models based on the multilayer perceptron and on Hidden Markov Models were trained using both the complete set of variables and a reduced selection containing only the most significant ones. The results show that the aperture angle and orientation of the surgical tools retain enough information about the chosen gestures that accuracy never differs between equivalent models by more than 5.84%.
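The feature-significance study described in the abstract relies on the ReliefF algorithm, which scores each feature by contrasting its values between an instance's nearest same-class neighbours (hits) and nearest other-class neighbours (misses): features that differ little among hits but strongly among misses receive high weights. The following is a minimal NumPy sketch of that idea, not the paper's implementation; the function name, parameters, and synthetic data are illustrative only.

```python
import numpy as np

def relieff(X, y, n_neighbors=5):
    """Minimal ReliefF sketch: rank features by how well they
    separate nearest same-class (hit) vs other-class (miss) neighbours.
    Assumes every class has at least n_neighbors + 1 members."""
    n, d = X.shape
    # Scale per-feature differences to [0, 1], as in the original algorithm.
    span = X.max(axis=0) - X.min(axis=0)
    span[span == 0] = 1.0
    classes, counts = np.unique(y, return_counts=True)
    priors = dict(zip(classes, counts / n))
    w = np.zeros(d)
    for i in range(n):
        # Manhattan distance on scaled features; exclude the instance itself.
        dist = np.abs((X - X[i]) / span).sum(axis=1)
        dist[i] = np.inf
        for c in classes:
            # Nearest n_neighbors members of class c.
            masked = np.where(y == c, dist, np.inf)
            order = np.argsort(masked)[:n_neighbors]
            diffs = np.abs((X[order] - X[i]) / span).mean(axis=0)
            if c == y[i]:
                w -= diffs / n                    # near hits: penalize variation
            else:                                  # near misses: reward variation,
                w += priors[c] / (1 - priors[y[i]]) * diffs / n  # weighted by prior
    return w

# Toy demo: feature 0 separates the two classes, feature 1 is pure noise,
# so ReliefF should assign feature 0 the larger weight.
rng = np.random.default_rng(0)
y = np.repeat([0, 1], 100)
x0 = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(4.0, 1.0, 100)])
x1 = rng.normal(0.0, 1.0, 200)
weights = relieff(np.column_stack([x0, x1]), y)
```

In the paper's setting, `X` would hold the JIGSAWS kinematic variables per time window and the resulting weights would guide the reduced feature selection; here the threshold for "significant" is left to the reader, as the abstract does not state one.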
ISSN: 2169-3536