Inverse Kinematics of a 7-Degree-of-Freedom Robotic Arm Based on Deep Reinforcement Learning and Damped Least Squares

As we advance towards the future of the smart manufacturing industry, our research focuses on enhancing manipulator technology. Inverse kinematics is a key component of robotic arm control, yet many existing methods struggle to achieve high performance when dealing with high-precision target points and highly redundant robotic arms. In this paper, we propose a novel solution to the inverse kinematics problem by combining Proximal Policy Optimization (PPO) with the Damped Least Squares (DLS) method, forming the Multistep PPO-DLS Inverse Kinematics (MPDIK) algorithm. The algorithm was trained and tested in the PyBullet virtual environment, using random seven-dimensional position and pose target points. The MPDIK algorithm demonstrated outstanding performance, with the end effector achieving a distance error of less than 0.1 mm and an orientation error of less than 0.001°. Additionally, it exhibited excellent stability and fast convergence, with a post-training task completion success rate of 98.37% and an average of 20.68 time steps per task. This represents a significant improvement over existing methods, such as PPO and DLS, and demonstrates universal applicability. Our experiments also revealed that this method holds great potential for improving both the accuracy and real-time application capabilities of robotic systems.
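
For context, the Damped Least Squares (DLS) step mentioned in the abstract is the standard singularity-robust inverse kinematics update Δθ = Jᵀ(JJᵀ + λ²I)⁻¹e. The sketch below is a minimal NumPy illustration of that generic update, not the authors' MPDIK implementation; the `jacobian` and `pose_error` helpers, the joint vector `q`, and the damping factor `lam` are assumed placeholders.

    import numpy as np

    def dls_step(q, jacobian, pose_error, target, lam=0.05):
        """One generic damped least squares IK update: dq = J^T (J J^T + lam^2 I)^-1 e.

        q          -- current joint angles, e.g. shape (7,) for a 7-DoF arm
        jacobian   -- callable returning the 6x7 geometric Jacobian at q (assumed helper)
        pose_error -- callable returning the 6-D position/orientation error to the target (assumed helper)
        target     -- desired end-effector pose, in whatever form pose_error expects
        lam        -- damping factor; larger values trade accuracy for stability near singularities
        """
        J = jacobian(q)                     # 6x7 Jacobian at the current configuration
        e = pose_error(q, target)           # 6-vector: position error stacked with orientation error
        JJt = J @ J.T + (lam ** 2) * np.eye(J.shape[0])
        dq = J.T @ np.linalg.solve(JJt, e)  # damped pseudo-inverse applied to the task-space error
        return q + dq

How the PPO policy and this kind of DLS refinement are interleaved in MPDIK is detailed in the article itself.
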
Bibliographic Details
Main Authors: Shusheng Yu (ORCID: https://orcid.org/0009-0009-1626-7942), Gongquan Tan
Affiliation: School of Automation and Information Engineering, Sichuan University of Science and Engineering, Yibin, Sichuan, China
Format: Article
Language: English
Published: IEEE, 2025-01-01
Series: IEEE Access, vol. 13, pp. 4857-4868
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2024.3521539
Subjects: Manipulators; robot kinematics; reinforcement learning; artificial intelligence
Online Access: https://ieeexplore.ieee.org/document/10812731/