Robot Task-Constrained Optimization and Adaptation with Probabilistic Movement Primitives

Bibliographic Details
Main Authors: Guanwen Ding, Xizhe Zang, Xuehe Zhang, Changle Li, Yanhe Zhu, Jie Zhao
Format: Article
Language: English
Published: MDPI AG, 2024-12-01
Series: Biomimetics
Online Access: https://www.mdpi.com/2313-7673/9/12/738
Summary: Enabling a robot to learn skills from a human and adapt them to different task scenarios would allow robots to be used in manufacturing to improve efficiency. Movement Primitives (MPs) are prominent tools for encoding skills. This paper investigates how to learn MPs from a small number of human demonstrations and adapt them to different task constraints, including waypoints, joint limits, virtual walls, and obstacles. Probabilistic Movement Primitives (ProMPs) model movements with distributions, thus giving the robot additional freedom during task execution. We provide the robot with three movement modes, each requiring only one human demonstration. We propose an improved via-point generalization method that generalizes smooth trajectories from the encoded ProMPs. In addition, we present an effective task-constrained optimization method that incorporates all task constraints analytically into a probabilistic framework. We decompose ProMPs into Gaussians at each timestep, minimize the Kullback–Leibler (KL) divergence, and perform a gradient ascent–descent algorithm to obtain the optimized ProMPs. Given the optimized ProMPs, we outline a unified robot movement adaptation method that extends from a single obstacle to multiple obstacles. We validated our approach with a 7-DOF Xarm robot in a series of movement adaptation experiments.
ISSN: 2313-7673
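
The summary compresses several steps; the following minimal Python sketch illustrates the standard ProMP machinery it builds on. It uses a one-dimensional toy trajectory with normalized RBF basis functions, represents a ProMP as a Gaussian over weights, recovers the per-timestep marginal Gaussians the summary refers to, and applies the standard via-point conditioning of Paraschos et al. together with a closed-form per-timestep KL divergence. This is not the authors' improved generalization or constrained-optimization method; all names and parameter values here are hypothetical placeholders.

import numpy as np

def rbf_basis(t, n_basis=15, width=0.02):
    # Normalized Gaussian RBF features for phase values t in [0, 1].
    centers = np.linspace(0.0, 1.0, n_basis)
    phi = np.exp(-0.5 * (t[:, None] - centers[None, :]) ** 2 / width)
    return phi / phi.sum(axis=1, keepdims=True)        # shape (T, n_basis)

def kl_gauss_1d(m0, v0, m1, v1):
    # Closed-form KL divergence N(m0, v0) || N(m1, v1) for 1-D Gaussians.
    return 0.5 * (v0 / v1 + (m1 - m0) ** 2 / v1 - 1.0 + np.log(v1 / v0))

# Toy "learned" ProMP: weight distribution w ~ N(mu_w, Sigma_w).
n_basis = 15
mu_w = np.sin(np.linspace(0.0, np.pi, n_basis))        # placeholder mean weights
Sigma_w = 0.05 * np.eye(n_basis)

t = np.linspace(0.0, 1.0, 100)
Phi = rbf_basis(t, n_basis)

# Per-timestep marginal Gaussians: y_t ~ N(Phi_t mu_w, Phi_t Sigma_w Phi_t^T).
mean_traj = Phi @ mu_w
var_traj = np.einsum('ti,ij,tj->t', Phi, Sigma_w, Phi)

# Standard via-point conditioning: observe y* at phase t* with noise sigma*.
t_star, y_star, sigma_star = 0.5, 1.2, 1e-4
phi_s = rbf_basis(np.array([t_star]), n_basis)[0]
gain = (Sigma_w @ phi_s) / (sigma_star + phi_s @ Sigma_w @ phi_s)
mu_w_new = mu_w + gain * (y_star - phi_s @ mu_w)
Sigma_w_new = Sigma_w - np.outer(gain, phi_s @ Sigma_w)

# Timestep-wise KL between the conditioned and the original distribution,
# the kind of per-timestep Gaussian comparison the summary mentions.
mean_new = Phi @ mu_w_new
var_new = np.einsum('ti,ij,tj->t', Phi, Sigma_w_new, Phi)
kl_per_step = kl_gauss_1d(mean_new, var_new, mean_traj, var_traj)
print("mean at t* before/after:", phi_s @ mu_w, phi_s @ mu_w_new)
print("max per-timestep KL:", kl_per_step.max())

In the paper the optimization runs in the opposite direction: task constraints are imposed on the per-timestep Gaussians and a gradient ascent-descent scheme minimizes the KL divergence so the optimized ProMP stays close to the demonstrated distribution.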