Global Multi-Phase Path Planning Through High-Level Reinforcement Learning

Bibliographic Details
Main Authors: Babak Salamat, Sebastian-Sven Olzem, Gerhard Elsbacher, Andrea M. Tonello
Format: Article
Language: English
Published: IEEE 2024-01-01
Series: IEEE Open Journal of Control Systems
Online Access: https://ieeexplore.ieee.org/document/10613437/
Description
Summary: In this paper, we introduce the <italic>Global Multi-Phase Path Planning</italic> (<monospace><inline-formula><tex-math notation="LaTeX">$GMP^{3}$</tex-math></inline-formula></monospace>) algorithm for path planning problems, which computes fast and feasible trajectories in environments with obstacles while respecting physical and kinematic constraints. Our approach uses a Markov Decision Process (MDP) framework and high-level reinforcement learning techniques to ensure trajectory smoothness, continuity, and compliance with constraints. Through extensive simulations, we demonstrate the algorithm's effectiveness and efficiency across various scenarios. We highlight existing path planning challenges, particularly in integrating dynamic adaptability with computational efficiency. The results validate our method's convergence guarantees using Lyapunov's stability theorem and underscore its computational advantages.
ISSN: 2694-085X