Stable Inverse Reinforcement Learning: Policies From Control Lyapunov Landscapes
Learning from expert demonstrations to flexibly program an autonomous system with complex behaviors or to predict an agent's behavior is a powerful tool, especially in collaborative control settings. A common method to solve this problem is inverse reinforcement learning (IRL), where the...
Main Authors: Samuel Tesfazgi, Leonhard Sprandl, Armin Lederer, Sandra Hirche
Format: Article
Language: English
Published: IEEE, 2024-01-01
Series: IEEE Open Journal of Control Systems
Online Access: https://ieeexplore.ieee.org/document/10643266/
Similar Items
- On direct and inverse problems related to some dilated sumsets
  by: Kaur, Ramandeep, et al.
  Published: (2024-02-01)
- Distributionally Robust Policy and Lyapunov-Certificate Learning
  by: Kehan Long, et al.
  Published: (2024-01-01)
- Inverse design of nanophotonic devices enabled by optimization algorithms and deep learning: recent achievements and future prospects
  by: Kim Junhyeong, et al.
  Published: (2025-01-01)
- On the cardinality of subsequence sums II
  by: Jiang, Xing-Wang, et al.
  Published: (2024-11-01)
- Event-Trigger Reinforcement Learning-Based Coordinate Control of Modular Unmanned System via Nonzero-Sum Game
  by: Yebao Liu, et al.
  Published: (2025-01-01)