Distributionally Robust Policy and Lyapunov-Certificate Learning

This article presents novel methods for synthesizing distributionally robust stabilizing neural controllers and certificates for control systems under model uncertainty. A key challenge in designing controllers with stability guarantees for uncertain systems is the accurate determination of and adaptation to shifts in model parametric uncertainty during online deployment. We tackle this with a novel distributionally robust formulation of the Lyapunov derivative chance constraint ensuring a monotonic decrease of the Lyapunov certificate. To avoid the computational complexity involved in dealing with the space of probability measures, we identify a sufficient condition in the form of deterministic convex constraints that ensures the Lyapunov derivative constraint is satisfied. We integrate this condition into a loss function for training a neural network-based controller and show that, for the resulting closed-loop system, the global asymptotic stability of its equilibrium can be certified with high confidence, even with Out-of-Distribution (OoD) model uncertainties. To demonstrate the efficacy and efficiency of the proposed methodology, we compare it with an uncertainty-agnostic baseline approach and several reinforcement learning approaches in two control problems in simulation. Open-source implementations of the examples are available at https://github.com/KehanLong/DR_Stabilizing_Policy.
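For readers who want a concrete picture of the kind of condition the abstract describes, the following is a minimal, self-contained sketch, not the authors' implementation (see the linked repository for that). It checks an empirical, CVaR-style surrogate of a Lyapunov-derivative chance constraint for a toy uncertain linear system, with a fixed linear policy standing in for the neural controller. The matrices A, dA, B, K, P and the parameters alpha and eps are illustrative assumptions, and the sample-based CVaR surrogate is a standard stand-in for the deterministic convex sufficient condition mentioned in the abstract, not the paper's exact formulation.

# Illustrative sketch only -- not the authors' implementation.
# Toy uncertain system: x_dot = (A + xi * dA) x + B u, policy u = K x,
# candidate Lyapunov function V(x) = x^T P x. All symbols below are
# assumptions made for this example.
import numpy as np

rng = np.random.default_rng(0)

A = np.array([[0.0, 1.0], [-1.0, -0.5]])   # nominal dynamics
dA = np.array([[0.0, 0.0], [0.3, 0.0]])    # direction of parametric uncertainty
B = np.array([[0.0], [1.0]])
K = np.array([[-1.0, -1.5]])               # stand-in for a trained policy
P = np.array([[2.0, 0.5], [0.5, 1.0]])     # candidate Lyapunov matrix
alpha, eps = 0.1, 0.05                     # decrease rate and risk level

def lyap_derivative(x, xi):
    """V_dot(x) along the closed loop for one uncertainty sample xi."""
    u = K @ x
    xdot = (A + xi * dA) @ x + B @ u
    return float(2.0 * x @ P @ xdot)       # V_dot = 2 x^T P x_dot (P symmetric)

def cvar(samples, eps):
    """Empirical CVaR_eps: mean of the worst eps-fraction of the samples."""
    k = max(1, int(np.ceil(eps * len(samples))))
    return float(np.mean(np.sort(samples)[-k:]))

# Empirical check of the decrease condition on sampled states and uncertainties.
violations = 0
for _ in range(200):
    x = rng.normal(size=2)
    xi = rng.normal(scale=1.0, size=500)   # sampled model uncertainty
    vdots = np.array([lyap_derivative(x, s) for s in xi])
    margin = alpha * float(x @ P @ x)
    # If the worst eps-tail of V_dot stays below -alpha*V, the sampled chance
    # constraint Pr[V_dot <= -alpha*V] >= 1 - eps holds at this state.
    if cvar(vdots, eps) > -margin:
        violations += 1
print(f"states violating the surrogate decrease condition: {violations}/200")

A training loss in this spirit could penalize max(0, CVaR + alpha*V) averaged over sampled states; the paper's specific convex sufficient condition and loss function differ from this simplified surrogate.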


Bibliographic Details
Main Authors: Kehan Long, Jorge Cortes, Nikolay Atanasov
Format: Article
Language: English
Published: IEEE, 2024-01-01
Series: IEEE Open Journal of Control Systems
Subjects: Learning for control; Lyapunov methods; optimization under uncertainty; stability of nonlinear systems
Online Access: https://ieeexplore.ieee.org/document/10629071/
Collection: DOAJ
Record ID: doaj-art-02e405ca43c3491c98921cc92745dedf
Institution: Kabale University
ISSN: 2694-085X
Citation: Kehan Long, Jorge Cortes, and Nikolay Atanasov, "Distributionally Robust Policy and Lyapunov-Certificate Learning," IEEE Open Journal of Control Systems, vol. 3, pp. 375-388, 2024, doi: 10.1109/OJCSYS.2024.3440051 (IEEE document 10629071).
Author affiliations: Contextual Robotics Institute, University of California San Diego, La Jolla, CA, USA (all three authors).
ORCID: Kehan Long 0000-0003-2839-7188; Jorge Cortes 0000-0001-9582-5184; Nikolay Atanasov 0000-0003-0272-7580.