Volume-weighted Bellman error method for adaptive meshing in approximate dynamic programming
Optimal control and reinforcement learning have an associated “value function” which must be suitably approximated. Value function approximation problems usually have different precision requirements in different regions of the state space. A uniform gridding wastes resources in regions in which the...
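The abstract describes weighting the local Bellman error by cell volume to decide where an adaptive mesh should be refined. A minimal Python sketch of that idea follows; the toy 1D dynamics (x' = 0.5x), the quadratic stage cost, the nearest-cell value lookup, and the function names are all illustrative assumptions, not the authors' actual formulation:

```python
# Hypothetical sketch: refine the cells whose volume-weighted Bellman
# error is largest. Dynamics, cost, and lookup rule are assumptions.
import numpy as np

GAMMA = 0.9  # discount factor (assumed)

def bellman_residual(cells, V):
    """Per-cell Bellman residual for a toy 1D system x' = 0.5*x with
    stage cost x^2, evaluated at each cell midpoint."""
    res = np.empty(len(cells))
    for i, (a, b) in enumerate(cells):
        x = 0.5 * (a + b)
        x_next = 0.5 * x
        # value of the successor state via nearest-cell-midpoint lookup
        j = np.argmin([abs(0.5 * (c + d) - x_next) for c, d in cells])
        res[i] = abs(x * x + GAMMA * V[j] - V[i])
    return res

def refine(cells, V, frac=0.25):
    """Split the fraction `frac` of cells whose score
    (cell volume) * (Bellman residual) is largest."""
    vols = np.array([b - a for a, b in cells])
    score = vols * bellman_residual(cells, V)
    k = max(1, int(frac * len(cells)))
    worst = set(np.argsort(score)[-k:])
    new_cells, new_V = [], []
    for i, (a, b) in enumerate(cells):
        if i in worst:                      # split the cell in half
            m = 0.5 * (a + b)
            new_cells += [(a, m), (m, b)]
            new_V += [V[i], V[i]]           # children inherit the value
        else:
            new_cells.append((a, b))
            new_V.append(V[i])
    return new_cells, np.array(new_V)
```

Repeatedly interleaving value-function updates with `refine` concentrates cells where the volume-weighted error is high, instead of gridding the whole state space uniformly.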
| Main Authors: | Leopoldo Armesto, Antonio Sala |
|---|---|
| Format: | Article |
| Language: | Spanish |
| Published: | Universitat Politècnica de València, 2021-12-01 |
| Series: | Revista Iberoamericana de Automática e Informática Industrial RIAI |
| Subjects: | |
| Online Access: | https://polipapers.upv.es/index.php/RIAI/article/view/15698 |
Similar Items
- Approximate Dynamic Programming Methodology for Data-based Optimal Controllers
  by: Henry Díaz, et al.
  Published: (2019-06-01)
- Optimización Bayesiana no miope POMDP para procesos con restricciones de operación y presupuesto finito
  by: José Luis Pitarch, et al.
  Published: (2024-07-01)
- Multimodal Control in Uncertain Environments using Reinforcement Learning and Gaussian Processes
  by: Mariano De Paula, et al.
  Published: (2015-10-01)
- Análisis de rendimiento del rechazo de perturbaciones en controladores cuadráticos lineales: un método práctico de sintonía adaptativo
  by: Igor M. L. Pataro, et al.
  Published: (2023-11-01)
- Análisis diferencial técnico-económico de los sistemas productivos de guajolotes en el Estado de México
  by: Gabriela Rodríguez-Licea, et al.
  Published: (2017-01-01)