A reinforcement learning strategy to automate and accelerate h/p-multigrid solvers

We explore a reinforcement learning strategy to automate and accelerate h/p-multigrid methods in high-order solvers. Multigrid methods are very efficient but require fine-tuning of numerical parameters, such as the number of smoothing sweeps per level and the correction fraction (i.e., the proportion of the corrected solution that is transferred from a coarser grid to a finer grid). The objective of this paper is to use a proximal policy optimization algorithm to automatically tune the multigrid parameters and, by doing so, improve the stability and efficiency of the h/p-multigrid strategy. Our findings reveal that the proposed reinforcement learning h/p-multigrid approach significantly accelerates and improves the robustness of steady-state simulations for the one-dimensional advection-diffusion and nonlinear Burgers' equations, when discretized using high-order h/p methods on uniform and nonuniform grids.
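The Python sketch below is an editorial illustration, not the authors' implementation: it wraps a two-level multigrid cycle for a steady one-dimensional advection-diffusion problem in a Gym-style environment whose action is the pair (number of smoothing sweeps, correction fraction) and whose reward is the residual reduction per unit of smoothing work. The second-order finite-difference discretization, the damped-Jacobi smoother, the reward definition, and all names are assumptions made for illustration; the paper itself uses a high-order flux-reconstruction discretization with h/p coarsening and a PPO agent.

# Minimal sketch (assumed names and discretization, not the paper's solver):
# a two-level multigrid cycle for -NU*u'' + ADV*u' = f on (0,1) with
# homogeneous Dirichlet boundary conditions, exposed as a Gym-style
# environment in which an agent picks (smoothing sweeps, correction fraction).
import numpy as np

NU, ADV = 1.0, 1.0          # diffusion and advection coefficients (assumed values)

def build_matrix(n):
    """Central-difference operator for -NU*u'' + ADV*u' on n interior points."""
    h = 1.0 / (n + 1)
    A = np.zeros((n, n))
    np.fill_diagonal(A, 2.0 * NU / h**2)
    i = np.arange(n - 1)
    A[i + 1, i] = -NU / h**2 - ADV / (2.0 * h)   # sub-diagonal
    A[i, i + 1] = -NU / h**2 + ADV / (2.0 * h)   # super-diagonal
    return A

def smooth(A, u, f, sweeps, omega=2.0 / 3.0):
    """Damped-Jacobi smoothing sweeps."""
    d = np.diag(A)
    for _ in range(sweeps):
        u = u + omega * (f - A @ u) / d
    return u

def restrict(r):            # full-weighting restriction (fine -> coarse)
    return 0.25 * r[0:-2:2] + 0.5 * r[1:-1:2] + 0.25 * r[2::2]

def prolong(e_c, n_f):      # linear-interpolation prolongation (coarse -> fine)
    e_f = np.zeros(n_f)
    e_f[1::2] = e_c                       # inject at coincident nodes
    e_f[0], e_f[-1] = 0.5 * e_c[0], 0.5 * e_c[-1]
    e_f[2:-1:2] = 0.5 * (e_c[:-1] + e_c[1:])
    return e_f

class MultigridEnv:
    """Gym-style environment: observation = log10 of the residual norm,
    action = (smoothing sweeps, correction fraction), reward = orders of
    magnitude of residual reduction per unit of smoothing work."""
    def __init__(self, n_fine=63, max_cycles=50, tol=1e-8):
        self.n_fine, self.max_cycles, self.tol = n_fine, max_cycles, tol
        self.A_f = build_matrix(n_fine)
        self.A_c = build_matrix((n_fine - 1) // 2)
        self.f = np.ones(n_fine)

    def reset(self):
        self.u, self.cycles = np.zeros(self.n_fine), 0
        return self._obs()

    def _obs(self):
        return np.array([np.log10(np.linalg.norm(self.f - self.A_f @ self.u) + 1e-30)])

    def step(self, action):
        sweeps = int(np.clip(round(action[0]), 1, 8))
        frac = float(np.clip(action[1], 0.1, 1.0))
        old = self._obs()[0]
        # one two-level cycle: pre-smooth, coarse-grid correction, post-smooth
        self.u = smooth(self.A_f, self.u, self.f, sweeps)
        r_c = restrict(self.f - self.A_f @ self.u)
        e_c = np.linalg.solve(self.A_c, r_c)              # exact coarse solve
        self.u += frac * prolong(e_c, self.n_fine)        # damped correction
        self.u = smooth(self.A_f, self.u, self.f, sweeps)
        new = self._obs()[0]
        self.cycles += 1
        reward = (old - new) / (2 * sweeps + 1)           # accuracy gained per work
        done = new < np.log10(self.tol) or self.cycles >= self.max_cycles
        return self._obs(), reward, done, {}

if __name__ == "__main__":
    env = MultigridEnv()
    obs, done = env.reset(), False
    while not done:        # fixed policy as a stand-in for a trained PPO agent
        obs, reward, done, _ = env.step([2, 0.9])
    print(f"converged to log10 residual {obs[0]:.2f} in {env.cycles} cycles")

In a full setup, a PPO policy trained on episodes of such an environment would replace the fixed action [2, 0.9], choosing the sweeps and correction fraction from the observed residual at every cycle; this is the automatic tuning the abstract describes.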


Bibliographic Details
Main Authors: David Huergo, Laura Alonso, Saumitra Joshi, Adrian Juanicotena, Gonzalo Rubio, Esteban Ferrer
Affiliations: All authors: ETSIAE-UPM-School of Aeronautics, Universidad Politécnica de Madrid, Plaza Cardenal Cisneros 3, E-28040 Madrid, Spain. Gonzalo Rubio and Esteban Ferrer also: Center for Computational Simulation, Universidad Politécnica de Madrid, Campus de Montegancedo, Boadilla del Monte, 28660 Madrid, Spain. Corresponding author: David Huergo.
Format: Article
Language: English
Published: Elsevier, 2024-12-01
Series: Results in Engineering, Volume 24, Article 102949
ISSN: 2590-1230
Subjects: Reinforcement learning; Proximal policy optimization; PPO; Advection-diffusion; Burgers' equation; High-order flux reconstruction
Online Access: http://www.sciencedirect.com/science/article/pii/S2590123024012040