Deep reinforcement learning and fuzzy logic controller codesign for energy management of hydrogen fuel cell powered electric vehicles

Bibliographic Details
Main Authors: Seyed Mehdi Rakhtala Rostami, Zeyad Al-Shibaany, Peter Kay, Hamid Reza Karimi
Format: Article
Language: English
Published: Nature Portfolio 2024-12-01
Series: Scientific Reports
Online Access: https://doi.org/10.1038/s41598-024-81769-1
Description
Summary: Hydrogen-based electric vehicles such as Fuel Cell Hybrid Electric Vehicles (FCHEVs) play an important role in achieving zero carbon emissions while simultaneously easing the pressure of the fuel economy crisis. This paper addresses energy management design for several performance metrics, including power-tracking accuracy, fuel cell lifetime, battery lifetime, and the reduction of transient and peak currents in the Polymer Electrolyte Membrane Fuel Cell (PEMFC) and the Li-ion battery. The proposed algorithm combines reinforcement learning in the low-level control loops with high-level supervisory control based on fuzzy logic load sharing, implemented in the system under consideration. More specifically, the paper establishes a power system model with three DC-DC converters within a hierarchical energy management framework employing a two-layer control strategy. In the low-level layer, three control loops for the hybrid electric vehicle are designed using reinforcement learning: three deep reinforcement learning (DRL) controllers based on the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm, an extension of DDPG, with neural-network function approximation, arranged in the hierarchical energy optimization control architecture. Comparative results between the two strategies, Deep Reinforcement Learning with fuzzy logic supervisory control (DRL-F) and the Super-Twisting algorithm with fuzzy logic supervisory control (STW-F), under the EUDC driving cycle indicate that the proposed DRL-F model reduces the Root Mean Square Error (RMSE) by 21.05% and the mean error by 8.31% compared with STW-F. The results demonstrate a more robust and accurate system in the presence of uncertainties and disturbances in the Energy Management System (EMS) of the FCHEV, based on an advanced learning method.
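The reported improvements are relative reductions in RMSE and mean error between the two strategies. As a minimal sketch of how such figures are computed (the error arrays below are hypothetical placeholders, not the paper's EUDC data):

```python
import numpy as np

def rmse(err):
    """Root Mean Square Error of a power-tracking-error array."""
    return float(np.sqrt(np.mean(np.square(err))))

def relative_reduction(baseline, proposed):
    """Percentage reduction of `proposed` relative to `baseline`."""
    return 100.0 * (baseline - proposed) / baseline

# Hypothetical tracking errors (kW) for the two strategies; the paper's
# 21.05% / 8.31% figures come from the EUDC driving cycle, not these arrays.
stw_f_error = np.array([0.8, -1.2, 0.5, -0.9, 1.1])
drl_f_error = np.array([0.6, -0.9, 0.4, -0.7, 0.8])

reduction = relative_reduction(rmse(stw_f_error), rmse(drl_f_error))
print(f"RMSE reduction: {reduction:.2f}%")
```

The same pattern applies to the mean-error comparison, substituting `np.mean(np.abs(err))` for the RMSE.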
ISSN: 2045-2322