Optimizing Joint Bidding and Incentivizing Strategy for Price-Maker Load Aggregators Based on Multi-Task Multi-Agent Deep Reinforcement Learning
| Main Authors: | , , , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2024-01-01 |
| Series: | IEEE Access |
| Subjects: | |
| Online Access: | https://ieeexplore.ieee.org/document/10742324/ |
| Summary: | The increasing penetration of renewable energy sources poses significant challenges for modern power systems, particularly in supply-demand balance and peak regulation. Load aggregators (LAs) play a crucial role by integrating small and medium-sized loads and coordinating demand response (DR). However, previous studies have ignored the inherent coupling between a price-maker LA’s bidding-price and bidding-quantity decisions in the ancillary service market and its incentive-price decisions in DR. This study introduces a joint bidding and incentivizing model for a price-maker LA that participates in a peak-regulation ancillary service market (PRM) and operates an incentive-based demand response (IBDR) program, with the objective of maximizing the LA’s long-term cumulative payoff. To solve this complex joint decision-making optimization problem effectively and efficiently, a model-free multi-task multi-agent deep reinforcement learning (MTMA-DRL) method incorporating a shared, centralized prioritized experience replay buffer (PERB) is proposed. Case studies in real-world settings confirm that the proposed model effectively captures the interdependence among bidding-price, bidding-quantity, and incentive-price decisions, and the MTMA-DRL method is shown to outperform existing methods (see the replay-buffer sketch after this record). |
| ISSN: | 2169-3536 |
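The summary mentions that the MTMA-DRL method relies on a shared, centralized prioritized experience replay buffer (PERB). Below is a minimal Python sketch of such a buffer with proportional prioritization, intended only to illustrate the concept; the class name `SharedPERBuffer`, the `Transition` fields, and all default parameters are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch only: a centralized, proportional-prioritization replay
# buffer that several agents could share. Names and defaults are assumptions,
# not the paper's implementation; importance-sampling corrections are omitted
# for brevity.
import random
from collections import namedtuple

Transition = namedtuple("Transition", "agent_id state action reward next_state done")

class SharedPERBuffer:
    """Centralized prioritized experience replay buffer shared by all agents."""

    def __init__(self, capacity=100_000, alpha=0.6, eps=1e-6):
        self.capacity = capacity   # maximum number of stored transitions
        self.alpha = alpha         # how strongly TD errors skew sampling
        self.eps = eps             # keeps every priority strictly positive
        self.buffer = []           # one shared store for all agents' transitions
        self.priorities = []       # one priority per stored transition
        self.pos = 0               # next overwrite position once the buffer is full

    def push(self, transition, td_error=1.0):
        # Priority grows with the magnitude of the TD error.
        priority = (abs(td_error) + self.eps) ** self.alpha
        if len(self.buffer) < self.capacity:
            self.buffer.append(transition)
            self.priorities.append(priority)
        else:
            # Overwrite the oldest transition (ring-buffer behavior).
            self.buffer[self.pos] = transition
            self.priorities[self.pos] = priority
            self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size):
        # Draw indices with probability proportional to priority;
        # any agent can request a mixed-agent batch from the shared store.
        idxs = random.choices(range(len(self.buffer)),
                              weights=self.priorities, k=batch_size)
        return idxs, [self.buffer[i] for i in idxs]

    def update_priorities(self, idxs, td_errors):
        # After a learning step, refresh priorities with the new TD errors.
        for i, err in zip(idxs, td_errors):
            self.priorities[i] = (abs(err) + self.eps) ** self.alpha
```

In a multi-agent setting, each agent would push its own transitions into this single buffer and sample batches that mix experience from all agents, which is one plausible way a centralized PERB could share experience across the bidding and incentivizing tasks.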