
July 9, 2025

Summer 2025 Seminar Series:
Decentralized DER Integration into Wholesale Energy Markets via Reinforcement Learning of Mean-Field Games

Event Date: July 9, 2025
Speaker: Jun He
Sponsor: Professor Andrew Liu
Time: 12:00 pm - 1:00 pm
Location: GRIS 134
School or Program: Industrial Engineering
Jun He, Ph.D. Student

ABSTRACT

FERC Order 2222 facilitates the integration of distributed energy resources (DERs) into wholesale energy markets, but specific mechanisms are still needed for prosumers to participate effectively through aggregators. Current research focuses primarily on how a single aggregator manages its DER portfolio, often under the simplifying assumption that wholesale market prices, such as locational marginal prices (LMPs), are exogenously fixed. While this perspective yields valuable insights into aggregator operations, it overlooks the feedback loop through which aggregator actions influence market outcomes. In response, we propose a hybrid Mean-Field Control (MFC) and Mean-Field Game (MFG) framework for integrating a large number of DER aggregators into these markets. Unlike traditional approaches, our model captures the feedback between aggregators’ strategies and LMPs while enabling decentralized decision-making. In this framework, each aggregator maintains a belief about the long-run equilibrium LMPs, solves its own Markov decision problem, and determines its prosumers’ supply or demand bids, which are then aggregated and submitted to the wholesale market. System operators continue to run multi-settlement markets with economic dispatch. We prove the existence of a mean-field equilibrium (MFE) and provide an entropy-regularized, reinforcement-learning-based algorithm under which prosumers learn to converge to an MFE. Numerical results show that LMPs quickly reach a steady state. Furthermore, comparisons with and without energy storage illustrate that our model can prevent extreme LMP values, fostering a more stable market even under fully decentralized decision-making.
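The learning loop described above can be illustrated with a minimal, self-contained sketch. The Python code below is an illustrative toy, not the speaker's implementation: it assumes a single-node market so that economic dispatch reduces to a simple price-formation rule, replaces each aggregator's Markov decision problem with a one-shot softmax (entropy-regularized) bid policy over an assumed quadratic cost, and uses made-up parameter values throughout. Its only purpose is to show the bid -> dispatch -> belief-update cycle through which decentralized LMP beliefs can settle to a fixed point.

# Toy sketch of the decentralized bid / dispatch / belief-update loop.
# All names, costs, and parameters below are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

N_AGGREGATORS = 50                            # assumed population size
N_ROUNDS = 200                                # assumed number of learning rounds
PRICE_MIN, PRICE_MAX = 10.0, 100.0            # assumed LMP range ($/MWh)
TAU = 1.0                                     # entropy-regularization temperature (assumed)
LR = 0.1                                      # belief-smoothing rate (assumed)

def best_response_bid(lmp_belief, tau=TAU):
    """Entropy-regularized (softmax) stand-in for an aggregator's policy.

    Given the aggregator's current LMP belief, choose a net quantity to bid
    (negative = demand, positive = supply). The full model solves a Markov
    decision problem; here a toy quadratic cost makes higher believed prices
    favor selling.
    """
    actions = np.linspace(-1.0, 1.0, 21)                      # toy bid quantities (MW)
    payoff = actions * lmp_belief - 0.5 * 40.0 * actions**2   # toy revenue minus cost
    probs = np.exp((payoff - payoff.max()) / tau)             # softmax policy
    probs /= probs.sum()
    return float(actions @ probs)                             # expected bid under the policy

def economic_dispatch(total_net_supply):
    """Toy single-node price formation: the LMP falls as net supply rises."""
    lmp = 60.0 - 30.0 * total_net_supply / N_AGGREGATORS
    return float(np.clip(lmp, PRICE_MIN, PRICE_MAX))

# Each aggregator starts with its own LMP belief (decentralized: no shared state).
beliefs = rng.uniform(PRICE_MIN, PRICE_MAX, size=N_AGGREGATORS)

for t in range(N_ROUNDS):
    # 1. Each aggregator solves its own decision problem against its belief
    #    and submits a bid to the wholesale market.
    bids = np.array([best_response_bid(b) for b in beliefs])

    # 2. The system operator clears the market (economic dispatch) and
    #    publishes the resulting LMP.
    lmp = economic_dispatch(bids.sum())

    # 3. Aggregators nudge their beliefs toward the observed LMP; at a
    #    mean-field equilibrium, beliefs and realized prices coincide.
    beliefs += LR * (lmp - beliefs)

print(f"LMP after {N_ROUNDS} rounds: {lmp:.2f}")
print(f"Max belief error: {np.max(np.abs(beliefs - lmp)):.4f}")

In the full framework, step 1 would be an entropy-regularized reinforcement-learning update of each aggregator's policy for a dynamic decision problem (e.g., with storage state), and step 2 a multi-settlement economic dispatch over a network; the fixed point reached by this toy loop plays the role of the mean-field equilibrium.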

BIOGRAPHY

Jun He is currently working under the mentorship of Professor Andrew Liu. He holds a Bachelor's degree in Computer Engineering and a Master's degree in Economics. Jun has contributed to the development of the open-source Julia package UnitCommitment.jl, a powerful tool designed to address optimization challenges in power grid operations. His research focuses on the intersection of artificial intelligence and energy markets, with particular emphasis on multi-agent reinforcement learning, mean-field theory, game theory, and market equilibrium.