Optimization of a Cluster-Based Energy Management System Using Deep Reinforcement Learning Without Affecting Prosumer Comfort: V2X Technologies and Peer-to-Peer Energy Trading

Date

2024

Publisher

IEEE-Inst Electrical Electronics Engineers Inc

Abstract

The concept of the prosumer has enabled consumers to participate actively in Peer-to-Peer (P2P) energy trading, particularly as Renewable Energy Sources (RESs) and Electric Vehicles (EVs) have become more accessible and cost-effective. Beyond P2P energy trading, prosumers benefit from the relatively high energy capacity of EVs through Vehicle-to-X (V2X) technologies such as Vehicle-to-Home (V2H), Vehicle-to-Load (V2L), and Vehicle-to-Grid (V2G). Because of the complex pricing and energy exchange mechanisms of P2P energy trading, and the presence of multiple EVs with V2X technologies, an optimized Energy Management System (EMS) is required to allocate the required energy efficiently within the cluster. In this paper, a Deep Reinforcement Learning (DRL) based EMS optimization method is proposed to optimize the pricing and energy exchange mechanisms of P2P energy trading without affecting prosumer comfort. The proposed EMS is applied to a small-scale cluster-based environment comprising six prosumers, P2P energy trading with novel hybrid pricing and energy exchange mechanisms, and V2X technologies (V2H, V2L, and V2G), with the aim of reducing overall energy costs and increasing Self-Sufficiency Ratios (SSRs). A DRL algorithm based on multiple Double Deep Q-Network (DDQN) agents is implemented, and the environment is formulated as a Markov Decision Process (MDP) to optimize the decision-making process. Numerical results show that the proposed EMS reduces overall energy costs by 19.18%, increases SSRs by 9.39%, and achieves an overall SSR of 65.87%. The results also indicate that model-free DRL methods, such as the DDQN variant of the Deep Q-Network (DQN) Reinforcement Learning (RL) algorithm, promise to eliminate energy management complexities under multiple uncertainties.
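The paper's multi-agent formulation is not reproduced on this page, but the core of the DDQN agents it describes is the Double DQN target rule: the online network selects the greedy next action and the target network evaluates it, which reduces the overestimation bias of plain DQN. A minimal illustrative sketch of that rule (all function names, shapes, and parameter values here are assumptions, not the authors' implementation):

```python
import numpy as np

def ddqn_targets(rewards, next_q_online, next_q_target, dones, gamma=0.99):
    """Compute Double DQN regression targets for a batch of transitions.

    rewards       -- shape (B,), immediate rewards
    next_q_online -- shape (B, A), online-network Q-values at next states
    next_q_target -- shape (B, A), target-network Q-values at next states
    dones         -- shape (B,), 1.0 where the episode terminated
    """
    # Selection: the online network picks the greedy next action.
    best_actions = np.argmax(next_q_online, axis=1)
    # Evaluation: the target network scores that chosen action.
    evaluated = next_q_target[np.arange(len(rewards)), best_actions]
    # Bootstrapped target, cut off at terminal states.
    return rewards + gamma * (1.0 - dones) * evaluated

# Toy batch of two transitions, the second terminal.
targets = ddqn_targets(
    rewards=np.array([1.0, 0.0]),
    next_q_online=np.array([[0.2, 0.8], [0.5, 0.1]]),
    next_q_target=np.array([[0.3, 0.6], [0.4, 0.2]]),
    dones=np.array([0.0, 1.0]),
)
# targets -> [1.0 + 0.99 * 0.6, 0.0] = [1.594, 0.0]
```

Each agent would regress its online Q-network toward these targets and periodically copy the online weights into the target network; the decoupling of action selection from action evaluation is what distinguishes DDQN from DQN.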

Keywords

Costs, Optimization, Energy management, Vehicle-to-grid, Clustering algorithms, Heuristic algorithms, Vehicle-to-everything, Peer-to-peer computing, Energy exchange, Reinforcement learning, Deep reinforcement learning, Smart grids, Energy management system, Peer-to-peer energy trading, Vehicle-to-home, Multi-agent reinforcement learning

WoS Q

Q2

Scopus Q

Q1

Volume

12

Start Page

31551

End Page

31575