Optimization of a Cluster-Based Energy Management System Using Deep Reinforcement Learning Without Affecting Prosumer Comfort: V2X Technologies and Peer-to-Peer Energy Trading

dc.authorscopusid 58924652000
dc.authorscopusid 55780618800
dc.contributor.author Yavuz, Mete
dc.contributor.author Kivanc, Omer Cihan
dc.date.accessioned 2024-05-25T11:28:11Z
dc.date.available 2024-05-25T11:28:11Z
dc.date.issued 2024
dc.department Okan University en_US
dc.department-temp [Yavuz, Mete; Kivanc, Omer Cihan] Istanbul Okan Univ, Dept Elect & Elect Engn, TR-34959 Istanbul, Turkiye en_US
dc.description.abstract The concept of the Prosumer has enabled consumers to participate actively in Peer-to-Peer (P2P) energy trading, particularly as Renewable Energy Sources (RESs) and Electric Vehicles (EVs) have become more accessible and cost-effective. In addition to P2P energy trading, prosumers benefit from the relatively high energy capacity of EVs through the integration of Vehicle-to-X (V2X) technologies, such as Vehicle-to-Home (V2H), Vehicle-to-Load (V2L), and Vehicle-to-Grid (V2G). Because of the complex pricing and energy exchange mechanisms of P2P energy trading and the presence of multiple EVs with V2X technologies, optimization of an Energy Management System (EMS) is required to allocate energy efficiently within the cluster. In this paper, a Deep Reinforcement Learning (DRL)-based EMS optimization method is proposed to optimize the pricing and energy exchange mechanisms of P2P energy trading without affecting the comfort of prosumers. The proposed EMS is applied to a small-scale cluster-based environment comprising six prosumers, P2P energy trading with novel hybrid pricing and energy exchange mechanisms, and V2X technologies (V2H, V2L, and V2G) to reduce overall energy costs and increase Self-Sufficiency Ratios (SSRs). A DRL algorithm based on multiple Double Deep Q-Network (DDQN) agents is implemented, and the environment is formulated as a Markov Decision Process (MDP) to optimize the decision-making process. Numerical results show that the proposed EMS reduces overall energy costs by 19.18%, increases SSRs by 9.39%, and achieves an overall SSR of 65.87%. The results also indicate that model-free DRL methods, such as a DDQN-agent-based variant of the Deep Q-Network (DQN) Reinforcement Learning (RL) algorithm, promise to eliminate energy management complexities under multiple uncertainties. en_US
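For readers unfamiliar with the DDQN mechanism named in the abstract, the following minimal Python sketch illustrates the Double DQN target computation that distinguishes DDQN from vanilla DQN: the online network selects the next action and the target network evaluates it, which reduces Q-value overestimation. The state/action dimensions, network sizes, and hyperparameters are illustrative assumptions and are not taken from the paper.

# Minimal DDQN target-computation sketch (PyTorch); all sizes are assumed.
import torch
import torch.nn as nn

STATE_DIM = 8   # assumed EMS state, e.g. PV output, load, EV SoC, P2P price, time slot
N_ACTIONS = 5   # assumed discretized charge/discharge/trade decisions
GAMMA = 0.99    # assumed discount factor

def make_q_net() -> nn.Module:
    return nn.Sequential(
        nn.Linear(STATE_DIM, 64), nn.ReLU(),
        nn.Linear(64, 64), nn.ReLU(),
        nn.Linear(64, N_ACTIONS),
    )

online_net = make_q_net()
target_net = make_q_net()
target_net.load_state_dict(online_net.state_dict())

def ddqn_targets(rewards, next_states, dones):
    # Double DQN: online net picks the greedy next action,
    # target net scores that action (selection/evaluation decoupled).
    with torch.no_grad():
        next_actions = online_net(next_states).argmax(dim=1, keepdim=True)
        next_q = target_net(next_states).gather(1, next_actions).squeeze(1)
        return rewards + GAMMA * (1.0 - dones) * next_q

# Illustrative replay-buffer batch (random placeholders, not real EMS data).
batch = 32
states = torch.randn(batch, STATE_DIM)
actions = torch.randint(0, N_ACTIONS, (batch, 1))
rewards = torch.randn(batch)
next_states = torch.randn(batch, STATE_DIM)
dones = torch.zeros(batch)

q_pred = online_net(states).gather(1, actions).squeeze(1)
loss = nn.functional.smooth_l1_loss(q_pred, ddqn_targets(rewards, next_states, dones))
loss.backward()  # a real training loop would then step an optimizer and periodically sync target_net

In a multi-agent setup such as the cluster described in the abstract, each prosumer would hold its own pair of networks like these and observe its local state, with the P2P market coupling the agents' rewards.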
dc.identifier.citationcount 0
dc.identifier.doi 10.1109/ACCESS.2024.3370922
dc.identifier.endpage 31575 en_US
dc.identifier.issn 2169-3536
dc.identifier.scopus 2-s2.0-85186983534
dc.identifier.scopusquality Q1
dc.identifier.startpage 31551 en_US
dc.identifier.uri https://doi.org/10.1109/ACCESS.2024.3370922
dc.identifier.uri https://hdl.handle.net/20.500.14517/1137
dc.identifier.volume 12 en_US
dc.identifier.wos WOS:001177024500001
dc.identifier.wosquality Q2
dc.language.iso en
dc.publisher IEEE-Inst Electrical Electronics Engineers Inc en_US
dc.relation.publicationcategory Article - International Peer-Reviewed Journal - Institutional Faculty Member en_US
dc.rights info:eu-repo/semantics/openAccess en_US
dc.scopus.citedbyCount 6
dc.subject Costs en_US
dc.subject Optimization en_US
dc.subject Energy management en_US
dc.subject Vehicle-to-grid en_US
dc.subject Clustering algorithms en_US
dc.subject Heuristic algorithms en_US
dc.subject Vehicle-to-everything en_US
dc.subject Peer-to-peer computing en_US
dc.subject Energy exchange en_US
dc.subject Reinforcement learning en_US
dc.subject Deep reinforcement learning en_US
dc.subject Smart grids en_US
dc.subject Energy management system en_US
dc.subject peer-to-peer energy trading en_US
dc.subject vehicle-to-home en_US
dc.subject multi-agent reinforcement learning en_US
dc.subject deep reinforcement learning en_US
dc.subject smart grids en_US
dc.title Optimization of a Cluster-Based Energy Management System Using Deep Reinforcement Learning Without Affecting Prosumer Comfort: V2X Technologies and Peer-to-Peer Energy Trading en_US
dc.type Article en_US
dc.wos.citedbyCount 4
