A Dynamic pricing demand response algorithm for smart grid: Reinforcement learning approach
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Lu, Renzhi | - |
dc.contributor.author | Hong, Seung Ho | - |
dc.contributor.author | Zhang, Xiongfeng | - |
dc.date.accessioned | 2021-06-22T11:43:19Z | - |
dc.date.available | 2021-06-22T11:43:19Z | - |
dc.date.created | 2021-01-21 | - |
dc.date.issued | 2018-06 | - |
dc.identifier.issn | 0306-2619 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/erica/handle/2021.sw.erica/5842 | - |
dc.description.abstract | With the modern advanced information and communication technologies in smart grid systems, demand response (DR) has become an effective method for improving grid reliability and reducing energy costs due to the ability to react quickly to supply-demand mismatches by adjusting flexible loads on the demand side. This paper proposes a dynamic pricing DR algorithm for energy management in a hierarchical electricity market that considers both the service provider's (SP) profit and the customers' (CUs) costs. Reinforcement learning (RL) is used to illustrate the hierarchical decision-making framework, in which the dynamic pricing problem is formulated as a discrete finite Markov decision process (MDP) and Q-learning is adopted to solve this decision-making problem. Using RL, the SP can adaptively decide the retail electricity price during the on-line learning process, where the uncertainty of CUs' load demand profiles and the flexibility of wholesale electricity prices are addressed. Simulation results show that the proposed DR algorithm can promote SP profitability, reduce energy costs for CUs, balance energy supply and demand in the electricity market, and improve the reliability of electric power systems, which can be regarded as a win-win strategy for both SP and CUs. | - |
dc.language | English | - |
dc.language.iso | en | - |
dc.publisher | ELSEVIER SCI LTD | - |
dc.title | A Dynamic pricing demand response algorithm for smart grid: Reinforcement learning approach | - |
dc.type | Article | - |
dc.contributor.affiliatedAuthor | Hong, Seung Ho | - |
dc.identifier.doi | 10.1016/j.apenergy.2018.03.072 | - |
dc.identifier.scopusid | 2-s2.0-85044607178 | - |
dc.identifier.wosid | 000432884500019 | - |
dc.identifier.bibliographicCitation | APPLIED ENERGY, v.220, pp.220-230 | - |
dc.relation.isPartOf | APPLIED ENERGY | - |
dc.citation.title | APPLIED ENERGY | - |
dc.citation.volume | 220 | - |
dc.citation.startPage | 220 | - |
dc.citation.endPage | 230 | - |
dc.type.rims | ART | - |
dc.type.docType | Article | - |
dc.description.journalClass | 1 | - |
dc.description.isOpenAccess | N | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Energy & Fuels | - |
dc.relation.journalResearchArea | Engineering | - |
dc.relation.journalWebOfScienceCategory | Energy & Fuels | - |
dc.relation.journalWebOfScienceCategory | Engineering, Chemical | - |
dc.subject.keywordPlus | ENERGY MANAGEMENT SCHEME | - |
dc.subject.keywordPlus | ELECTRICITY DEMAND | - |
dc.subject.keywordPlus | ENVIRONMENT | - |
dc.subject.keywordPlus | MARKET | - |
dc.subject.keywordPlus | LOADS | - |
dc.subject.keywordPlus | MODEL | - |
dc.subject.keywordPlus | HOME | - |
dc.subject.keywordAuthor | Demand response | - |
dc.subject.keywordAuthor | Dynamic pricing | - |
dc.subject.keywordAuthor | Artificial intelligence | - |
dc.subject.keywordAuthor | Reinforcement learning | - |
dc.subject.keywordAuthor | Markov decision process | - |
dc.subject.keywordAuthor | Q-learning | - |
dc.identifier.url | https://www.sciencedirect.com/science/article/pii/S0306261918304112?via%3Dihub | - |
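
The abstract outlines the method at a high level: the SP's dynamic-pricing problem is cast as a discrete finite MDP and solved with tabular Q-learning, so retail prices are learned on-line as wholesale prices and CU demand fluctuate. The sketch below illustrates that general pattern only; the hourly state space, the four-level price set, the toy demand and profit models, and all hyperparameters are hypothetical stand-ins, not the paper's actual formulation.

```python
# Minimal tabular Q-learning sketch for a dynamic-pricing MDP.
# Everything here (state/action grids, demand and profit models,
# hyperparameters) is a hypothetical placeholder, not the paper's model.
import random

HOURS = 24                          # one episode = one day
PRICES = [0.05, 0.10, 0.15, 0.20]   # hypothetical retail price levels ($/kWh)
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

# Q-table over (hour, price-index); the state here is simply the hour of day.
Q = {(h, a): 0.0 for h in range(HOURS) for a in range(len(PRICES))}

def demand(hour, price):
    """Toy price-responsive load: base profile minus a linear elasticity term."""
    base = 10.0 + 5.0 * (8 <= hour <= 20)   # higher base load in daytime
    return max(0.0, base - 30.0 * price)

def profit(hour, price):
    """Toy SP reward: retail revenue minus wholesale cost at a flat rate."""
    wholesale = 0.06                         # hypothetical wholesale price
    return (price - wholesale) * demand(hour, price)

def greedy(h):
    return max(range(len(PRICES)), key=lambda a: Q[(h, a)])

for episode in range(5000):
    for h in range(HOURS):
        # epsilon-greedy exploration over the discrete retail price set
        a = random.randrange(len(PRICES)) if random.random() < EPSILON else greedy(h)
        r = profit(h, PRICES[a])
        # hour 23 ends the daily episode, so its target has no bootstrap term
        best_next = 0.0 if h == HOURS - 1 else max(Q[(h + 1, i)] for i in range(len(PRICES)))
        # standard Q-learning update toward the bootstrapped target
        Q[(h, a)] += ALPHA * (r + GAMMA * best_next - Q[(h, a)])

# Learned greedy policy: one retail price per hour of the day
print([PRICES[greedy(h)] for h in range(HOURS)])
```

In the paper's setting the state would also carry CU demand information and the reward balances SP profit against CU costs; the sketch collapses those details so the Q-learning update itself stays visible.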