Demand Response Management for Industrial Facilities: A Deep Reinforcement Learning Approach
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Huang, Xuefei | - |
dc.contributor.author | Hong, Seung Ho | - |
dc.contributor.author | Yu, Mengmeng | - |
dc.contributor.author | Ding, Yuemin | - |
dc.contributor.author | Jiang, Junhui | - |
dc.date.accessioned | 2021-06-22T10:01:03Z | - |
dc.date.available | 2021-06-22T10:01:03Z | - |
dc.date.issued | 2019-06 | - |
dc.identifier.issn | 2169-3536 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/erica/handle/2021.sw.erica/2771 | - |
dc.description.abstract | As a major consumer of energy, the industrial sector must assume responsibility for improving energy efficiency and reducing carbon emissions. However, most existing studies on industrial energy management struggle to model complex industrial processes. To address this issue, a model-free demand response (DR) scheme for industrial facilities was developed. Specifically, we first formulated the Markov decision process (MDP) for industrial DR, specifying the composition of the state, action, and reward function in detail. We then designed an actor-critic-based deep reinforcement learning algorithm to determine the optimal energy management policy, where both the actor (policy) and the critic (value function) are implemented by deep neural networks. Finally, we confirmed the validity of our scheme by applying it to a real-world industrial facility. Our algorithm identified an optimal energy consumption schedule, reducing energy costs without compromising production. (An illustrative actor-critic sketch follows this record.) | - |
dc.format.extent | 12 | - |
dc.language | English | - |
dc.language.iso | ENG | - |
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | - |
dc.title | Demand Response Management for Industrial Facilities: A Deep Reinforcement Learning Approach | - |
dc.type | Article | - |
dc.publisher.location | United States | - |
dc.identifier.doi | 10.1109/ACCESS.2019.2924030 | - |
dc.identifier.scopusid | 2-s2.0-85068704257 | - |
dc.identifier.wosid | 000475322600001 | - |
dc.identifier.bibliographicCitation | IEEE ACCESS, v.7, pp 82194 - 82205 | - |
dc.citation.title | IEEE ACCESS | - |
dc.citation.volume | 7 | - |
dc.citation.startPage | 82194 | - |
dc.citation.endPage | 82205 | - |
dc.type.docType | Article | - |
dc.description.isOpenAccess | N | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalResearchArea | Engineering | - |
dc.relation.journalResearchArea | Telecommunications | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Information Systems | - |
dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
dc.relation.journalWebOfScienceCategory | Telecommunications | - |
dc.subject.keywordPlus | ENERGY MANAGEMENT | - |
dc.subject.keywordPlus | RESOURCE | - |
dc.subject.keywordAuthor | Artificial intelligence | - |
dc.subject.keywordAuthor | deep reinforcement learning | - |
dc.subject.keywordAuthor | demand response (DR) | - |
dc.subject.keywordAuthor | industrial facilities | - |
dc.subject.keywordAuthor | actor-critic | - |
dc.identifier.url | https://ieeexplore.ieee.org/document/8742652 | - |
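
The abstract above describes an actor-critic scheme in which both the policy (actor) and the value function (critic) are deep neural networks trained to optimize an industrial energy-consumption schedule. The paper's actual state, action, and reward definitions are not reproduced in this record, so the sketch below is only a generic one-step advantage actor-critic update in PyTorch; `STATE_DIM`, `N_ACTIONS`, the layer sizes, and the `update` helper are illustrative assumptions, not the authors' implementation.

```python
# Minimal advantage actor-critic sketch (assumed setup, not the paper's code).
import torch
import torch.nn as nn

STATE_DIM = 8   # assumed: e.g. electricity price, production status, time slot
N_ACTIONS = 4   # assumed: discrete power-consumption levels

class Actor(nn.Module):
    """Policy network: maps a state to a distribution over actions."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )
    def forward(self, s):
        return torch.distributions.Categorical(logits=self.net(s))

class Critic(nn.Module):
    """Value network: estimates the expected return V(s)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )
    def forward(self, s):
        return self.net(s).squeeze(-1)

actor, critic = Actor(), Critic()
optimizer = torch.optim.Adam(
    list(actor.parameters()) + list(critic.parameters()), lr=1e-3
)
GAMMA = 0.99  # discount factor (assumed value)

def update(state, action, reward, next_state, done):
    """One-step actor-critic update from a single transition."""
    value = critic(state)
    with torch.no_grad():
        # Bootstrapped TD target; terminal states contribute no future value.
        target = reward + GAMMA * critic(next_state) * (1.0 - done)
    advantage = target - value.detach()
    # Policy gradient weighted by the advantage, plus a TD-error critic loss.
    dist = actor(state)
    actor_loss = -(dist.log_prob(action) * advantage).mean()
    critic_loss = (target - value).pow(2).mean()
    optimizer.zero_grad()
    (actor_loss + critic_loss).backward()
    optimizer.step()

# Usage with dummy data; in the paper's setting, the transition would come
# from the industrial facility or a simulator of it.
s = torch.randn(STATE_DIM)
a = actor(s).sample()
update(s, a, torch.tensor(-3.2), torch.randn(STATE_DIM), torch.tensor(0.0))
```

In the paper's setting, the state would encode information such as the electricity price and production status, the action a power-consumption decision, and the reward a signal that penalizes energy cost while respecting production requirements; those specifics are left abstract here.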