Supervised-learning-based hour-ahead demand response for a behavior-based home energy management system approximating MILP optimization
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Huy Truong Dinh | - |
dc.contributor.author | Lee, Kyu-haeng | - |
dc.contributor.author | Kim, Daehee | - |
dc.date.accessioned | 2022-08-16T01:40:09Z | - |
dc.date.available | 2022-08-16T01:40:09Z | - |
dc.date.issued | 2022-09 | - |
dc.identifier.issn | 0306-2619 | - |
dc.identifier.issn | 1872-9118 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/sch/handle/2021.sw.sch/21317 | - |
dc.description.abstract | The demand response (DR) program of a traditional home energy management system (HEMS) usually controls or schedules appliances to monitor energy usage, minimize energy cost, and maximize user comfort. In this study, instead of interfering with appliances and changing residents' behavior, the proposed hour-ahead DR strategy first learns the appliance usage behavior of residents; subsequently, based on this knowledge, it silently controls the energy storage system (ESS) and renewable energy system (RES) to minimize the daily energy cost. To accomplish this goal, the deep neural networks (DNNs) of the proposed DR strategy approximate the MILP optimization using supervised learning. The training datasets are created from the optimal outputs of an MILP solver run on historical data. After training, in each time slot, these DNNs control the ESS and RES using real-time data from the surrounding environment. For comparison, we develop two alternative strategies: a multi-agent reinforcement-learning-based strategy, which is an hour-ahead strategy, and a forecast-based MILP strategy, which is a day-ahead strategy. For evaluation and verification, the proposed approaches are applied to three real-world homes using real-time global horizontal irradiation and price data. Numerical results verify the effectiveness and superiority of the proposed MILP-based supervised learning strategy in terms of daily energy cost. | - |
dc.format.extent | 17 | - |
dc.language | English | - |
dc.language.iso | ENG | - |
dc.publisher | Pergamon Press Ltd. | - |
dc.title | Supervised-learning-based hour-ahead demand response for a behavior-based home energy management system approximating MILP optimization | - |
dc.type | Article | - |
dc.publisher.location | United Kingdom | - |
dc.identifier.doi | 10.1016/j.apenergy.2022.119382 | - |
dc.identifier.scopusid | 2-s2.0-85132238913 | - |
dc.identifier.wosid | 000833369400005 | - |
dc.identifier.bibliographicCitation | Applied Energy, v.321, no.0, pp 1 - 17 | - |
dc.citation.title | Applied Energy | - |
dc.citation.volume | 321 | - |
dc.citation.number | 0 | - |
dc.citation.startPage | 1 | - |
dc.citation.endPage | 17 | - |
dc.type.docType | Article | - |
dc.description.isOpenAccess | N | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Energy & Fuels | - |
dc.relation.journalResearchArea | Engineering | - |
dc.relation.journalWebOfScienceCategory | Energy & Fuels | - |
dc.relation.journalWebOfScienceCategory | Engineering, Chemical | - |
dc.subject.keywordPlus | COMFORT | - |
dc.subject.keywordAuthor | Behavior-based HEMS | - |
dc.subject.keywordAuthor | MILP | - |
dc.subject.keywordAuthor | Supervised learning | - |
dc.subject.keywordAuthor | Deep reinforcement learning | - |
dc.subject.keywordAuthor | Demand response | - |
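The abstract's core idea — train a supervised model on an MILP solver's optimal decisions, then use the cheap model for hour-ahead control — can be illustrated with a minimal, hypothetical sketch. The toy "solver", its price/solar inputs, and the logistic-regression surrogate below are illustrative stand-ins, not the paper's actual MILP formulation or DNN architecture.

```python
# Hypothetical sketch: a supervised surrogate imitates an optimizer's decisions.
# The "solver" here is a toy rule standing in for a real MILP solver; the paper
# itself uses DNNs trained on MILP solver outputs, not this logistic model.
import math
import random

def milp_like_solver(price, solar):
    # Stand-in for the MILP solver: charge the ESS (label 1) when the
    # effective price (price minus a solar credit) is low, else discharge (0).
    return 1 if price - 0.5 * solar < 0.4 else 0

# Step 1: build a training set from "historical" inputs labeled by the solver.
random.seed(0)
data = [(random.random(), random.random()) for _ in range(500)]
labels = [milp_like_solver(p, s) for p, s in data]

# Step 2: supervised learning — fit a logistic model by batch gradient descent.
w = [0.0, 0.0]
b = 0.0
lr = 0.5
for _ in range(2000):
    gw = [0.0, 0.0]
    gb = 0.0
    for (p, s), y in zip(data, labels):
        pred = 1.0 / (1.0 + math.exp(-(w[0] * p + w[1] * s + b)))
        err = pred - y
        gw[0] += err * p
        gw[1] += err * s
        gb += err
    n = len(data)
    w[0] -= lr * gw[0] / n
    w[1] -= lr * gw[1] / n
    b -= lr * gb / n

def hour_ahead_policy(price, solar):
    # Step 3: real-time control from the trained surrogate — no solver call.
    return 1 if w[0] * price + w[1] * solar + b > 0 else 0

# Agreement between the surrogate and the solver on the training inputs.
agreement = sum(
    hour_ahead_policy(p, s) == milp_like_solver(p, s) for p, s in data
) / len(data)
```

The design point mirrors the paper's motivation: the expensive optimization runs offline to produce labels, while the learned policy makes per-time-slot decisions from real-time inputs at negligible cost.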
Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.
Certain data included herein are derived from the © Web of Science of Clarivate Analytics. All rights reserved.
You may not copy or re-distribute this material in whole or in part without the prior written consent of Clarivate Analytics.