Modelling building HVAC control strategies using a deep reinforcement learning approach
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Nguyen, Anh Tuan | - |
dc.contributor.author | Pham, Duy Hoang | - |
dc.contributor.author | Oo, Bee Lan | - |
dc.contributor.author | Santamouris, Mattheos | - |
dc.contributor.author | Ahn, Yonghan | - |
dc.contributor.author | Lim, Benson T.H. | - |
dc.date.accessioned | 2024-03-29T07:00:41Z | - |
dc.date.available | 2024-03-29T07:00:41Z | - |
dc.date.issued | 2024-05 | - |
dc.identifier.issn | 0378-7788 | - |
dc.identifier.issn | 1872-6178 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/erica/handle/2021.sw.erica/118268 | - |
dc.description.abstract | Heating, ventilation, and air-conditioning (HVAC) systems account for a considerable proportion of total building energy consumption but are also vital for the indoor temperature comfort, indoor air quality, and well-being of building occupants. Developing control strategies for HVAC systems is therefore critical across the total life cycle of any building project. In particular, HVAC and building operations are not stationary but are shaped by environmental dynamism and unexpected disruptions such as users' activities, weather conditions, occupancy rates, and the operation of machinery and systems. This research develops and proposes a strategic control learning framework for HVAC systems using a deep reinforcement learning (DRL) approach. The results show that the proposed Phasic Policy Gradient (PPG) based method is more adaptive to changes in real building environments. Notably, PPG performs better and more reliably than the conventional method for HVAC control optimization, reducing energy consumption by about 2-14% and enhancing indoor temperature comfort, along with a 66% faster convergence rate. Overall, our findings demonstrate that the proposed DRL approach is less resource-intensive than the conventional approach and derives solutions for HVAC control optimization, driven by energy efficiency and indoor temperature comfort, with much less effort. © 2024 Elsevier B.V. | - |
dc.format.extent | 16 | - |
dc.language | English | - |
dc.language.iso | ENG | - |
dc.publisher | Elsevier Ltd | - |
dc.title | Modelling building HVAC control strategies using a deep reinforcement learning approach | - |
dc.type | Article | - |
dc.publisher.location | Switzerland | - |
dc.identifier.doi | 10.1016/j.enbuild.2024.114065 | - |
dc.identifier.scopusid | 2-s2.0-85187789672 | - |
dc.identifier.wosid | 001206511600001 | - |
dc.identifier.bibliographicCitation | Energy and Buildings, v.310, pp 1 - 16 | - |
dc.citation.title | Energy and Buildings | - |
dc.citation.volume | 310 | - |
dc.citation.startPage | 1 | - |
dc.citation.endPage | 16 | - |
dc.type.docType | Article | - |
dc.description.isOpenAccess | N | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Construction & Building Technology | - |
dc.relation.journalResearchArea | Energy & Fuels | - |
dc.relation.journalResearchArea | Engineering | - |
dc.relation.journalWebOfScienceCategory | Construction & Building Technology | - |
dc.relation.journalWebOfScienceCategory | Energy & Fuels | - |
dc.relation.journalWebOfScienceCategory | Engineering, Civil | - |
dc.subject.keywordPlus | ENERGY-CONSUMPTION | - |
dc.subject.keywordPlus | THERMAL COMFORT | - |
dc.subject.keywordPlus | CONTROL-SYSTEMS | - |
dc.subject.keywordPlus | OPTIMIZATION | - |
dc.subject.keywordPlus | PREDICTION | - |
dc.subject.keywordAuthor | Building energy modelling | - |
dc.subject.keywordAuthor | Constrained learning | - |
dc.subject.keywordAuthor | Deep reinforcement learning | - |
dc.subject.keywordAuthor | Energy efficiency | - |
dc.subject.keywordAuthor | Human comfort | - |
dc.subject.keywordAuthor | HVAC control | - |
dc.identifier.url | https://www.sciencedirect.com/science/article/pii/S0378778824001816 | - |
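The abstract describes learning an HVAC control policy that trades off energy consumption against indoor temperature comfort. As an illustration only — the paper uses Phasic Policy Gradient with a building simulator, neither of which is reproduced here — the sketch below swaps in tabular Q-learning on a hypothetical one-zone thermal model. Every name (`ToyBuildingEnv`, `train`, `discretize`), the heating/cooling rates, and the reward weights are assumptions made for this example, not values from the article.

```python
import random

class ToyBuildingEnv:
    """Hypothetical one-zone thermal model (illustrative only; not the
    paper's simulator). Indoor temperature drifts toward the outdoor
    temperature; the HVAC action heats or cools at a fixed rate."""

    def __init__(self, setpoint=22.0, outdoor=30.0):
        self.setpoint = setpoint
        self.outdoor = outdoor
        self.temp = 25.0

    def reset(self):
        self.temp = 25.0
        return self.temp

    def step(self, action):
        # action: 0 = off, 1 = heat (+1.0 C/step), 2 = cool (-1.0 C/step)
        hvac = {0: 0.0, 1: 1.0, 2: -1.0}[action]
        self.temp += 0.1 * (self.outdoor - self.temp) + hvac
        energy = abs(hvac)                         # proxy for energy use
        comfort = -abs(self.temp - self.setpoint)  # discomfort penalty
        # reward trades off comfort against energy, as in the abstract
        return self.temp, comfort - 0.2 * energy

def discretize(temp):
    # round to whole degrees so a small Q-table suffices
    return int(round(temp))

def train(episodes=200, steps=48, eps=0.1, alpha=0.5, gamma=0.9, seed=0):
    """Tabular Q-learning stand-in for the paper's PPG agent."""
    rng = random.Random(seed)
    env = ToyBuildingEnv()
    q = {}  # (state, action) -> value estimate
    for _ in range(episodes):
        s = discretize(env.reset())
        for _ in range(steps):
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.randrange(3)
            else:
                a = max(range(3), key=lambda a2: q.get((s, a2), 0.0))
            temp, r = env.step(a)
            s2 = discretize(temp)
            best_next = max(q.get((s2, a2), 0.0) for a2 in range(3))
            old = q.get((s, a), 0.0)
            q[(s, a)] = old + alpha * (r + gamma * best_next - old)
            s = s2
    return q

if __name__ == "__main__":
    q = train()
    env = ToyBuildingEnv()
    t = env.reset()
    for _ in range(100):
        s = discretize(t)
        t, _ = env.step(max(range(3), key=lambda a2: q.get((s, a2), 0.0)))
    print(f"temperature after greedy control: {t:.1f} C (setpoint 22.0 C)")
```

Tabular Q-learning is used here instead of PPG so the sketch stays dependency-free; the structure of the interaction loop — state observation, action, energy/comfort reward — is the part that carries over to the deep actor-critic setting the article studies.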