A Real-Time Intelligent Energy Management Strategy for Hybrid Electric Vehicles Using Reinforcement Learning
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Lee, Woong | - |
dc.contributor.author | Jeoung, Haeseong | - |
dc.contributor.author | Park, Dohyun | - |
dc.contributor.author | Kim, Tacksu | - |
dc.contributor.author | Lee, Heeyun | - |
dc.contributor.author | Kim, Namwook | - |
dc.date.accessioned | 2021-07-28T08:11:27Z | - |
dc.date.available | 2021-07-28T08:11:27Z | - |
dc.date.issued | 2021-03 | - |
dc.identifier.issn | 2169-3536 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/erica/handle/2021.sw.erica/105788 | - |
dc.description.abstract | Equivalent Consumption Minimization Strategy (ECMS), a representative energy management strategy for hybrid electric vehicles (HEVs) derived from Pontryagin's minimum principle, is known to produce a near-optimal solution if the costate, or equivalence factor of electricity use, is determined appropriately for the driving conditions. One problem in applying this control concept to real-world scenarios is that the performance of a control parameter is difficult to evaluate precisely before driving is complete, so the costate cannot be determined properly. To address this issue, this study proposes a practical method for estimating an appropriate costate based on Deep Q-Networks (DQNs), a reinforcement learning algorithm that uses a deep neural network to evaluate performance and determine the best control parameter, or costate. The control concept benefits vehicle energy management by using artificial intelligence (AI) to select the control parameter most relevant to stochastic conditions or future driving information, while optimal control is conducted deterministically by ECMS once the control parameter is given. In short, only the implicit part of the optimal controller is solved via AI. In the simulation results, the proposed control concept not only outperforms an existing ECMS that uses an adaptive technique for determining the costate, but is also highly feasible in that it does not need a model for evaluating performance. CC BY | - |
dc.format.extent | 10 | - |
dc.language | English | - |
dc.language.iso | ENG | - |
dc.publisher | Institute of Electrical and Electronics Engineers Inc. | - |
dc.title | A Real-Time Intelligent Energy Management Strategy for Hybrid Electric Vehicles Using Reinforcement Learning | - |
dc.type | Article | - |
dc.publisher.location | United States | - |
dc.identifier.doi | 10.1109/ACCESS.2021.3079903 | - |
dc.identifier.scopusid | 2-s2.0-85105881140 | - |
dc.identifier.wosid | 000673556500001 | - |
dc.identifier.bibliographicCitation | IEEE Access, v.9, pp 72759 - 72768 | - |
dc.citation.title | IEEE Access | - |
dc.citation.volume | 9 | - |
dc.citation.startPage | 72759 | - |
dc.citation.endPage | 72768 | - |
dc.type.docType | Article | - |
dc.description.isOpenAccess | Y | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalResearchArea | Engineering | - |
dc.relation.journalResearchArea | Telecommunications | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Information Systems | - |
dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
dc.relation.journalWebOfScienceCategory | Telecommunications | - |
dc.subject.keywordPlus | PONTRYAGIN'S MINIMUM PRINCIPLE | - |
dc.subject.keywordPlus | PMP-BASED CONTROL | - |
dc.subject.keywordPlus | PERFORMANCE | - |
dc.subject.keywordAuthor | Electronic countermeasures | - |
dc.subject.keywordAuthor | Energy management | - |
dc.subject.keywordAuthor | Batteries | - |
dc.subject.keywordAuthor | Reinforcement learning | - |
dc.subject.keywordAuthor | Engines | - |
dc.subject.keywordAuthor | Fuels | - |
dc.subject.keywordAuthor | State of charge | - |
dc.subject.keywordAuthor | Energy management strategy | - |
dc.subject.keywordAuthor | adaptive ECMS | - |
dc.subject.keywordAuthor | machine learning | - |
dc.subject.keywordAuthor | reinforcement learning | - |
dc.subject.keywordAuthor | hybrid electric vehicles | - |
dc.subject.keywordAuthor | deep Q-learning | - |
dc.subject.keywordAuthor | optimal control | - |
dc.identifier.url | https://ieeexplore.ieee.org/document/9430498 | - |
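The abstract describes a two-layer concept: a DQN supplies the costate (equivalence factor), and ECMS then deterministically minimizes an equivalent fuel cost over the engine/battery power split. The sketch below illustrates only the ECMS layer in Python; the toy affine fuel model, the power limits, and the fixed costate value standing in for the DQN's output are all invented for illustration and are not the authors' vehicle model.

```python
# Minimal ECMS sketch: for a demanded power, choose the engine/battery
# split minimizing the equivalent fuel cost
#   J(u) = m_fuel(P_eng) + s * P_batt / Q_LHV,
# where the costate s would, in the paper's concept, come from a DQN.
# All models and numbers here are simplified illustrative assumptions.

Q_LHV = 42.5e6  # lower heating value of gasoline [J/kg], typical value

def fuel_rate(p_eng):
    """Toy engine fuel-rate model [kg/s]: idle offset + affine term."""
    if p_eng <= 0.0:
        return 0.0
    return 1e-4 + p_eng / (0.35 * Q_LHV)  # roughly 35% peak efficiency

def ecms_split(p_demand, s, p_batt_limits=(-30e3, 30e3), n_grid=121):
    """Grid-search the battery power that minimizes equivalent cost.

    Returns (p_eng, p_batt) in watts for a fixed equivalence factor s.
    """
    lo, hi = p_batt_limits
    best = None
    for i in range(n_grid):
        p_batt = lo + (hi - lo) * i / (n_grid - 1)
        p_eng = p_demand - p_batt
        if p_eng < 0.0:
            continue  # engine cannot absorb power in this toy model
        cost = fuel_rate(p_eng) + s * p_batt / Q_LHV
        if best is None or cost < best[0]:
            best = (cost, p_eng, p_batt)
    return best[1], best[2]

# A high equivalence factor makes electricity "expensive", so the
# optimizer leans on the engine (and recharges); a low factor favors
# draining the battery.
eng_hi, batt_hi = ecms_split(20e3, s=3.5)
eng_lo, batt_lo = ecms_split(20e3, s=1.5)
```

Under these assumptions, `s=3.5` drives the split toward engine power with battery charging, while `s=1.5` yields pure electric operation; the DQN's job in the proposed method is precisely to pick `s` so that such decisions match the (unknown) future driving conditions.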