Energy efficient speed planning of electric vehicles for car-following scenario using model-based reinforcement learning
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Lee, Heeyun | - |
dc.contributor.author | Kim, Kyunghyun | - |
dc.contributor.author | Kim, Namwook | - |
dc.contributor.author | Cha, Suk Won | - |
dc.date.accessioned | 2023-02-21T05:38:15Z | - |
dc.date.available | 2023-02-21T05:38:15Z | - |
dc.date.issued | 2022-05 | - |
dc.identifier.issn | 0306-2619 | - |
dc.identifier.issn | 1872-9118 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/erica/handle/2021.sw.erica/111522 | - |
dc.description.abstract | Eco-driving is a term used to refer to a strategy for operating vehicles so as to minimize energy consumption. Without any hardware changes, eco-driving is an effective approach to improving vehicle efficiency by optimizing driving behavior, particularly for autonomous vehicles. Several approaches have been proposed for eco-driving, such as dynamic programming, Pontryagin's minimum principle, and model predictive control; however, it is difficult to control the speed of the vehicle optimally in various driving situations. This study aims to derive an eco-driving strategy for reducing the energy consumption of a vehicle in diverse driving situations, including road slopes and car-following scenarios. A reinforcement learning-based energy efficient speed planning strategy is proposed for autonomous electric vehicles, which learns an optimal control policy through a data-driven learning process. A model-based reinforcement learning algorithm is developed for the eco-driving strategy; based on domain knowledge of the vehicle powertrain, a battery energy consumption model and a longitudinal dynamics model of the vehicle are approximated from the driving data and are used for reinforcement learning. The proposed algorithm is tested using a vehicle simulation and is compared to a global optimal solution obtained using an exact dynamic programming method. The simulation results show that the reinforcement learning algorithm can adjust the speed of the vehicle by considering driving conditions such as the road slope and a safe distance from the leading vehicle while minimizing energy consumption. The reinforcement learning algorithm achieves a near-optimal performance of 93.8% relative to the dynamic programming result. | - |
dc.format.extent | 12 | - |
dc.language | English | - |
dc.language.iso | ENG | - |
dc.publisher | Pergamon Press Ltd. | - |
dc.title | Energy efficient speed planning of electric vehicles for car-following scenario using model-based reinforcement learning | - |
dc.type | Article | - |
dc.publisher.location | United Kingdom | - |
dc.identifier.doi | 10.1016/j.apenergy.2021.118460 | - |
dc.identifier.scopusid | 2-s2.0-85126550643 | - |
dc.identifier.wosid | 000793751900001 | - |
dc.identifier.bibliographicCitation | Applied Energy, v.313, pp. 1-12 | - |
dc.citation.title | Applied Energy | - |
dc.citation.volume | 313 | - |
dc.citation.startPage | 1 | - |
dc.citation.endPage | 12 | - |
dc.type.docType | Article | - |
dc.description.isOpenAccess | N | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Energy & Fuels | - |
dc.relation.journalResearchArea | Engineering | - |
dc.relation.journalWebOfScienceCategory | Energy & Fuels | - |
dc.relation.journalWebOfScienceCategory | Engineering, Chemical | - |
dc.subject.keywordPlus | FUEL EFFICIENT | - |
dc.subject.keywordPlus | OPTIMIZATION | - |
dc.subject.keywordPlus | MANAGEMENT | - |
dc.subject.keywordPlus | ECONOMY | - |
dc.subject.keywordPlus | SAFE | - |
dc.subject.keywordAuthor | Eco-driving | - |
dc.subject.keywordAuthor | Electric vehicle | - |
dc.subject.keywordAuthor | Optimal control | - |
dc.subject.keywordAuthor | Reinforcement learning | - |
dc.identifier.url | https://www.sciencedirect.com/science/article/pii/S0306261921016858?via%3Dihub | - |