Detailed Information

A Real-Time Intelligent Speed Optimization Planner Using Reinforcement Learning

Authors
Lee, W.; Han, J.; Zhang, Y.; Karbowski, D.; Rousseau, A.; Kim, N.
Issue Date
Apr-2021
Publisher
SAE International
Citation
SAE Technical Papers, no. 2021
Indexed
SCOPUS
Journal Title
SAE Technical Papers
Number
2021
URI
https://scholarworks.bwise.kr/erica/handle/2021.sw.erica/105804
DOI
10.4271/2021-01-0434
ISSN
0148-7191
Abstract
As connectivity and sensing technologies mature, automated vehicles can predict future driving situations and use this information to drive more energy-efficiently than human-driven vehicles. However, information beyond the limited connectivity and sensing range is difficult to predict and utilize, which limits the energy-saving potential of energy-efficient driving. We therefore combine a conventional speed optimization planner, developed in our previous work, with reinforcement learning to propose a real-time intelligent speed optimization planner for connected and automated vehicles. We briefly summarize the conventional speed optimization planner with limited information, which is based on closed-form energy-optimal solutions, and present the multiple parameters that determine its reference speed trajectories. We then use a deep reinforcement learning (DRL) algorithm, namely deep Q-learning, to learn a policy for adjusting these parameters in real time in response to dynamically changing situations, in order to realize the full potential of energy-efficient driving. The model-free DRL algorithm learns the optimal policy from the system's experience, by iteratively interacting with different driving scenarios, without requiring any increase in the limited connectivity and sensing range. The parameter-adaptation policy is trained in a high-fidelity simulation framework that can simulate multiple vehicles with full powertrain models and the interactions between vehicles and their environment. We consider intersection-approaching scenarios with a single traffic light under different signal phase and timing setups. Results show that the learned policy enables the proposed intelligent speed optimization planner to adjust the parameters appropriately in a piecewise-constant manner, leading to additional energy savings over the conventional speed optimization planner without increasing total travel time. © 2021 SAE International; UChicago Argonne, LLC.
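
The abstract does not spell out the planner's parameter set, the agent's state features, or the reward, so the following is only a minimal sketch, under assumed choices, of the kind of deep Q-learning loop it describes: an agent observes the approach to a signalized intersection and selects piecewise-constant adjustments to a speed-planner parameter. The state dimensions, action set, network size, and hyperparameters below are illustrative assumptions, and the paper's high-fidelity multi-vehicle simulator with full powertrain models is not reproduced.

# Minimal sketch (not the authors' code) of a deep Q-learning agent that picks
# piecewise-constant adjustments to a speed-planner parameter while approaching
# a signalized intersection. All dimensions and numbers are assumptions.
import random
from collections import deque

import torch
import torch.nn as nn

STATE_DIM = 4   # assumed features: speed, distance to light, signal phase, time to phase change
N_ACTIONS = 3   # assumed actions: decrease / keep / increase the planner parameter

class QNet(nn.Module):
    """Small fully connected network mapping a state to one Q-value per action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )

    def forward(self, x):
        return self.net(x)

q_net, target_net = QNet(), QNet()
target_net.load_state_dict(q_net.state_dict())
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)   # stores (state, action, reward, next_state, done) tuples
gamma, eps = 0.99, 0.1

def select_action(state):
    """Epsilon-greedy choice of the next piecewise-constant parameter adjustment."""
    if random.random() < eps:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(q_net(torch.tensor(state).float()).argmax())

def train_step(batch_size=64):
    """One DQN update from replayed transitions (done stored as 0.0 or 1.0)."""
    if len(replay) < batch_size:
        return
    s, a, r, s2, d = map(torch.tensor, zip(*random.sample(replay, batch_size)))
    s, s2, r, d = s.float(), s2.float(), r.float(), d.float()
    q = q_net(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r + gamma * (1.0 - d) * target_net(s2).max(1).values
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # Periodically: target_net.load_state_dict(q_net.state_dict())

In the paper, the transitions would come from the high-fidelity simulation framework at each decision step, and the reward would presumably trade off energy consumption against travel time; both are left abstract in this sketch.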
Appears in Collections
COLLEGE OF ENGINEERING SCIENCES > DEPARTMENT OF MECHANICAL ENGINEERING > 1. Journal Articles


Related Researcher

Kim, Nam wook
ERICA College of Engineering (DEPARTMENT OF MECHANICAL ENGINEERING)
