EARL-Light: An Evolutionary Algorithm-Assisted Reinforcement Learning for Traffic Signal Control
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Chen, Jing-Yuan | - |
dc.contributor.author | Wei, Feng-Feng | - |
dc.contributor.author | Chen, Tai-You | - |
dc.contributor.author | Hu, Xiao-Min | - |
dc.contributor.author | Jeon, Sang-Woon | - |
dc.contributor.author | Wang, Yang | - |
dc.contributor.author | Chen, Wei-Neng | - |
dc.date.accessioned | 2025-06-12T06:33:41Z | - |
dc.date.available | 2025-06-12T06:33:41Z | - |
dc.date.issued | 2025-01 | - |
dc.identifier.issn | 1062-922X | - |
dc.identifier.uri | https://scholarworks.bwise.kr/erica/handle/2021.sw.erica/125556 | - |
dc.description.abstract | Traffic signal control (TSC) problems have received increasing attention with the development of smart cities. Reinforcement learning (RL) models TSC as a Markov decision process and learns the timing relationships of traffic scheduling from massive historical data. Due to the uncertainty and mutability of TSC problems, existing RL methods face bottlenecks in diversity and are prone to becoming trapped in local optima. To alleviate this predicament, this paper combines evolutionary optimization and RL to propose an evolutionary algorithm-assisted reinforcement learning (EARL-Light) method for TSC problems. EARL-Light is a population-based algorithm in which each individual represents a policy, and a population of individuals is evolved to search for near-optimal policies. The diversified search ability of evolutionary optimization helps the algorithm escape local optima for global optimization, while gradient-based learning in RL enables fast convergence. Extensive experiments on seven real-world traffic datasets demonstrate that EARL-Light achieves shorter travel time with fast convergence. © 2024 IEEE. | - |
dc.format.extent | 8 | - |
dc.language | English | - |
dc.language.iso | ENG | - |
dc.publisher | Institute of Electrical and Electronics Engineers Inc. | - |
dc.title | EARL-Light: An Evolutionary Algorithm-Assisted Reinforcement Learning for Traffic Signal Control | - |
dc.type | Article | - |
dc.identifier.doi | 10.1109/SMC54092.2024.10831906 | - |
dc.identifier.scopusid | 2-s2.0-85217879256 | - |
dc.identifier.bibliographicCitation | Conference Proceedings - IEEE International Conference on Systems, Man and Cybernetics, pp 1342 - 1349 | - |
dc.citation.title | Conference Proceedings - IEEE International Conference on Systems, Man and Cybernetics | - |
dc.citation.startPage | 1342 | - |
dc.citation.endPage | 1349 | - |
dc.type.docType | Conference paper | - |
dc.description.isOpenAccess | N | - |
dc.description.journalRegisteredClass | scopus | - |
dc.subject.keywordAuthor | DDQN | - |
dc.subject.keywordAuthor | Genetic algorithm | - |
dc.subject.keywordAuthor | gradient transfer | - |
dc.subject.keywordAuthor | shared experience | - |
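The abstract describes a population-based hybrid in which evolutionary search maintains policy diversity while RL gradient updates provide fast local refinement. A minimal toy sketch of that general idea is shown below; it is not the paper's implementation. The quadratic fitness function, the analytic gradient standing in for an RL policy gradient, and all hyperparameters are illustrative assumptions.

```python
import random

def fitness(theta):
    # Toy stand-in for negative travel time; optimum at theta = (1, -2).
    return -((theta[0] - 1.0) ** 2 + (theta[1] + 2.0) ** 2)

def gradient(theta):
    # Analytic gradient of the toy fitness; in EARL-Light this role is
    # played by an RL (e.g. DDQN-style) gradient signal.
    return [-2.0 * (theta[0] - 1.0), -2.0 * (theta[1] + 2.0)]

def evolve(pop_size=20, generations=50, lr=0.1, seed=0):
    rng = random.Random(seed)
    # Each individual is a policy parameter vector.
    pop = [[rng.uniform(-5, 5), rng.uniform(-5, 5)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elites = pop[: pop_size // 4]
        # "Gradient transfer" analogue: refine elites with one gradient
        # ascent step on the fitness.
        elites = [[t + lr * g for t, g in zip(e, gradient(e))] for e in elites]
        # Genetic operators: averaging crossover plus Gaussian mutation
        # refill the rest of the population, preserving search diversity.
        children = []
        while len(elites) + len(children) < pop_size:
            a, b = rng.sample(elites, 2)
            children.append([(x + y) / 2 + rng.gauss(0, 0.3)
                             for x, y in zip(a, b)])
        pop = elites + children
    return max(pop, key=fitness)

best = evolve()
```

The division of labor mirrors the abstract: mutation and crossover keep the population spread out (global search), while the gradient step drives the best individuals quickly toward a nearby optimum (fast convergence).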