Detailed Information

Cited 0 times in Web of Science · Cited 0 times in Scopus

Deep Reinforcement Learning for Dynamic Algorithm Selection: A Proof-of-Principle Study on Differential Evolution

Full metadata record
DC Field: Value
dc.contributor.author: Guo, Hongshu
dc.contributor.author: Ma, Yining
dc.contributor.author: Ma, Zeyuan
dc.contributor.author: Chen, Jiacheng
dc.contributor.author: Zhang, Xinglin
dc.contributor.author: Cao, Zhiguang
dc.contributor.author: Zhang, Jun
dc.contributor.author: Gong, Yue-Jiao
dc.date.accessioned: 2024-05-01T06:00:28Z
dc.date.available: 2024-05-01T06:00:28Z
dc.date.issued: 2024-04
dc.identifier.issn: 2168-2216
dc.identifier.issn: 2168-2232
dc.identifier.uri: https://scholarworks.bwise.kr/erica/handle/2021.sw.erica/118903
dc.description.abstract: Evolutionary algorithms, such as differential evolution, excel at solving real-parameter optimization challenges. However, the effectiveness of a single algorithm varies across problem instances, necessitating considerable effort in algorithm selection or configuration. This article addresses this limitation by leveraging the complementary strengths of a group of algorithms and dynamically scheduling them throughout the optimization process for specific problems. We propose a deep reinforcement learning-based dynamic algorithm selection framework to accomplish this task. Our approach models dynamic algorithm selection as a Markov decision process, training an agent in a policy gradient manner to select the most suitable algorithm according to features observed during the optimization process. To empower the agent with the necessary information, our framework incorporates a thoughtful design of landscape and algorithmic features. Meanwhile, we employ a sophisticated deep neural network model to infer the optimal action, ensuring informed algorithm selections. Additionally, an algorithm context restoration mechanism is embedded to facilitate smooth switching among different algorithms. Together, these mechanisms enable our framework to seamlessly select and switch algorithms in a dynamic online fashion. Notably, the proposed framework is simple and generic, offering potential improvements across a broad spectrum of evolutionary algorithms. As a proof-of-principle study, we apply this framework to a group of differential evolution algorithms. The experimental results showcase the remarkable effectiveness of the proposed framework, not only enhancing the overall optimization performance but also demonstrating favorable generalization across different problem classes.
dc.format.extent: 19
dc.language: English
dc.language.iso: ENG
dc.publisher: IEEE Advancing Technology for Humanity
dc.title: Deep Reinforcement Learning for Dynamic Algorithm Selection: A Proof-of-Principle Study on Differential Evolution
dc.type: Article
dc.publisher.location: United States
dc.identifier.doi: 10.1109/TSMC.2024.3374889
dc.identifier.scopusid: 2-s2.0-85190357231
dc.identifier.wosid: 001205842700001
dc.identifier.bibliographicCitation: IEEE Transactions on Systems, Man, and Cybernetics: Systems, pp 1 - 19
dc.citation.title: IEEE Transactions on Systems, Man, and Cybernetics: Systems
dc.citation.startPage: 1
dc.citation.endPage: 19
dc.type.docType: Article; Early Access
dc.description.isOpenAccess: N
dc.description.journalRegisteredClass: scie
dc.description.journalRegisteredClass: scopus
dc.relation.journalResearchArea: Automation & Control Systems
dc.relation.journalResearchArea: Computer Science
dc.relation.journalWebOfScienceCategory: Automation & Control Systems
dc.relation.journalWebOfScienceCategory: Computer Science, Cybernetics
dc.subject.keywordPlus: CONFIGURATION
dc.subject.keywordPlus: OPTIMIZATION
dc.subject.keywordPlus: PARAMETERS
dc.subject.keywordPlus: ENSEMBLE
dc.subject.keywordAuthor: Heuristic algorithms
dc.subject.keywordAuthor: Optimization
dc.subject.keywordAuthor: Statistics
dc.subject.keywordAuthor: Sociology
dc.subject.keywordAuthor: Trajectory
dc.subject.keywordAuthor: Switches
dc.subject.keywordAuthor: Search problems
dc.subject.keywordAuthor: Algorithm selection
dc.subject.keywordAuthor: black-box optimization
dc.subject.keywordAuthor: deep reinforcement learning
dc.subject.keywordAuthor: differential evolution
dc.subject.keywordAuthor: meta-black-box optimization
dc.identifier.url: https://arxiv.org/abs/2403.02131
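The abstract above describes selecting among differential evolution variants online, with a learned policy choosing the next algorithm from features of the current search state. The following is a purely illustrative sketch of that loop, not the authors' implementation: the two DE variants, the single diversity feature, and the threshold rule standing in for the learned deep policy are all hypothetical simplifications.

```python
import random

def sphere(x):
    """Toy objective: sum of squares (minimum 0 at the origin)."""
    return sum(v * v for v in x)

def de_rand_1(pop, f=0.5):
    """DE/rand/1 mutation with greedy selection (crossover omitted for brevity)."""
    new = []
    for i, x in enumerate(pop):
        a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
        trial = [ai + f * (bi - ci) for ai, bi, ci in zip(a, b, c)]
        new.append(trial if sphere(trial) < sphere(x) else x)
    return new

def de_best_1(pop, f=0.5):
    """DE/best/1 mutation: perturb the current best individual."""
    best = min(pop, key=sphere)
    new = []
    for i, x in enumerate(pop):
        b, c = random.sample([p for j, p in enumerate(pop) if j != i], 2)
        trial = [bv + f * (bi - ci) for bv, bi, ci in zip(best, b, c)]
        new.append(trial if sphere(trial) < sphere(x) else x)
    return new

def features(pop):
    """One crude state feature: fitness spread as a diversity proxy."""
    fits = sorted(sphere(x) for x in pop)
    return fits[-1] - fits[0]

def select_algorithm(spread, threshold=1.0):
    """Stand-in for the learned policy: explore while the population is
    diverse, switch to best-guided search once it has converged somewhat."""
    return de_rand_1 if spread > threshold else de_best_1

random.seed(0)
pop = [[random.uniform(-5, 5) for _ in range(5)] for _ in range(20)]
init_best = min(sphere(x) for x in pop)
for epoch in range(10):
    algo = select_algorithm(features(pop))   # observe state, pick a variant
    for _ in range(5):                       # run the chosen variant for an interval
        pop = algo(pop)
best = min(sphere(x) for x in pop)
```

In the paper's framework this selection step is made by a deep network trained with policy gradients over richer landscape and algorithmic features, together with a context restoration mechanism so each variant resumes its internal state smoothly after a switch.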
Appears in Collections: COLLEGE OF ENGINEERING SCIENCES > SCHOOL OF ELECTRICAL ENGINEERING > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

ZHANG, Jun
ERICA College of Engineering Sciences (SCHOOL OF ELECTRICAL ENGINEERING)
