Detailed Information

Adaptive Granularity Learning Distributed Particle Swarm Optimization for Large-Scale Optimization

Authors
Wang, Zi-Jia; Zhan, Zhi-Hui; Kwong, Sam; Jin, Hu; Zhang, Jun
Issue Date
Mar-2021
Publisher
IEEE Advancing Technology for Humanity
Keywords
Adaptive granularity learning distributed particle swarm optimization (AGLDPSO); large-scale optimization; locality-sensitive hashing (LSH); logistic regression (LR); master–slave multisubpopulation distributed
Citation
IEEE Transactions on Cybernetics, v.51, no.3, pp.1175 - 1188
Indexed
SCIE
SCOPUS
Journal Title
IEEE Transactions on Cybernetics
Volume
51
Number
3
Start Page
1175
End Page
1188
URI
https://scholarworks.bwise.kr/erica/handle/2021.sw.erica/445
DOI
10.1109/TCYB.2020.2977956
ISSN
2168-2267
Abstract
Large-scale optimization has become a significant and challenging research topic in the evolutionary computation (EC) community. Although many improved EC algorithms have been proposed for large-scale optimization, slow convergence in the huge search space and entrapment in local optima among massive suboptima remain the key challenges. Targeting these two issues, this article proposes an adaptive granularity learning distributed particle swarm optimization (AGLDPSO) with the help of machine-learning techniques, including clustering analysis based on locality-sensitive hashing (LSH) and adaptive granularity control based on logistic regression (LR). In AGLDPSO, a master-slave multisubpopulation distributed model is adopted, where the entire population is divided into multiple subpopulations and these subpopulations are co-evolved. Compared with other large-scale optimization algorithms that use single-population evolution or a centralized mechanism, the multisubpopulation distributed co-evolution mechanism fully exchanges evolutionary information among different subpopulations to further enhance population diversity. Furthermore, we propose an adaptive granularity learning strategy (AGLS) based on LSH and LR. The AGLS helps determine an appropriate subpopulation size to control the learning granularity of the distributed subpopulations in different evolutionary states, balancing the exploration ability needed to escape from massive suboptima and the exploitation ability needed to converge in the huge search space. The experimental results show that AGLDPSO performs better than, or at least comparably with, other state-of-the-art large-scale optimization algorithms, including the winner of the competition on large-scale optimization, on all 35 benchmark functions from the IEEE Congress on Evolutionary Computation (IEEE CEC2010) and IEEE CEC2013 large-scale optimization test suites.
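
The abstract describes a master-slave multisubpopulation distributed model in which the population is split into co-evolving subpopulations whose size (the learning granularity) is adapted during the run. The sketch below is a minimal, hypothetical illustration of that partitioning idea only: it uses plain PSO updates on a toy sphere function, and a crude fixed schedule stands in for the paper's LSH-based clustering and LR-based adaptive granularity learning strategy (AGLS), which are not reproduced here. All function names and parameter values are assumptions for illustration, not the authors' implementation.

```python
# Toy sketch of a master-slave multisubpopulation PSO (NOT the authors' AGLDPSO).
# The LSH clustering and logistic-regression granularity control are replaced by
# a simple hand-picked schedule purely to illustrate population partitioning.
import numpy as np

def sphere(x):
    """Toy objective to minimize."""
    return float(np.sum(x ** 2))

def pso_step(positions, velocities, pbest, pbest_val, gbest, w=0.7, c1=1.5, c2=1.5):
    """One PSO iteration for a single subpopulation (the 'slave' task)."""
    r1 = np.random.rand(*positions.shape)
    r2 = np.random.rand(*positions.shape)
    velocities = w * velocities + c1 * r1 * (pbest - positions) + c2 * r2 * (gbest - positions)
    positions = positions + velocities
    vals = np.array([sphere(p) for p in positions])
    improved = vals < pbest_val
    pbest[improved] = positions[improved]
    pbest_val[improved] = vals[improved]
    return positions, velocities, pbest, pbest_val

def distributed_pso(dim=30, pop_size=120, iterations=200, seed=0):
    rng = np.random.default_rng(seed)
    positions = rng.uniform(-100, 100, (pop_size, dim))
    velocities = np.zeros((pop_size, dim))
    pbest = positions.copy()
    pbest_val = np.array([sphere(p) for p in positions])
    sub_size = 40  # initial learning granularity (assumed value)
    for it in range(iterations):
        # "Master": repartition the whole population into subpopulations of sub_size.
        order = rng.permutation(pop_size)
        for start in range(0, pop_size, sub_size):
            idx = order[start:start + sub_size]
            local_gbest = pbest[idx][np.argmin(pbest_val[idx])]
            (positions[idx], velocities[idx],
             pbest[idx], pbest_val[idx]) = pso_step(
                positions[idx], velocities[idx], pbest[idx], pbest_val[idx], local_gbest)
        # Crude stand-in for the AGLS: shrink subpopulations halfway through the run
        # to shift from exploration (large groups) toward exploitation (small groups).
        if it == iterations // 2:
            sub_size = 20
    return pbest[np.argmin(pbest_val)], float(np.min(pbest_val))

if __name__ == "__main__":
    best_x, best_val = distributed_pso()
    print("best value found:", best_val)
```

Because the random repartitioning shuffles individuals across subpopulations every iteration, evolutionary information is exchanged between groups, which is the diversity-preserving effect the abstract attributes to the multisubpopulation co-evolution mechanism.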
Appears in
Collections
COLLEGE OF ENGINEERING SCIENCES > SCHOOL OF ELECTRICAL ENGINEERING > 1. Journal Articles

Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

JIN, HU
ERICA College of Engineering Sciences (SCHOOL OF ELECTRICAL ENGINEERING)
