Detailed Information


A Distributed Swarm Optimizer With Adaptive Communication for Large-Scale Optimization

Authors
Yang, Qiang; Chen, Wei-Neng; Gu, Tianlong; Zhang, Huaxiang; Yuan, Huaqiang; Kwong, Sam; Zhang, Jun
Issue Date
Jul-2020
Publisher
IEEE
Keywords
Distributed evolutionary algorithms; elite-guided learning (EGL); high-dimensional problems; large-scale optimization; particle swarm optimization (PSO)
Citation
IEEE Transactions on Cybernetics, v.50, no.7, pp. 3393-3408
Pages
16
Indexed
SCIE
SCOPUS
Journal Title
IEEE Transactions on Cybernetics
Volume
50
Number
7
Start Page
3393
End Page
3408
URI
https://scholarworks.bwise.kr/erica/handle/2021.sw.erica/115419
DOI
10.1109/TCYB.2019.2904543
ISSN
2168-2267
2168-2275
Abstract
Large-scale optimization problems with high dimensionality and high computational cost are now ubiquitous. To tackle such challenging problems efficiently, devising distributed evolutionary computation algorithms is imperative. To this end, this paper proposes a distributed swarm optimizer based on a special master-slave model. In this optimizer, the master is mainly responsible for communication with the slaves, while each slave iterates a swarm to traverse the solution space. An asynchronous and adaptive communication strategy based on a request-response mechanism is devised to let the slaves communicate with the master efficiently; in particular, the communication between the master and each slave is adaptively triggered during the iteration. To help the slaves search the space efficiently, an elite-guided learning strategy is designed that uses elite particles in the current swarm and the historically best solutions found by different slaves to guide the update of particles. Together, the distributed optimizer asynchronously iterates multiple swarms to collaboratively seek the optimum in parallel. Extensive experiments on a widely used large-scale benchmark set substantiate that the distributed optimizer can: 1) achieve competitive effectiveness in terms of solution quality compared with state-of-the-art large-scale methods; 2) accelerate execution compared with the sequential algorithm, obtaining almost linear speedup as the number of cores increases; and 3) preserve good scalability to higher-dimensional problems. © 2013 IEEE.
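The abstract describes a master-slave architecture in which each slave iterates its own swarm, communication with the master is triggered adaptively via a request-response exchange, and particle updates are guided by elites and the historically best solutions of other slaves. The following is a minimal, hypothetical single-process simulation of that architecture on the sphere function; it is not the authors' implementation, and all class names, the stagnation trigger, and parameter values are invented for illustration:

```python
import random

def sphere(x):
    """Benchmark objective: f(x) = sum(x_i^2), minimum 0 at the origin."""
    return sum(v * v for v in x)

class Master:
    """Archives the historically best solution reported by each slave."""
    def __init__(self, n_slaves):
        self.archive = [None] * n_slaves  # (fitness, position) per slave

    def exchange(self, slave_id, best):
        # Request-response: the slave sends its best solution and
        # receives the other slaves' historical bests in reply.
        self.archive[slave_id] = best
        return [b for i, b in enumerate(self.archive)
                if i != slave_id and b is not None]

class Slave:
    """Iterates one swarm; requests elites only when its search stagnates."""
    def __init__(self, dim, size, seed):
        self.rng = random.Random(seed)
        self.pos = [[self.rng.uniform(-5, 5) for _ in range(dim)]
                    for _ in range(size)]
        self.fit = [sphere(p) for p in self.pos]
        self.stagnation = 0
        self.elites = []  # positions received from the master

    def best(self):
        i = min(range(len(self.fit)), key=self.fit.__getitem__)
        return (self.fit[i], self.pos[i][:])

    def step(self):
        prev = min(self.fit)
        guides = [self.best()[1]] + [e[1] for e in self.elites]
        for k in range(len(self.pos)):
            # Elite-guided learning: pull each particle toward a random elite.
            g = self.rng.choice(guides)
            self.pos[k] = [p + self.rng.uniform(0, 1) * (gi - p)
                           for p, gi in zip(self.pos[k], g)]
            self.fit[k] = sphere(self.pos[k])
        # Adaptive trigger: request communication only after repeated
        # iterations without improvement (threshold of 3 is arbitrary).
        self.stagnation = self.stagnation + 1 if min(self.fit) >= prev else 0
        return self.stagnation >= 3

def run(n_slaves=4, dim=10, iters=50):
    master = Master(n_slaves)
    slaves = [Slave(dim, size=20, seed=s) for s in range(n_slaves)]
    for _ in range(iters):
        for sid, sl in enumerate(slaves):
            if sl.step():  # communication adaptively triggered
                sl.elites = master.exchange(sid, sl.best())
                sl.stagnation = 0
    return min(sl.best()[0] for sl in slaves)
```

In the paper the slaves run asynchronously in parallel on separate cores; this sketch serializes them only to keep the example self-contained and deterministic.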
Appears in Collections
COLLEGE OF ENGINEERING SCIENCES > SCHOOL OF ELECTRICAL ENGINEERING > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

ZHANG, Jun
ERICA College of Engineering (SCHOOL OF ELECTRICAL ENGINEERING)
