A High-Performance Tensorial Evolutionary Computation for Solving Spatial Optimization Problems
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Lei, Si-Chao | - |
dc.contributor.author | Guo, Hong-Shu | - |
dc.contributor.author | Xiao, Xiao-Lin | - |
dc.contributor.author | Gong, Yue-Jiao | - |
dc.contributor.author | Zhang, Jun | - |
dc.date.accessioned | 2024-04-12T05:00:20Z | - |
dc.date.available | 2024-04-12T05:00:20Z | - |
dc.date.issued | 2023-11 | - |
dc.identifier.issn | 1865-0929 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/erica/handle/2021.sw.erica/118720 | - |
dc.description.abstract | As a newly emerged evolutionary algorithm, tensorial evolution (TE) has shown promising performance in solving spatial optimization problems owing to its tensorial representation and tensorial evolutionary patterns. During its iterations, the TE algorithm sequentially performs different tensorial evolutionary operations on a single individual or a pair of individuals in the population. Since tensor algebra considers all dimensions of the data simultaneously, TE is explicitly parallel at the dimension level. However, it is burdened with intensive tensor calculations, especially on large-scale problems, and extending TE to solve such problems efficiently is one of the most pressing open issues. Toward this goal, we first devise an efficient TE (ETE) algorithm that expresses all the evolutionary processes in a unified tensorial computational model. Unlike TE, ETE executes the tensorial evolutionary operations directly on the whole population rather than on single individuals, achieving explicit parallelism at both the dimension and individual levels. To further improve the computational efficiency of ETE, we leverage the compute unified device architecture (CUDA), which provides access to the computational resources of graphics processing units (GPUs). We then present a CUDA-based implementation of ETE (Cu-ETE) that uses the GPU to accelerate tensorial evolutionary computation; notably, Cu-ETE is the first implementation of tensorial evolution on a GPU. Experimental results demonstrate the improved computational efficiency of both ETE (CPU) and Cu-ETE (GPU) over TE (CPU). By harnessing the power of tensor algebra and GPU acceleration, Cu-ETE opens up new possibilities for efficiently solving larger and more complex problems across various fields (see the illustrative sketch following this record). © 2024, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. | - |
dc.format.extent | 12 | - |
dc.language | English | - |
dc.language.iso | ENG | - |
dc.publisher | Springer Verlag | - |
dc.title | A High-Performance Tensorial Evolutionary Computation for Solving Spatial Optimization Problems | - |
dc.type | Article | - |
dc.publisher.location | Germany | - |
dc.identifier.doi | 10.1007/978-981-99-8126-7_27 | - |
dc.identifier.scopusid | 2-s2.0-85178601358 | - |
dc.identifier.bibliographicCitation | Communications in Computer and Information Science, v.1961 CCIS, pp. 340-351 | - |
dc.citation.title | Communications in Computer and Information Science | - |
dc.citation.volume | 1961 CCIS | - |
dc.citation.startPage | 340 | - |
dc.citation.endPage | 351 | - |
dc.type.docType | Conference paper | - |
dc.description.isOpenAccess | N | - |
dc.description.journalRegisteredClass | scopus | - |
dc.subject.keywordAuthor | Compute Unified Device Architecture (CUDA) | - |
dc.subject.keywordAuthor | Evolutionary computation | - |
dc.subject.keywordAuthor | Graphics Processing Unit (GPU) | - |
dc.subject.keywordAuthor | Tensor algebra | - |
dc.identifier.url | https://link.springer.com/chapter/10.1007/978-981-99-8126-7_27 | - |
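The abstract's central idea is that evolutionary operators can be executed on the whole population as tensor operations, so that both the problem dimensions and the individuals are processed in parallel. The short Python/NumPy sketch below illustrates that pattern under stated assumptions. It is not the authors' ETE or Cu-ETE; every function name, parameter, and the toy fitness function here is a hypothetical choice made for illustration only.

```python
# Minimal illustrative sketch (NOT the authors' ETE/Cu-ETE): every
# evolutionary operator acts on the whole (N, D) population array at once,
# so dimensions and individuals are both handled by vectorized operations.
# All names, parameters, and the toy fitness below are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sphere(pop):
    # Toy fitness (minimization): sum of squares, shape (N, D) -> (N,).
    return np.sum(pop ** 2, axis=1)

def tensorial_generation(pop, fitness_fn, pc=0.9, pm=0.1, sigma=0.1):
    # One generation expressed entirely as whole-population array operations.
    n, d = pop.shape
    # Uniform crossover: a boolean mask picks, gene by gene, which of two
    # randomly chosen parents each child copies from.
    parents_a = pop[rng.integers(0, n, size=n)]
    parents_b = pop[rng.integers(0, n, size=n)]
    gene_mask = rng.random((n, d)) < 0.5
    children = np.where(gene_mask, parents_a, parents_b)
    # Per-individual crossover probability, broadcast over the gene axis.
    cross_mask = rng.random((n, 1)) < pc
    children = np.where(cross_mask, children, parents_a)
    # Gaussian mutation as one elementwise operation on the population.
    mut_mask = rng.random((n, d)) < pm
    children = children + mut_mask * rng.normal(0.0, sigma, size=(n, d))
    # (mu + lambda)-style survivor selection over parents and children.
    merged = np.vstack([pop, children])
    order = np.argsort(fitness_fn(merged))
    return merged[order[:n]]

pop = rng.uniform(-5.0, 5.0, size=(64, 16))  # 64 individuals, 16 dimensions
for _ in range(200):
    pop = tensorial_generation(pop, sphere)
print("best fitness:", sphere(pop).min())
```

Because every operator above is a whole-array computation rather than a per-individual loop, the same pattern maps naturally onto GPU array libraries or hand-written CUDA kernels, which is the property the abstract credits for Cu-ETE's efficiency.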