SKMD: Single Kernel on Multiple Devices for Transparent CPU-GPU Collaboration
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Lee, Janghaeng | - |
dc.contributor.author | Samadi, Mehrzad | - |
dc.contributor.author | Park, Yongjun | - |
dc.contributor.author | Mahlke, Scott | - |
dc.date.available | 2020-07-10T07:01:35Z | - |
dc.date.created | 2020-07-06 | - |
dc.date.issued | 2015-09 | - |
dc.identifier.issn | 0734-2071 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/hongik/handle/2020.sw.hongik/9509 | - |
dc.description.abstract | Heterogeneous computing on CPUs and GPUs has traditionally used fixed roles for each device: the GPU handles data-parallel work by taking advantage of its massive number of cores, while the CPU handles non-data-parallel work, such as sequential code or data transfer management. This work distribution can be a poor solution, as it underutilizes the CPU, has difficulty generalizing beyond the single CPU-GPU combination, and may waste a large fraction of time transferring data. Further, CPUs are performance-competitive with GPUs on many workloads, so simply partitioning work based on fixed roles may be a poor choice. In this article, we present the single-kernel multiple devices (SKMD) system, a framework that transparently orchestrates collaborative execution of a single data-parallel kernel across multiple asymmetric CPUs and GPUs. The programmer is responsible for developing a single data-parallel kernel in OpenCL, while the system automatically partitions the workload across an arbitrary set of devices, generates kernels to execute the partial workloads, and efficiently merges the partial outputs together. The goal is performance improvement by maximally utilizing all available resources to execute the kernel. SKMD handles the difficult challenges of exposed data transfer costs and the performance variations GPUs exhibit with respect to input size. On real hardware, SKMD achieves an average speedup of 28% on a system with one multicore CPU and two asymmetric GPUs, compared to a fastest-device execution strategy, for a set of popular OpenCL kernels. | - |
dc.language | English | - |
dc.language.iso | en | - |
dc.publisher | ASSOC COMPUTING MACHINERY | - |
dc.subject | EFFICIENT | - |
dc.subject | MODEL | - |
dc.title | SKMD: Single Kernel on Multiple Devices for Transparent CPU-GPU Collaboration | - |
dc.type | Article | - |
dc.contributor.affiliatedAuthor | Park, Yongjun | - |
dc.identifier.doi | 10.1145/2798725 | - |
dc.identifier.scopusid | 2-s2.0-84940970801 | - |
dc.identifier.wosid | 000361156500003 | - |
dc.identifier.bibliographicCitation | ACM TRANSACTIONS ON COMPUTER SYSTEMS, v.33, no.3 | - |
dc.relation.isPartOf | ACM TRANSACTIONS ON COMPUTER SYSTEMS | - |
dc.citation.title | ACM TRANSACTIONS ON COMPUTER SYSTEMS | - |
dc.citation.volume | 33 | - |
dc.citation.number | 3 | - |
dc.type.rims | ART | - |
dc.type.docType | Article | - |
dc.description.journalClass | 1 | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Theory & Methods | - |
dc.subject.keywordPlus | EFFICIENT | - |
dc.subject.keywordPlus | MODEL | - |
dc.subject.keywordAuthor | Compiler | - |
dc.subject.keywordAuthor | runtime | - |
dc.subject.keywordAuthor | CPU | - |
dc.subject.keywordAuthor | GPU | - |
dc.subject.keywordAuthor | collaboration | - |
dc.subject.keywordAuthor | optimization | - |
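The abstract describes SKMD's core mechanism: split a single data-parallel kernel's index space across a CPU and multiple GPUs in proportion to each device's throughput, run the partial workloads, and merge the partial outputs on the host. A minimal Python sketch of that partition-and-merge logic follows; the device count, relative throughput numbers, and the element-wise "kernel" are all hypothetical stand-ins (the real system operates on OpenCL kernels and measured device performance, which also accounts for data transfer costs):

```python
def partition(total, throughputs):
    # Split `total` work-items proportionally to relative device throughput,
    # assigning any rounding remainder to the last device.
    s = sum(throughputs)
    sizes = [total * t // s for t in throughputs]
    sizes[-1] += total - sum(sizes)
    return sizes

def run_partial_kernel(data, start, count):
    # Stand-in for a partial kernel launch: each "device" applies the same
    # element-wise operation, but only over its assigned sub-range.
    return [x * x for x in data[start:start + count]]

# Hypothetical setup: one multicore CPU and two asymmetric GPUs.
throughputs = [1, 3, 6]          # made-up relative work-items/sec
data = list(range(100))
sizes = partition(len(data), throughputs)

# Each device computes its slice; the host merges partial outputs in order.
merged, offset = [], 0
for n in sizes:
    merged.extend(run_partial_kernel(data, offset, n))
    offset += n
```

After the loop, `merged` is identical to what a single device would have produced over the whole index space, which is the correctness condition SKMD's output-merging step must preserve.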
Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.