Detailed Information


A compiler-based approach for GPGPU performance calibration using TLP modulation (WIP Paper)

Authors
Yu, Yongseung; Kang, Seokwon; Park, Yongjun
Issue Date
Jun-2019
Publisher
Association for Computing Machinery
Keywords
Code Instrumentation; GPU; LLVM; Performance Calibration
Citation
Proceedings of the ACM SIGPLAN Conference on Languages, Compilers, and Tools for Embedded Systems (LCTES), pp. 193-197
Indexed
SCOPUS
Journal Title
Proceedings of the ACM SIGPLAN Conference on Languages, Compilers, and Tools for Embedded Systems (LCTES)
Start Page
193
End Page
197
URI
https://scholarworks.bwise.kr/hanyang/handle/2021.sw.hanyang/147636
DOI
10.1145/3316482.3326343
Abstract
Modern GPUs are among the most successful accelerators, providing outstanding performance gains through the CUDA and OpenCL programming models. For maximum performance, programmers typically try to maximize the number of thread blocks in target programs, and GPUs likewise attempt to allocate the maximum number of thread blocks to their GPU cores. However, many recent studies have pointed out that simply allocating the maximum number of thread blocks to GPU cores does not always guarantee the best performance, and identifying the proper number of thread blocks per GPU core remains a major challenge. Despite these studies, most existing architectural techniques cannot be applied directly to current GPU hardware, and the optimal number of thread blocks can vary significantly with the target GPU and application characteristics. To solve these problems, this study proposes a just-in-time thread-block-number adjustment system, referred to as the CTA Limiter, which uses CUDA binary modification on top of the LLVM compiler framework to dynamically maximize GPU performance on real GPUs without reprogramming. The framework gradually reduces the number of concurrent thread blocks of target CUDA workloads by allocating extra shared memory, and compares each execution time with that of the previous version to automatically identify the optimal number of co-running thread blocks per GPU core. The results show meaningful performance improvements, averaging 30%, 40%, and 44% on the GTX 960, GTX 1050, and GTX 1080 Ti, respectively.
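
The occupancy-throttling mechanism the abstract describes, reducing the number of co-resident thread blocks per SM by reserving extra (unused) shared memory, can be sketched directly in CUDA. The sketch below is not the paper's LLVM-based binary instrumentation; it is a minimal host-side illustration of the same idea, and the kernel workload, the block size, and the search loop are illustrative assumptions.

#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical stand-in for a real CUDA workload. The extern shared
// array is never used; the dynamic shared memory requested at launch
// exists only to limit how many blocks can reside on one SM.
__global__ void workload(float *out, const float *in, int n) {
    extern __shared__ char pad[];
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i] * 2.0f + 1.0f;
}

// Time one launch that requests padBytes of dynamic shared memory per block.
static float timeLaunch(size_t padBytes, float *out, const float *in, int n) {
    dim3 block(256), grid((n + block.x - 1) / block.x);
    cudaEvent_t beg, end;
    cudaEventCreate(&beg);
    cudaEventCreate(&end);
    cudaEventRecord(beg);
    workload<<<grid, block, padBytes>>>(out, in, n);
    cudaEventRecord(end);
    cudaEventSynchronize(end);
    float ms = 0.0f;
    cudaEventElapsedTime(&ms, beg, end);
    cudaEventDestroy(beg);
    cudaEventDestroy(end);
    return ms;
}

int main() {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);

    const int n = 1 << 24;
    float *in, *out;
    cudaMalloc(&in, n * sizeof(float));
    cudaMalloc(&out, n * sizeof(float));
    cudaMemset(in, 0, n * sizeof(float));

    float bestMs = 1e30f;
    int bestLimit = 0;
    // Gradually tighten the resident-block limit, as the CTA Limiter does:
    // padding each block with sharedMemPerMultiprocessor / limit bytes lets
    // at most `limit` blocks share one SM's shared memory. (Register and
    // thread limits may cap occupancy even lower; this sketch ignores them.)
    for (int limit = 8; limit >= 1; --limit) {
        size_t pad = prop.sharedMemPerMultiprocessor / limit;
        if (pad > prop.sharedMemPerBlock) break;  // beyond the per-block cap
        float ms = timeLaunch(pad, out, in, n);
        printf("limit=%d blocks/SM, pad=%zu B: %.3f ms\n", limit, pad, ms);
        if (ms < bestMs) { bestMs = ms; bestLimit = limit; }
    }
    printf("best: %d blocks/SM (%.3f ms)\n", bestLimit, bestMs);

    cudaFree(in);
    cudaFree(out);
    return 0;
}

The timing loop mirrors the paper's compare-with-the-previous-version search in spirit only; the actual CTA Limiter injects the extra shared-memory allocation into the CUDA binary just in time, so no source-level change like this is required.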
Appears in Collections
Seoul College of Engineering > Seoul School of Computer Software > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Park, Yongjun
Seoul College of Engineering (Seoul School of Computer Software)
