ELF: Maximizing memory-level parallelism for GPUs with coordinated warp and fetch scheduling
- Authors
- Park, J.J.K.; Park, Yongjun; Mahlke, S.
- Issue Date
- 2015
- Publisher
- Association for Computing Machinery
- Keywords
- Graphics Processing Unit; Compiler; Memory-level Parallelism; Warp Scheduling
- Citation
- International Conference for High Performance Computing, Networking, Storage and Analysis, SC, v.15-20-November-2015
- Journal Title
- International Conference for High Performance Computing, Networking, Storage and Analysis, SC
- Volume
- 15-20-November-2015
- URI
- https://scholarworks.bwise.kr/hongik/handle/2020.sw.hongik/13851
- DOI
- 10.1145/2807591.2807598
- Abstract
- Graphics processing units (GPUs) are increasingly utilized as throughput engines in modern computer systems. GPUs rely on fast context switching between thousands of threads to hide long-latency operations; however, they still stall on memory operations. To minimize these stalls, memory operations should be overlapped with other operations as much as possible to maximize memory-level parallelism (MLP). In this paper, we propose Earliest Load First (ELF) warp scheduling, which maximizes MLP by giving higher priority to the warps that have the fewest instructions remaining before their next memory load. ELF uses the same warp priority for fetch scheduling so that warp and fetch scheduling are coordinated. We also show that ELF realizes its full benefits when there are fewer memory conflicts and fetch stalls. Evaluations show that ELF improves performance by 4.1% on its own and achieves a total improvement of 11.9% when combined with other techniques, relative to the commonly used greedy-then-oldest scheduling.
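The abstract's priority rule can be illustrated with a minimal sketch. The snippet below is not the paper's implementation; it is a hypothetical model in which each warp exposes its remaining instruction stream, and the scheduler (here named `elf_pick`, an invented helper) selects the ready warp whose next memory load is fewest instructions away, breaking ties by warp age:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Warp:
    wid: int           # warp id; smaller id = older warp (illustrative)
    instrs: List[str]  # remaining instruction stream, e.g. "alu" / "load"
    pc: int = 0        # index of the next instruction to issue

    def dist_to_next_load(self) -> int:
        """Count instructions until the next memory load.

        Warps with no load ahead get the largest distance, so they
        are deprioritized under Earliest Load First.
        """
        for d, op in enumerate(self.instrs[self.pc:]):
            if op == "load":
                return d
        return len(self.instrs)  # no load remaining: lowest priority

def elf_pick(ready_warps: List[Warp]) -> Warp:
    """Earliest-Load-First selection: prefer the warp that reaches its
    next load soonest; break ties with the oldest warp (smallest id)."""
    return min(ready_warps, key=lambda w: (w.dist_to_next_load(), w.wid))

warps = [
    Warp(0, ["alu", "alu", "load", "alu"]),
    Warp(1, ["alu", "load", "alu"]),
    Warp(2, ["load", "alu", "alu"]),
]
print(elf_pick(warps).wid)  # warp 2 issues first: its load is nearest
```

Issuing the nearest-load warp first gets its memory request in flight earliest, so its latency overlaps with the remaining warps' compute; per the abstract, the same priority order also drives fetch scheduling so the two schedulers do not work at cross purposes.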
- Files in This Item
- There are no files associated with this item.
- Appears in Collections
- College of Engineering > School of Electronic & Electrical Engineering > 1. Journal Articles
Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.