Detailed Information


MaPHeA: A Framework for Lightweight Memory Hierarchy-aware Profile-guided Heap Allocation

Authors
Oh, Deok-Jae; Moon, Yaebin; Ham, Do Kyu; Ham, Tae Jun; Park, Yongjun; Lee, Jae W.; Ahn, Jung Ho; Lee, Eojin
Issue Date
Jan-2023
Publisher
Association for Computing Machinery (ACM)
Keywords
Profile-guided optimization; heap allocation; heterogeneous memory system; huge page
Citation
ACM Transactions on Embedded Computing Systems, v.22, no.1, pp. 1-28
Indexed
SCIE
SCOPUS
Journal Title
ACM Transactions on Embedded Computing Systems
Volume
22
Number
1
Start Page
1
End Page
28
URI
https://scholarworks.bwise.kr/hanyang/handle/2021.sw.hanyang/182375
DOI
10.1145/3527853
ISSN
1539-9087
Abstract
Hardware performance monitoring units (PMUs) are a standard feature of modern microprocessors, providing a rich set of microarchitectural event samplers. Recently, numerous profile-guided optimization (PGO) frameworks have exploited them to achieve much lower profiling overhead than conventional instrumentation-based frameworks. However, existing PGO frameworks mainly focus on optimizing the layout of binaries; they overlook the rich information the PMU provides about data access behavior across the memory hierarchy. We therefore propose MaPHeA, a lightweight Memory hierarchy-aware Profile-guided Heap Allocation framework applicable to both HPC and embedded systems. MaPHeA guides and applies optimized allocation of dynamically allocated heap objects with very low profiling overhead and without additional user intervention to improve application performance. To demonstrate its effectiveness, we apply MaPHeA to optimizing heap object allocation in an emerging DRAM-NVM heterogeneous memory system (HMS), to selective huge-page utilization, and to controlling the cacheability of objects with low temporal locality. In an HMS, by identifying frequently accessed heap objects and placing them in the fast DRAM region, MaPHeA improves the performance of memory-intensive graph-processing and Redis workloads by 56.0% on average over the default configuration that uses DRAM as a hardware-managed cache of slow NVM. By identifying large heap objects that cause frequent TLB misses and allocating them to huge pages, MaPHeA increases the performance of Redis read and update operations by 10.6% over Linux's transparent huge-page implementation. Also, by identifying objects that pollute the cache due to their low temporal locality and applying write-combining to them, MaPHeA improves the performance of the STREAM and RADIX workloads by 20.0% on average over the system without cacheability control.
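
The optimizations summarized above correspond to placement decisions a programmer could also apply by hand on Linux. The sketch below is not MaPHeA's interface, which derives such decisions automatically from PMU profiles; it only illustrates, under assumed NUMA node ids and object sizes, the two mechanisms the first two optimizations build on: binding a hot heap object to the fast DRAM node with libnuma, and backing a large, TLB-miss-heavy object with explicit 2 MiB huge pages via mmap. Cacheability (write-combining) control is omitted here because it requires kernel-level support.

/* Minimal illustrative sketch (not MaPHeA's actual API).
 * Assumes a Linux system with libnuma installed (link with -lnuma)
 * and huge pages reserved in advance (e.g., vm.nr_hugepages > 0). */
#include <numa.h>
#include <sys/mman.h>
#include <stdio.h>
#include <string.h>

#define FAST_DRAM_NODE 0   /* assumed NUMA node id of the fast DRAM region */

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA is not available on this system\n");
        return 1;
    }

    /* Frequently accessed ("hot") heap object: bind it to the fast DRAM
     * node instead of letting it reside in the slow NVM region. */
    size_t hot_size = 64UL << 20;   /* 64 MiB, illustrative */
    void *hot = numa_alloc_onnode(hot_size, FAST_DRAM_NODE);
    if (!hot) { perror("numa_alloc_onnode"); return 1; }
    memset(hot, 0, hot_size);       /* touch to materialize the pages */

    /* Large object causing frequent TLB misses: back it with explicit
     * 2 MiB huge pages rather than relying on transparent huge pages. */
    size_t big_size = 512UL << 20;  /* multiple of the huge-page size */
    void *big = mmap(NULL, big_size, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
    if (big == MAP_FAILED)
        perror("mmap(MAP_HUGETLB)");

    /* ... the application would use hot and big here ... */

    if (big != MAP_FAILED)
        munmap(big, big_size);
    numa_free(hot, hot_size);
    return 0;
}
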
Appears in Collections
College of Engineering (Seoul) > School of Computer Software (Seoul) > 1. Journal Articles



Related Researcher


Park, Yongjun
College of Engineering (Seoul), School of Computer Software (Seoul)
