Detailed Information

Cited 0 times in Web of Science; cited 3 times in Scopus

Fine Grain Cache Partitioning Using Per-Instruction Working Blocks

Full metadata record
DC Field	Value	Language
dc.contributor.author	Park, J.J.K.	-
dc.contributor.author	Park, Yongjun	-
dc.contributor.author	Mahlke, S.	-
dc.date.available	2021-03-17T10:45:35Z	-
dc.date.created	2021-02-26	-
dc.date.issued	2015	-
dc.identifier.issn	1089-795X	-
dc.identifier.uri	https://scholarworks.bwise.kr/hongik/handle/2020.sw.hongik/13777	-
dc.description.abstract	A traditional least-recently used (LRU) cache replacement policy fails to achieve the performance of the optimal replacement policy when cache blocks with diverse reuse characteristics interfere with each other. When multiple applications share a cache, it is often partitioned among the applications because cache blocks show similar reuse characteristics within each application. In this paper, we extend the idea to a single application by viewing a cache as a shared resource between individual memory instructions. To that end, we propose Instruction-based LRU (ILRU), a fine grain cache partitioning that way-partitions individual cache sets based on per-instruction working blocks, which are cache blocks required by an instruction to satisfy all the reuses within a set. In ILRU, a memory instruction steals a block from another only when it requires more blocks than it currently has. Otherwise, a memory instruction victimizes among the cache blocks inserted by itself. Experiments show that ILRU can improve the cache performance in all levels of cache, reducing the number of misses by an average of 7.0% for L1, 9.1% for L2, and 8.7% for L3, which results in a geometric mean performance improvement of 5.3%. ILRU for a three-level cache hierarchy imposes a modest 1.3% storage overhead over the total cache size.	-
dc.publisher	IEEE COMPUTER SOC	-
dc.title	Fine Grain Cache Partitioning Using Per-Instruction Working Blocks	-
dc.type	Article	-
dc.contributor.affiliatedAuthor	Park, Yongjun	-
dc.identifier.doi	10.1109/PACT.2015.11	-
dc.identifier.scopusid	2-s2.0-84975474764	-
dc.identifier.wosid	000378942700026	-
dc.identifier.bibliographicCitation	Parallel Architectures and Compilation Techniques - Conference Proceedings, PACT, pp. 305-316	-
dc.relation.isPartOf	Parallel Architectures and Compilation Techniques - Conference Proceedings, PACT	-
dc.citation.title	Parallel Architectures and Compilation Techniques - Conference Proceedings, PACT	-
dc.citation.startPage	305	-
dc.citation.endPage	316	-
dc.type.rims	ART	-
dc.type.docType	Proceedings Paper	-
dc.description.journalClass	1	-
dc.description.journalRegisteredClass	scopus	-
dc.relation.journalResearchArea	Computer Science	-
dc.relation.journalResearchArea	Engineering	-
dc.relation.journalWebOfScienceCategory	Computer Science, Hardware & Architecture	-
dc.relation.journalWebOfScienceCategory	Computer Science, Theory & Methods	-
dc.relation.journalWebOfScienceCategory	Engineering, Electrical & Electronic	-
dc.subject.keywordPlus	HIGH-PERFORMANCE	-
dc.subject.keywordPlus	REPLACEMENT	-
dc.subject.keywordPlus	PREDICTION	-
dc.subject.keywordPlus	POLICIES	-
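
The abstract above describes ILRU's victim-selection rule: on a miss, the inserting memory instruction evicts one of its own blocks unless it needs more blocks in the set than it currently holds, in which case it steals a block from another instruction. The following is a minimal sketch of that rule only, assuming a toy single-set model in which each instruction's working-block count is already known; the paper estimates this count dynamically, and the class and method names here are illustrative, not taken from the paper.

from collections import OrderedDict

class ILRUSetSketch:
    """Toy model of one cache set under an ILRU-style policy (illustrative only).

    Each resident block remembers the memory instruction (PC) that inserted it.
    working_blocks[pc] stands in for the per-instruction working-block estimate;
    how ILRU derives that estimate at run time is omitted here.
    """

    def __init__(self, num_ways, working_blocks):
        self.num_ways = num_ways
        self.working_blocks = working_blocks   # pc -> estimated blocks needed in this set
        self.blocks = OrderedDict()            # tag -> inserting pc, kept in LRU order

    def access(self, pc, tag):
        """Return True on a hit; on a miss, choose a victim and insert `tag`."""
        if tag in self.blocks:
            self.blocks.move_to_end(tag)       # hit: refresh recency (MRU at the end)
            return True
        if len(self.blocks) >= self.num_ways:
            owned = [t for t, o in self.blocks.items() if o == pc]   # pc's blocks, LRU first
            demand = self.working_blocks.get(pc, 1)
            if len(owned) < demand and len(owned) < len(self.blocks):
                # The instruction needs more blocks than it currently holds:
                # steal the least-recently-used block inserted by another
                # instruction (which block to steal is an assumption of this sketch).
                victim = next(t for t, o in self.blocks.items() if o != pc)
            else:
                # Otherwise victimize among the blocks it inserted itself.
                victim = owned[0] if owned else next(iter(self.blocks))
            del self.blocks[victim]
        self.blocks[tag] = pc                  # insert the new block, owned by this instruction
        return False

For example, with ILRUSetSketch(num_ways=4, working_blocks={0x400: 3, 0x408: 1}), the instruction at 0x400 may steal ways until it holds three blocks, while 0x408 is confined to recycling the single block it inserted; the hypothetical working_blocks values are placeholders for the paper's dynamically measured per-instruction working blocks.
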
Files in This Item
There are no files associated with this item.
Appears in Collections
College of Engineering > School of Electronic & Electrical Engineering > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.
