Detailed Information


Analysis of Several Sparse Formats for Matrices used in Sparse-Matrix Dense-Matrix Multiplication for Machine Learning on GPUs

Full metadata record
dc.contributor.author: Kim, D.
dc.contributor.author: Kim, J.
dc.date.accessioned: 2023-03-08T05:09:48Z
dc.date.available: 2023-03-08T05:09:48Z
dc.date.issued: 2022-10
dc.identifier.issn: 2162-1233
dc.identifier.uri: https://scholarworks.bwise.kr/cau/handle/2019.sw.cau/61181
dc.description.abstract: Sparse-matrix dense-matrix multiplication (SpMM) takes one sparse matrix and one dense matrix as inputs and produces one dense matrix as its output. It plays a vital role in fields such as deep neural networks, graph neural networks, and graph analysis. CUDA, NVIDIA's parallel computing platform, provides the cuSPARSE library to support Basic Linear Algebra Subroutines (BLAS) on sparse matrices, including SpMM. In sparse matrices, zero values can be discarded from storage and computation to accelerate execution. To represent only the non-zero values, the cuSPARSE library supports several sparse matrix formats, such as COO (COOrdinate), CSR (Compressed Sparse Row), and CSC (Compressed Sparse Column). In addition, since 3rd-generation Tensor Cores were introduced with the Ampere architecture, CUDA has provided the cuSPARSELt library for SpMM whose sparse matrix satisfies a 2:4 sparsity pattern, i.e., approximately 50% sparsity, which can occur in machine learning. In this paper, we compare the cuSPARSE and cuSPARSELt libraries for SpMM on sparse matrices with a 2:4 sparsity pattern (50% sparsity). Furthermore, we compare the performance of the three cuSPARSE formats for SpMM at different sparsity levels: 75%, 87.5%, and 99%. © 2022 IEEE.
dc.format.extent: 3
dc.language: English
dc.language.iso: ENG
dc.publisher: IEEE Computer Society
dc.title: Analysis of Several Sparse Formats for Matrices used in Sparse-Matrix Dense-Matrix Multiplication for Machine Learning on GPUs
dc.type: Article
dc.identifier.doi: 10.1109/ICTC55196.2022.9952814
dc.identifier.bibliographicCitation: International Conference on ICT Convergence, v.2022-October, pp. 629-631
dc.description.isOpenAccess: N
dc.identifier.scopusid: 2-s2.0-85143250704
dc.citation.endPage: 631
dc.citation.startPage: 629
dc.citation.title: International Conference on ICT Convergence
dc.citation.volume: 2022-October
dc.type.docType: Conference Paper
dc.publisher.location: United States
dc.subject.keywordAuthor: cuSPARSE
dc.subject.keywordAuthor: cuSPARSELt
dc.subject.keywordAuthor: GPUs
dc.subject.keywordAuthor: Machine Learning
dc.subject.keywordAuthor: Sparse-matrix dense-matrix multiplication
dc.subject.keywordAuthor: SpMM
dc.description.journalRegisteredClass: scopus
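
As a concrete illustration of the CSR format and the cuSPARSE SpMM routine described in the abstract, the sketch below runs one single-precision SpMM (C = alpha * A * B + beta * C) through cuSPARSE's generic API. It is not code from the paper: the 4x4 matrix, the row-major layout, the file name, and the build line are illustrative assumptions, and status checking is omitted for brevity.

// spmm_csr_sketch.c -- hypothetical example, not code from the paper.
// Computes C = alpha * A * B + beta * C, where A is sparse (CSR) and
// B and C are dense, via the cuSPARSE generic SpMM API.
// Assumed build line: nvcc spmm_csr_sketch.c -lcusparse -o spmm_csr
#include <cuda_runtime_api.h>
#include <cusparse.h>
#include <stdio.h>

int main(void) {
    // Toy 4x4 sparse matrix A (9 non-zeros; sizes are illustrative):
    //         | 1 0 2 3 |
    //     A = | 0 4 0 0 |
    //         | 5 0 6 7 |
    //         | 0 8 0 9 |
    // CSR stores three arrays: row offsets, column indices, values.
    // COO would store explicit (row, col, value) triples instead;
    // CSC compresses along columns rather than rows.
    enum { M = 4, K = 4, N = 3, NNZ = 9 };
    int   hOffsets[M + 1] = { 0, 3, 4, 7, 9 };
    int   hColumns[NNZ]   = { 0, 2, 3, 1, 0, 2, 3, 1, 3 };
    float hValues[NNZ]    = { 1, 2, 3, 4, 5, 6, 7, 8, 9 };
    float hB[K * N], hC[M * N];
    for (int i = 0; i < K * N; ++i) hB[i] = 1.0f;  // dense input (all ones)
    for (int i = 0; i < M * N; ++i) hC[i] = 0.0f;  // dense output
    float alpha = 1.0f, beta = 0.0f;

    // Copy everything to the device.
    int *dOffsets, *dColumns; float *dValues, *dB, *dC;
    cudaMalloc((void**)&dOffsets, sizeof(hOffsets));
    cudaMalloc((void**)&dColumns, sizeof(hColumns));
    cudaMalloc((void**)&dValues,  sizeof(hValues));
    cudaMalloc((void**)&dB, sizeof(hB));
    cudaMalloc((void**)&dC, sizeof(hC));
    cudaMemcpy(dOffsets, hOffsets, sizeof(hOffsets), cudaMemcpyHostToDevice);
    cudaMemcpy(dColumns, hColumns, sizeof(hColumns), cudaMemcpyHostToDevice);
    cudaMemcpy(dValues,  hValues,  sizeof(hValues),  cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB, sizeof(hB), cudaMemcpyHostToDevice);
    cudaMemcpy(dC, hC, sizeof(hC), cudaMemcpyHostToDevice);

    cusparseHandle_t handle;
    cusparseCreate(&handle);

    // Descriptors: sparse A in CSR, dense B and C in row-major order.
    cusparseSpMatDescr_t matA;
    cusparseCreateCsr(&matA, M, K, NNZ, dOffsets, dColumns, dValues,
                      CUSPARSE_INDEX_32I, CUSPARSE_INDEX_32I,
                      CUSPARSE_INDEX_BASE_ZERO, CUDA_R_32F);
    cusparseDnMatDescr_t matB, matC;
    cusparseCreateDnMat(&matB, K, N, N, dB, CUDA_R_32F, CUSPARSE_ORDER_ROW);
    cusparseCreateDnMat(&matC, M, N, N, dC, CUDA_R_32F, CUSPARSE_ORDER_ROW);

    // Query the workspace size, allocate it, and run SpMM.
    // (Real code should check every cusparseStatus_t return value.)
    size_t bufferSize = 0; void *dBuffer = NULL;
    cusparseSpMM_bufferSize(handle,
                            CUSPARSE_OPERATION_NON_TRANSPOSE,
                            CUSPARSE_OPERATION_NON_TRANSPOSE,
                            &alpha, matA, matB, &beta, matC, CUDA_R_32F,
                            CUSPARSE_SPMM_ALG_DEFAULT, &bufferSize);
    cudaMalloc(&dBuffer, bufferSize);
    cusparseSpMM(handle,
                 CUSPARSE_OPERATION_NON_TRANSPOSE,
                 CUSPARSE_OPERATION_NON_TRANSPOSE,
                 &alpha, matA, matB, &beta, matC, CUDA_R_32F,
                 CUSPARSE_SPMM_ALG_DEFAULT, dBuffer);

    // Fetch and print the 4x3 result.
    cudaMemcpy(hC, dC, sizeof(hC), cudaMemcpyDeviceToHost);
    for (int i = 0; i < M; ++i) {
        for (int j = 0; j < N; ++j) printf("%6.1f", hC[i * N + j]);
        printf("\n");
    }

    cusparseDestroySpMat(matA);
    cusparseDestroyDnMat(matB);
    cusparseDestroyDnMat(matC);
    cusparseDestroy(handle);
    cudaFree(dOffsets); cudaFree(dColumns); cudaFree(dValues);
    cudaFree(dB); cudaFree(dC); cudaFree(dBuffer);
    return 0;
}

Changing the storage format only changes the sparse-descriptor call (cusparseCreateCoo or cusparseCreateCsc in place of cusparseCreateCsr), which is what allows a three-format comparison like the paper's on a single code path. The 2:4 structured-sparsity measurements instead go through the separate cuSPARSELt library (cusparseLtMatmul), whose setup differs from the API shown here.
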
Files in This Item
There are no files associated with this item.
Appears in Collections
College of Software > School of Computer Science and Engineering > 1. Journal Articles



Related Researcher

Kim, Jinsung
College of Software (School of Computer Science and Engineering)
