Detailed Information

Cited 0 times in Web of Science; cited 0 times in Scopus

Analysis of Sub-Routines in NVIDIA cuBLAS Library for a series of Matrix-Matrix Multiplications in Transformer

Authors
Kim, D.; Kim, I.; Kim, J.
Issue Date
Oct-2022
Publisher
IEEE Computer Society
Keywords
cuBLAS; General Matrix-Matrix Multiplication; GEMM; Multi-Head Attention; MHA; Transformer
Citation
International Conference on ICT Convergence, v.2022-October, pp. 618-620
Pages
3
Journal Title
International Conference on ICT Convergence
Volume
2022-October
Start Page
618
End Page
620
URI
https://scholarworks.bwise.kr/cau/handle/2019.sw.cau/61186
DOI
10.1109/ICTC55196.2022.9952498
ISSN
2162-1233
Abstract
General matrix-matrix multiplication (GEMM) is a key operation in a variety of areas such as computational science, data science, and machine learning. In transformers, which are foundation models, Multi-Head Attention (MHA) involves a series of matrix-matrix multiplications. To perform MHA on GPUs, we need to exploit highly optimized GEMM sub-routines provided by the hardware vendor. On NVIDIA GPUs, the cuBLAS library is provided to support the basic linear algebra subprograms (BLAS). In this paper, we examine and analyze several sub-routines for handling the series of matrix-matrix multiplications used in the transformer model on NVIDIA GPUs. © 2022 IEEE.
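For context, the series of matrix-matrix multiplications inside MHA that the abstract refers to can be sketched in NumPy. On NVIDIA GPUs, each of the two batched products below would typically be dispatched to a cuBLAS batched GEMM routine such as cublasGemmStridedBatchedEx; the shapes and code here are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Illustrative shapes (assumptions, not from the paper):
# batch b, heads h, sequence length s, per-head dimension d.
b, h, s, d = 2, 4, 8, 16
rng = np.random.default_rng(0)
Q = rng.standard_normal((b, h, s, d))
K = rng.standard_normal((b, h, s, d))
V = rng.standard_normal((b, h, s, d))

# First batched GEMM: scores = Q @ K^T, one (s x s) product
# per (batch, head) pair, scaled by 1/sqrt(d).
scores = Q @ K.transpose(0, 1, 3, 2) / np.sqrt(d)

# Row-wise softmax over the last axis (numerically stabilized).
w = np.exp(scores - scores.max(axis=-1, keepdims=True))
w /= w.sum(axis=-1, keepdims=True)

# Second batched GEMM: output = softmax(scores) @ V.
out = w @ V
print(out.shape)  # (2, 4, 8, 16)
```

Each `@` above is a batch of b*h independent small GEMMs with identical shapes, which is exactly the pattern that strided-batched cuBLAS routines are designed to handle in a single call.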
Files in This Item
There are no files associated with this item.
Appears in
Collections
College of Software > School of Computer Science and Engineering > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Kim, Jinsung
College of Software (School of Software)
