Detailed Information


Understanding and Improving Knowledge Distillation for Quantization-Aware Training of Large Transformer Encoders

Full metadata record
DC Field                                   Value
dc.contributor.author                      Kim, Minsoo
dc.contributor.author                      Lee, Sihwa
dc.contributor.author                      Hong, Sukjin
dc.contributor.author                      Chang, Du-Seong
dc.contributor.author                      Choi, Jung wook
dc.date.accessioned                        2023-05-03T09:39:33Z
dc.date.available                          2023-05-03T09:39:33Z
dc.date.created                            2023-04-06
dc.date.issued                             2022-12
dc.identifier.uri                          https://scholarworks.bwise.kr/hanyang/handle/2021.sw.hanyang/184845
dc.description.abstract                    Knowledge distillation (KD) has been a ubiquitous method for model compression, strengthening the capability of a lightweight model with knowledge transferred from the teacher. In particular, KD has been employed in quantization-aware training (QAT) of Transformer encoders like BERT to improve the accuracy of the student model with reduced-precision weight parameters. However, little is understood about which of the various KD approaches best fits the QAT of Transformers. In this work, we provide an in-depth analysis of the mechanism of KD on attention recovery of quantized large Transformers. In particular, we reveal that the previously adopted MSE loss on the attention score is insufficient for recovering the self-attention information. Therefore, we propose two KD methods: attention-map and attention-output losses. Furthermore, we explore the unification of both losses to address the task-dependent preference between the attention-map and attention-output losses. The experimental results on various Transformer encoder models demonstrate that the proposed KD methods achieve state-of-the-art accuracy for QAT with sub-2-bit weight quantization.
dc.language                                English
dc.language.iso                            en
dc.publisher                               Association for Computational Linguistics (ACL)
dc.title                                   Understanding and Improving Knowledge Distillation for Quantization-Aware Training of Large Transformer Encoders
dc.type                                    Article
dc.contributor.affiliatedAuthor            Choi, Jung wook
dc.identifier.doi                          10.48550/arXiv.2211.11014
dc.identifier.scopusid                     2-s2.0-85149442817
dc.identifier.bibliographicCitation        Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022, pp. 6713-6725
dc.relation.isPartOf                       Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022
dc.citation.title                          Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022
dc.citation.startPage                      6713
dc.citation.endPage                        6725
dc.type.rims                               ART
dc.type.docType                            Conference Paper
dc.description.journalClass                1
dc.description.isOpenAccess               N
dc.description.journalRegisteredClass      scopus
dc.subject.keywordPlus                     Computational linguistics
dc.subject.keywordPlus                     Personnel training
dc.subject.keywordPlus                     Signal encoding
dc.subject.keywordPlus                     Distillation
dc.subject.keywordPlus                     Best fit
dc.subject.keywordPlus                     Distillation method
dc.subject.keywordPlus                     In-depth analysis
dc.subject.keywordPlus                     Model compression
dc.subject.keywordPlus                     Quantisation
dc.subject.keywordPlus                     Reduced precision
dc.subject.keywordPlus                     State of the art
dc.subject.keywordPlus                     Student Modeling
dc.subject.keywordPlus                     Teachers'
dc.subject.keywordPlus                     Weight parameters
dc.identifier.url                          https://arxiv.org/abs/2211.11014
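
The abstract above proposes attention-map and attention-output distillation losses for QAT of Transformer encoders. The following is a minimal PyTorch sketch of how such losses can be combined with a task loss. It is an illustration only: the tensor shapes, the KL/MSE reduction choices, and the alpha/beta weighting knobs are assumptions made here for exposition, not the authors' reference implementation (the paper itself explores unifying the two losses to handle task-dependent preference).

# Minimal sketch of attention-based KD losses, assuming:
#   attention-map loss    = KL divergence between teacher/student softmax attention probabilities
#   attention-output loss = MSE between per-layer self-attention outputs
# Shapes, reductions, and the alpha/beta weights are illustrative assumptions.
import torch
import torch.nn.functional as F


def attention_map_loss(student_probs, teacher_probs, eps=1e-12):
    """KL(teacher || student) over attention probabilities.

    Both tensors: (batch, heads, seq_len, seq_len), rows already softmax-normalized.
    """
    kl = teacher_probs * (torch.log(teacher_probs + eps) - torch.log(student_probs + eps))
    return kl.sum(dim=-1).mean()


def attention_output_loss(student_outputs, teacher_outputs):
    """MSE between self-attention layer outputs, (batch, seq_len, hidden)."""
    return F.mse_loss(student_outputs, teacher_outputs)


def distillation_loss(student_attn, teacher_attn, student_out, teacher_out,
                      task_loss, alpha=1.0, beta=1.0):
    """Combine the task loss with the two attention KD terms, layer by layer.

    student_attn / teacher_attn: lists of per-layer attention-probability tensors.
    student_out / teacher_out: lists of per-layer self-attention output tensors.
    alpha, beta are hypothetical weighting knobs for the map and output terms.
    """
    map_term = sum(attention_map_loss(s, t) for s, t in zip(student_attn, teacher_attn))
    out_term = sum(attention_output_loss(s, t) for s, t in zip(student_out, teacher_out))
    return task_loss + alpha * map_term + beta * out_term

In a QAT setting, the student would be the quantized model (e.g., with sub-2-bit weights) and the teacher the full-precision model; the combined loss would replace the plain task loss during fine-tuning.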
Appears in Collections
College of Engineering (Seoul) > School of Electronic Engineering (Seoul) > 1. Journal Articles

Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Choi, Jung wook
College of Engineering (School of Electronic Engineering)
