KNOWLEDGE DISTILLATION FROM LANGUAGE MODEL TO ACOUSTIC MODEL: A HIERARCHICAL MULTI-TASK LEARNING APPROACH
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Lee, Mun-Hak | - |
dc.contributor.author | Chang, Joon-Hyuk | - |
dc.date.accessioned | 2023-02-21T05:29:09Z | - |
dc.date.available | 2023-02-21T05:29:09Z | - |
dc.date.created | 2023-02-08 | - |
dc.date.issued | 2022-05 | - |
dc.identifier.issn | 0736-7791 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/hanyang/handle/2021.sw.hanyang/182326 | - |
dc.description.abstract | The remarkable performance of pre-trained language models (LMs) using self-supervised learning has led to a major paradigm shift in natural language processing research. In line with these changes, improving the performance of speech recognition systems with massive deep-learning-based LMs is a major topic of speech recognition research. Among the various methods of applying LMs to speech recognition systems, in this paper, we focus on a cross-modal knowledge distillation method that transfers knowledge between two types of deep neural networks with different modalities. We propose an acoustic model structure with multiple auxiliary output layers for cross-modal distillation and demonstrate that the proposed method effectively compensates for the shortcomings of the existing label-interpolation-based distillation method. In addition, we extend the proposed method to a hierarchical distillation method using LMs trained on different units (senones, monophones, and subwords) and confirm its effectiveness through an ablation study. | - |
dc.language | English | - |
dc.language.iso | en | - |
dc.publisher | IEEE | - |
dc.title | KNOWLEDGE DISTILLATION FROM LANGUAGE MODEL TO ACOUSTIC MODEL: A HIERARCHICAL MULTI-TASK LEARNING APPROACH | - |
dc.type | Article | - |
dc.contributor.affiliatedAuthor | Chang, Joon-Hyuk | - |
dc.identifier.doi | 10.1109/ICASSP43922.2022.9747082 | - |
dc.identifier.scopusid | 2-s2.0-85131242598 | - |
dc.identifier.wosid | 000864187908140 | - |
dc.identifier.bibliographicCitation | 2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), pp.8392 - 8396 | - |
dc.relation.isPartOf | 2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP) | - |
dc.citation.title | 2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP) | - |
dc.citation.startPage | 8392 | - |
dc.citation.endPage | 8396 | - |
dc.type.rims | ART | - |
dc.type.docType | Proceedings Paper | - |
dc.description.journalClass | 1 | - |
dc.description.isOpenAccess | N | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Acoustics | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalResearchArea | Engineering | - |
dc.relation.journalWebOfScienceCategory | Acoustics | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Artificial Intelligence | - |
dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
dc.subject.keywordPlus | Computational linguistics | - |
dc.subject.keywordPlus | Deep neural networks | - |
dc.subject.keywordPlus | Learning algorithms | - |
dc.subject.keywordPlus | Learning systems | - |
dc.subject.keywordPlus | Natural language processing systems | - |
dc.subject.keywordPlus | Speech recognition | - |
dc.subject.keywordPlus | Acoustics model | - |
dc.subject.keywordPlus | Automatic speech recognition | - |
dc.subject.keywordPlus | Cross-modal | - |
dc.subject.keywordPlus | Cross-modal distillation | - |
dc.subject.keywordPlus | Distillation method | - |
dc.subject.keywordPlus | Knowledge distillation | - |
dc.subject.keywordPlus | Language model | - |
dc.subject.keywordPlus | Multitask learning | - |
dc.subject.keywordPlus | Performance | - |
dc.subject.keywordPlus | Speech recognition systems | - |
dc.subject.keywordPlus | Distillation | - |
dc.subject.keywordAuthor | automatic speech recognition | - |
dc.subject.keywordAuthor | knowledge distillation | - |
dc.subject.keywordAuthor | multi-task learning | - |
dc.subject.keywordAuthor | cross-modal distillation | - |
dc.subject.keywordAuthor | language model | - |
dc.subject.keywordAuthor | acoustic model | - |
dc.identifier.url | https://ieeexplore.ieee.org/document/9747082 | - |
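The abstract contrasts the proposed multi-task approach with the existing label-interpolation-based distillation baseline, in which the acoustic model's training target is a convex mix of the hard label and the LM teacher's soft distribution. The paper itself provides no code; the following is a generic sketch of that label-interpolation target and its cross-entropy loss (function names, the toy 3-class distributions, and the mixing weight `lam` are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the last axis
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def interpolated_targets(one_hot, teacher_probs, lam=0.5):
    """Label-interpolation distillation target: a convex combination of
    the hard (one-hot) label and the teacher LM's soft distribution."""
    return lam * one_hot + (1.0 - lam) * teacher_probs

def cross_entropy(student_logits, targets):
    # mean cross-entropy of the student's predictions against the mixed targets
    log_p = np.log(softmax(student_logits) + 1e-12)
    return -(targets * log_p).sum(axis=-1).mean()

# toy example: one token, 3 output classes, hard label = class 0
one_hot = np.array([[1.0, 0.0, 0.0]])
teacher = np.array([[0.7, 0.2, 0.1]])        # soft labels from the LM teacher (assumed values)
student_logits = np.array([[2.0, 0.5, 0.1]])

targets = interpolated_targets(one_hot, teacher, lam=0.5)  # -> [[0.85, 0.1, 0.05]]
loss = cross_entropy(student_logits, targets)
```

The paper's proposed alternative instead attaches multiple auxiliary output layers to the acoustic model, each distilling from an LM trained on a different unit (senone, monophone, subword), rather than folding the teacher distribution into a single interpolated label as above.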