Intra-ensemble: A New Method for Combining Intermediate Outputs in Transformer-based Automatic Speech Recognition
- Authors
- Kim, DoHee; Choi, Jieun; Chang, Joon-Hyuk
- Issue Date
- Aug-2023
- Publisher
- International Speech Communication Association
- Keywords
- ensemble; speech recognition
- Citation
- Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH, vol. 2023-August, pp. 2203-2207
- Indexed
- SCOPUS
- Journal Title
- Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
- Volume
- 2023-August
- Start Page
- 2203
- End Page
- 2207
- URI
- https://scholarworks.bwise.kr/hanyang/handle/2021.sw.hanyang/191795
- DOI
- 10.21437/Interspeech.2023-1255
- ISSN
- 2308-457X
- Abstract
- Deep learning models employ various regularization techniques to prevent overfitting and enhance generalization. In particular, the auxiliary loss proposed for connectionist temporal classification (CTC) models demonstrated that intermediate predictions can be useful by enabling sub-models to recognize speech accurately. We propose a new method, Intra-ensemble, which combines these accurate intermediate outputs into a single output for both training and inference, weighting the importance of each intermediate layer with learnable parameters. Our approach is applicable to CTC models, attention-based encoder-decoder models, and transducer structures, and it demonstrated performance improvements of 13.5%, 3.0%, and 4.1%, respectively, on the LibriSpeech evaluation. Furthermore, through various analytical experiments, we found that the sub-models contributed significantly to the performance improvement.
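The abstract describes combining intermediate outputs into one output, with the importance of each intermediate layer controlled by learnable parameters. A minimal NumPy sketch of that idea, assuming the intermediate outputs are per-layer logit tensors and that the learnable importances are normalized with a softmax (all names, shapes, and the softmax choice are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical intermediate outputs from 3 sub-model exit points,
# shape (num_layers, time, vocab). Random values stand in for real logits.
rng = np.random.default_rng(0)
intermediate_logits = rng.normal(size=(3, 5, 10))

# Learnable scalar importance per intermediate layer (fixed here for
# illustration; in training these would be updated by backpropagation).
layer_weights = np.array([0.2, 0.5, 1.0])
alphas = softmax(layer_weights)  # convex mixture weights summing to 1

# Single combined output: importance-weighted sum over the layer axis.
combined = np.tensordot(alphas, intermediate_logits, axes=1)  # (time, vocab)
print(combined.shape)
```

Because the same combined output is used for both training and inference, the mixture weights are learned jointly with the sub-models rather than tuned after the fact.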
- Appears in
Collections - College of Engineering (Seoul) > Division of Electronic Engineering (Seoul) > 1. Journal Articles