Detailed Information


Regularizing Transformer-based Acoustic Models by Penalizing Attention Weights for Robust Speech Recognition

Authors
Lee, Mun-Hak; Lee, Sang-Eon; Seong, Ju-Seok; Chang, Joon-Hyuk; Kwon, Haeyoung; Park, Chanhee
Issue Date
Sep-2022
Publisher
International Speech Communication Association
Keywords
Acoustic Model; HMM based hybrid ASR; Sparse Feature; Speech Recognition; Transformer
Citation
Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH, v.2022-September, pp. 56-60
Indexed
SCOPUS
Journal Title
Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
Volume
2022-September
Start Page
56
End Page
60
URI
https://scholarworks.bwise.kr/hanyang/handle/2021.sw.hanyang/173085
DOI
10.21437/Interspeech.2022-362
ISSN
2308-457X
Abstract
The application of deep learning has significantly advanced the performance of automatic speech recognition (ASR) systems. An ASR system is made up of various components, such as the acoustic model (AM), language model (LM), and lexicon. Generally, the AM has benefited the most from deep learning. Numerous types of neural network-based AMs have been studied, but the structure that has received the most attention in recent years is the Transformer [1]. In this study, we demonstrate that the Transformer model is more vulnerable to input sparsity than the convolutional neural network (CNN) and analyze the cause of the performance degradation through the structural characteristics of the Transformer. Moreover, we propose a novel regularization method that makes the Transformer model robust against input sparsity. The proposed sparsity regularization method directly regulates attention weights using silence label information from forced alignment and has the advantage of requiring neither additional module training nor excessive computation. We tested the proposed method on five benchmarks and observed an average relative error rate reduction (RERR) of 4.7%.
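The abstract does not give the exact loss formulation, so the following is only a minimal PyTorch sketch of the general idea: penalizing the attention mass a Transformer acoustic model places on frames that a forced alignment labels as silence. The function and argument names (`attention_sparsity_penalty`, `silence_mask`, `lambda_sil`) and the mean-pooled penalty form are illustrative assumptions, not the paper's definitive recipe.

```python
# Hedged sketch: regularize attention weights with silence labels from a
# forced alignment. This is an assumed formulation for illustration only.
import torch

def attention_sparsity_penalty(attn_weights: torch.Tensor,
                               silence_mask: torch.Tensor,
                               lambda_sil: float = 0.1) -> torch.Tensor:
    """
    attn_weights: (batch, heads, T_query, T_key) softmax attention weights.
    silence_mask: (batch, T_key) boolean mask, True where forced alignment
                  labeled the frame as silence.
    Returns a scalar penalty: mean attention mass on silence frames, scaled.
    """
    # Broadcast the mask over heads and query positions: (batch, 1, 1, T_key).
    mask = silence_mask[:, None, None, :].to(attn_weights.dtype)
    # Attention mass each query position spends on silence key frames.
    silence_mass = (attn_weights * mask).sum(dim=-1)  # (batch, heads, T_query)
    return lambda_sil * silence_mass.mean()

# Usage: add the penalty to the usual ASR training loss.
batch, heads, T = 2, 4, 50
attn = torch.softmax(torch.randn(batch, heads, T, T), dim=-1)
sil = torch.rand(batch, T) > 0.7      # hypothetical silence labels
asr_loss = torch.tensor(1.0)          # placeholder for the main ASR loss
total_loss = asr_loss + attention_sparsity_penalty(attn, sil)
```

Because the penalty operates directly on already-computed attention weights, it adds no trainable parameters and only a masked sum per layer, which is consistent with the abstract's claim of no additional module training or excessive computation.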
Appears in
Collections
College of Engineering (Seoul) > School of Electronic Engineering (Seoul) > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Chang, Joon-Hyuk
COLLEGE OF ENGINEERING (SCHOOL OF ELECTRONIC ENGINEERING)
