Improving Domain-Specific ASR with LLM-Generated Contextual Descriptions
- Authors
- Suh, Jiwon; Na, Injae; Jung, Woohwan
- Issue Date
- Sep-2024
- Publisher
- International Speech Communication Association
- Keywords
- automatic speech recognition; contextual biasing; large language model
- Citation
- Conference of the International Speech Communication Association, v. Interspeech 2024, pp. 1255-1259
- Pages
- 5
- Indexed
- FOREIGN
- Journal Title
- Conference of the International Speech Communication Association
- Volume
- Interspeech 2024
- Start Page
- 1255
- End Page
- 1259
- URI
- https://scholarworks.bwise.kr/erica/handle/2021.sw.erica/121431
- DOI
- 10.21437/Interspeech.2024-377
- ISSN
- 2308-457X
- Abstract
- End-to-end automatic speech recognition (E2E ASR) systems have significantly improved speech recognition through training on extensive datasets. Despite these advancements, they still struggle to accurately recognize domain-specific words, such as proper nouns and technical terminology. To address this problem, we propose a method to utilize the state-of-the-art Whisper without modifying its architecture, preserving its generalization performance while enabling it to leverage descriptions effectively. Moreover, we propose two additional training techniques to improve domain-specific ASR: decoder fine-tuning and context perturbation. We also propose a method that uses a Large Language Model (LLM) to generate descriptions from simple metadata when descriptions are unavailable. Our experiments demonstrate that the proposed methods notably enhance domain-specific ASR accuracy on real-life datasets, with LLM-generated descriptions outperforming human-crafted ones in effectiveness.
- Appears in Collections
- COLLEGE OF COMPUTING > DEPARTMENT OF ARTIFICIAL INTELLIGENCE > 1. Journal Articles
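
For context, below is a minimal sketch, not the authors' released code, of the general idea the abstract describes: using an LLM to turn simple metadata into a contextual description, then conditioning an unmodified Whisper model on that description at decoding time. The metadata fields, prompt wording, helper names, and the use of Whisper's `initial_prompt` mechanism are illustrative assumptions rather than details taken from the paper.

```python
# Sketch: contextual biasing of an unmodified Whisper model with an
# LLM-generated description. Illustrative only; field names, prompt text,
# and model sizes are assumptions.

import whisper  # openai-whisper package


def build_description_request(metadata: dict) -> str:
    """Turn simple metadata into an LLM prompt asking for a short
    contextual description that lists likely domain-specific terms."""
    return (
        "Write a one-paragraph description of a recording with the "
        "following metadata, mentioning proper nouns and technical terms "
        f"that may appear: title={metadata.get('title')}, "
        f"speaker={metadata.get('speaker')}, topic={metadata.get('topic')}"
    )


def generate_description(metadata: dict, llm) -> str:
    """`llm` is any callable mapping a prompt string to generated text,
    e.g. a thin wrapper around a chat-completion API (placeholder here)."""
    return llm(build_description_request(metadata))


def transcribe_with_context(audio_path: str, description: str) -> str:
    """Decode with the description prepended as Whisper's initial prompt,
    which biases decoding toward the vocabulary it mentions."""
    model = whisper.load_model("small")
    result = model.transcribe(audio_path, initial_prompt=description)
    return result["text"]
```

Note that this inference-only sketch does not cover the paper's additional training-time techniques (decoder fine-tuning and context perturbation), which the abstract reports as further improving domain-specific accuracy.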