Detailed Information

CTRL: Continual Representation Learning to Transfer Information of Pre-trained for WAV2VEC 2.0

Authors
Lee, Jae-Hong; Lee, Chae-Won; Choi, Jin-Seong; Chang, Joon-Hyuk; Seong, Woo Kyeong; Lee, Jeonghan
Issue Date
Sep-2022
Publisher
International Speech Communication Association
Keywords
continual learning; domain adaptation; representation learning; semi-supervised learning; speech recognition
Citation
Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH, v.2022-September, pp.3398 - 3402
Indexed
SCOPUS
Journal Title
Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
Volume
2022-September
Start Page
3398
End Page
3402
URI
https://scholarworks.bwise.kr/hanyang/handle/2021.sw.hanyang/173090
DOI
10.21437/Interspeech.2022-10063
ISSN
2308-457X
Abstract
Representation models such as WAV2VEC 2.0 (W2V2) achieve remarkable speech recognition performance by pre-training only on unlabeled data and fine-tuning on a small amount of labeled data. Training on datasets from multiple domains is crucial for such a model to obtain a richer representation. The conventional approach to handling multiple domains is to train a model from scratch on a merged dataset. However, representation learning requires excessive computation for pre-training, which becomes a severe problem as the size of the dataset increases. In this study, we present continual representation learning (CTRL), a framework that leverages continual learning methods to continually retrain a pre-trained representation model, transferring knowledge from the previous model without access to the historical dataset. The framework conducts continual pre-training of a pre-trained W2V2 using a continual learning method redesigned for self-supervised learning. To evaluate our framework, we continually pre-train W2V2 with CTRL in the following order: Librispeech, Wall Street Journal, and TED-LIUM V3. The results demonstrate that the proposed approach improves speech recognition performance on all three datasets compared with that of the baseline W2V2 pre-trained only on Librispeech.
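The abstract describes the general recipe of CTRL but not its exact objective. Below is a minimal, illustrative PyTorch sketch of that recipe, under assumptions: warm-start from the previous pre-trained model, train self-supervised on the new domain only, and add a regularizer that transfers knowledge from a frozen copy of the previous model so the historical dataset is never revisited. TinyEncoder, ssl_loss, the MSE transfer term, and the weight lam are hypothetical stand-ins for illustration, not the authors' implementation.

# Hedged sketch of the continual pre-training idea in the abstract.
# NOTE: the paper's "redesigned continual learning method" is not specified
# here; this shows one common realization (representation distillation from
# a frozen copy of the previous model). All names below are illustrative.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyEncoder(nn.Module):
    """Toy stand-in for the W2V2 feature encoder + context network."""
    def __init__(self, dim=64):
        super().__init__()
        self.conv = nn.Conv1d(1, dim, kernel_size=10, stride=5)
        self.proj = nn.Linear(dim, dim)

    def forward(self, wav):                      # wav: (batch, samples)
        z = self.conv(wav.unsqueeze(1))          # (batch, dim, frames)
        return self.proj(z.transpose(1, 2))      # (batch, frames, dim)

def ssl_loss(model, wav):
    """Placeholder for the W2V2 contrastive self-supervised objective."""
    return model(wav).pow(2).mean()              # dummy scalar loss

def continual_pretrain(prev_model, new_domain_batches, lam=1.0, steps=100):
    """Retrain on a new domain while transferring knowledge from the
    previous model, without access to the historical dataset."""
    model = copy.deepcopy(prev_model)            # warm start from previous W2V2
    teacher = copy.deepcopy(prev_model).eval()   # frozen knowledge source
    for p in teacher.parameters():
        p.requires_grad_(False)
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)

    for _, wav in zip(range(steps), new_domain_batches):
        loss = ssl_loss(model, wav)              # learn the new domain
        with torch.no_grad():
            ref = teacher(wav)                   # previous model's view
        loss = loss + lam * F.mse_loss(model(wav), ref)  # transfer term
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model

def random_batches():
    while True:
        yield torch.randn(4, 16000)              # fake 1-second audio batches

# Sequential domains, as in the paper: Librispeech -> WSJ -> TED-LIUM V3.
model = TinyEncoder()                            # stands in for W2V2 on Librispeech
for domain in ["WSJ", "TED-LIUM V3"]:
    print(f"continual pre-training on {domain}")
    model = continual_pretrain(model, random_batches())

The property this sketch illustrates is the one the abstract emphasizes: each stage needs only the new domain's audio plus the previous checkpoint, avoiding re-pre-training from scratch on the merged dataset.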
Appears in Collections
College of Engineering (Seoul) > Department of Electronic Engineering (Seoul) > 1. Journal Articles

Related Researcher
Chang, Joon-Hyuk
College of Engineering (School of Electronic Engineering)
