

CTRL: Continual Representation Learning to Transfer Information of Pre-trained for WAV2VEC 2.0

Full metadata record
DC Field: Value
dc.contributor.author: Lee, Jae-Hong
dc.contributor.author: Lee, Chae-Won
dc.contributor.author: Choi, Jin-Seong
dc.contributor.author: Chang, Joon-Hyuk
dc.contributor.author: Seong, Woo Kyeong
dc.contributor.author: Lee, Jeonghan
dc.date.accessioned: 2022-12-20T06:25:10Z
dc.date.available: 2022-12-20T06:25:10Z
dc.date.created: 2022-11-02
dc.date.issued: 2022-09
dc.identifier.issn: 2308-457X
dc.identifier.uri: https://scholarworks.bwise.kr/hanyang/handle/2021.sw.hanyang/173090
dc.description.abstract: Representation models such as WAV2VEC 2.0 (W2V2) show remarkable speech recognition performance by pre-training only on unlabeled datasets and fine-tuning on a small amount of labeled data. It is crucial to train on datasets from multiple domains to obtain a richer representation with such a model. The conventional approach to handling multiple domains is to train a model from scratch on a merged dataset. However, representation learning requires excessive computation for pre-training, which becomes a severe problem as the size of the dataset increases. In this study, we present continual representation learning (CTRL), a framework that leverages continual learning methods to continually retrain a pre-trained representation model while transferring information from the previous model without the historical dataset. The framework conducts continual pre-training of pre-trained W2V2 using a continual learning method redesigned for self-supervised learning. To evaluate our framework, we continually pre-train W2V2 with CTRL in the following order: Librispeech, Wall Street Journal, and TED-LIUM V3. The results demonstrate that the proposed approach improves speech recognition performance on all three datasets compared with the baseline W2V2 pre-trained on Librispeech.
dc.language: English
dc.language.iso: en
dc.publisher: International Speech Communication Association
dc.title: CTRL: Continual Representation Learning to Transfer Information of Pre-trained for WAV2VEC 2.0
dc.type: Article
dc.contributor.affiliatedAuthor: Chang, Joon-Hyuk
dc.identifier.doi: 10.21437/Interspeech.2022-10063
dc.identifier.scopusid: 2-s2.0-85140099591
dc.identifier.wosid: 000900724503113
dc.identifier.bibliographicCitation: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH, v.2022-September, pp.3398-3402
dc.relation.isPartOf: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
dc.citation.title: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
dc.citation.volume: 2022-September
dc.citation.startPage: 3398
dc.citation.endPage: 3402
dc.type.rims: ART
dc.type.docType: Proceedings Paper
dc.description.journalClass: 1
dc.description.isOpenAccess: N
dc.description.journalRegisteredClass: scopus
dc.relation.journalResearchArea: Acoustics
dc.relation.journalResearchArea: Audiology & Speech-Language Pathology
dc.relation.journalResearchArea: Computer Science
dc.relation.journalResearchArea: Engineering
dc.relation.journalWebOfScienceCategory: Acoustics
dc.relation.journalWebOfScienceCategory: Audiology & Speech-Language Pathology
dc.relation.journalWebOfScienceCategory: Computer Science, Artificial Intelligence
dc.relation.journalWebOfScienceCategory: Engineering, Electrical & Electronic
dc.subject.keywordPlus: Learning systems
dc.subject.keywordPlus: Speech communication
dc.subject.keywordPlus: Supervised learning
dc.subject.keywordPlus: Speech recognition
dc.subject.keywordPlus: Continual learning
dc.subject.keywordPlus: Domain adaptation
dc.subject.keywordPlus: Learning methods
dc.subject.keywordPlus: Multiple domains
dc.subject.keywordPlus: Pre-training
dc.subject.keywordPlus: Representation learning
dc.subject.keywordPlus: Representation model
dc.subject.keywordPlus: Semi-supervised learning
dc.subject.keywordPlus: Speech recognition performance
dc.subject.keywordPlus: Transfer information
dc.subject.keywordAuthor: continual learning
dc.subject.keywordAuthor: domain adaptation
dc.subject.keywordAuthor: representation learning
dc.subject.keywordAuthor: semi-supervised learning
dc.subject.keywordAuthor: speech recognition
dc.identifier.url: https://www.isca-speech.org/archive/interspeech_2022/lee22i_interspeech.html
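The abstract describes continual pre-training that retains information from the previously trained model without replaying its dataset. As a toy sketch of that general idea (not the paper's actual method; the record does not specify CTRL's transfer loss for W2V2), the following numpy example continues training a linear model on a shifted "domain B" with a quadratic pull toward the weights learned on "domain A":

```python
import numpy as np

# Toy sketch of the idea in the abstract: continue training on a new domain
# while pulling the weights toward the previously pre-trained model, so
# information transfers without access to the historical dataset.
# The quadratic penalty is a hypothetical stand-in; this record does not
# specify CTRL's actual transfer loss for W2V2.

rng = np.random.default_rng(0)

def fit(X, y, w_init, w_prev=None, lam=0.0, lr=0.1, steps=500):
    """Gradient descent on MSE, plus lam * ||w - w_prev||^2 when a
    previous model is supplied (the knowledge-transfer term)."""
    w = w_init.copy()
    n = len(X)
    for _ in range(steps):
        grad = 2.0 * X.T @ (X @ w - y) / n      # MSE gradient
        if w_prev is not None:
            grad += 2.0 * lam * (w - w_prev)    # pull toward the old model
        w -= lr * grad
    return w

def mse(X, y, w):
    return float(np.mean((X @ w - y) ** 2))

d = 5
w_true_A = rng.normal(size=d)                # "domain A" ground truth
w_true_B = w_true_A + rng.normal(size=d)     # shifted "domain B" ground truth
X_A, X_B = rng.normal(size=(200, d)), rng.normal(size=(200, d))
y_A, y_B = X_A @ w_true_A, X_B @ w_true_B

w_first   = fit(X_A, y_A, np.zeros(d))                       # pre-train on A
w_cont    = fit(X_B, y_B, w_first, w_prev=w_first, lam=1.0)  # continue on B
w_scratch = fit(X_B, y_B, np.zeros(d))                       # B only, from scratch
```

The continued model keeps a lower error on domain A than the model trained on B alone, while still improving on B; the penalty weight `lam` trades adaptation to the new domain against retention of the old one.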
Appears in Collections: Seoul College of Engineering > Seoul Department of Electronic Engineering > 1. Journal Articles


