Detailed Information


Self-Distillation into Self-Attention Heads for Improving Transformer-based End-to-End Neural Speaker Diarization

Full metadata record
DC Field | Value | Language
dc.contributor.author | Jeoung, Ye-Rin | -
dc.contributor.author | Choi, Jeong-Hwan | -
dc.contributor.author | Seong, Ju-Seok | -
dc.contributor.author | Kyung, JeHyun | -
dc.contributor.author | Chang, Joon-Hyuk | -
dc.date.accessioned | 2023-10-10T02:35:45Z | -
dc.date.available | 2023-10-10T02:35:45Z | -
dc.date.created | 2023-10-04 | -
dc.date.issued | 2023-08 | -
dc.identifier.issn | 2308-457X | -
dc.identifier.uri | https://scholarworks.bwise.kr/hanyang/handle/2021.sw.hanyang/191793 | -
dc.description.abstract | In this study, we explore self-distillation (SD) techniques to improve the performance of the transformer-encoder-based self-attentive (SA) end-to-end neural speaker diarization (EEND). We first apply the SD approaches, introduced in the automatic speech recognition field, to the SA-EEND model to confirm their potential for speaker diarization. Then, we propose two novel SD methods for the SA-EEND, which distill the prediction output of the model or the SA heads of the upper blocks into the SA heads of the lower blocks. Consequently, we expect the high-level speaker-discriminative knowledge learned by the upper blocks to be shared across the lower blocks, thereby enabling the SA heads of the lower blocks to effectively capture the discriminative patterns of overlapped speech of multiple speakers. Experimental results on the simulated and CALLHOME datasets show that the SD generally improves the baseline performance, and the proposed methods outperform the conventional SD approaches. | -
dc.language | English | -
dc.language.iso | en | -
dc.publisher | International Speech Communication Association | -
dc.title | Self-Distillation into Self-Attention Heads for Improving Transformer-based End-to-End Neural Speaker Diarization | -
dc.type | Article | -
dc.contributor.affiliatedAuthor | Chang, Joon-Hyuk | -
dc.identifier.doi | 10.21437/Interspeech.2023-1404 | -
dc.identifier.scopusid | 2-s2.0-85171550598 | -
dc.identifier.bibliographicCitation | Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH, v.2023-August, pp.3197 - 3201 | -
dc.relation.isPartOf | Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH | -
dc.citation.title | Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH | -
dc.citation.volume | 2023-August | -
dc.citation.startPage | 3197 | -
dc.citation.endPage | 3201 | -
dc.type.rims | ART | -
dc.type.docType | Conference paper | -
dc.description.journalClass | 1 | -
dc.description.isOpenAccess | N | -
dc.description.journalRegisteredClass | scopus | -
dc.subject.keywordAuthor | end-to-end neural diarization | -
dc.subject.keywordAuthor | fine-tuning | -
dc.subject.keywordAuthor | self-attention mechanism | -
dc.subject.keywordAuthor | self-distillation | -
dc.subject.keywordAuthor | speaker diarization | -
dc.identifier.url | https://www.isca-speech.org/archive/interspeech_2023/jeoung23_interspeech.html | -
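
The abstract above describes distilling knowledge from the self-attention (SA) heads of the upper transformer encoder blocks into the SA heads of the lower blocks. Below is a minimal, hypothetical sketch of such a head-level distillation loss in PyTorch; the function name, the tensor shapes, and the choice of KL divergence as the distance measure are assumptions made for illustration and do not reproduce the authors' implementation.

    # Hypothetical sketch of head-level self-distillation (not the paper's code).
    # Assumed shapes: attention maps of (batch, n_heads, frames, frames) taken
    # from one lower and one upper encoder block of an SA-EEND-style model.
    import torch
    import torch.nn.functional as F


    def head_self_distillation_loss(lower_attn: torch.Tensor,
                                    upper_attn: torch.Tensor) -> torch.Tensor:
        """KL divergence from upper-block (teacher) to lower-block (student) attention maps."""
        teacher = upper_attn.detach()               # stop-gradient: teacher blocks are not updated
        log_student = torch.log(lower_attn + 1e-8)  # student attention as log-probabilities
        # KL(teacher || student), summed over keys and averaged over the batch dimension
        return F.kl_div(log_student, teacher, reduction="batchmean")


    if __name__ == "__main__":
        batch, heads, frames = 2, 4, 50
        # Random attention maps, normalized over the key dimension as a softmax would produce
        lower = torch.softmax(torch.randn(batch, heads, frames, frames), dim=-1)
        upper = torch.softmax(torch.randn(batch, heads, frames, frames), dim=-1)
        print(head_self_distillation_loss(lower, upper))

Detaching the upper-block maps keeps gradients from flowing into the teacher blocks, so only the lower blocks are pulled toward the higher-level, speaker-discriminative attention patterns; in practice such a term would be added to the diarization objective with a weighting factor.
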
Appears in Collections
Seoul College of Engineering > Seoul School of Electronic Engineering > 1. Journal Articles

Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Chang, Joon-Hyuk
COLLEGE OF ENGINEERING (SCHOOL OF ELECTRONIC ENGINEERING)