Universal Domain Adaptation for Robust Handling of Distributional Shifts in NLP

Full metadata record
DC Field: Value
dc.contributor.author: Kim, Hyuhng Joon
dc.contributor.author: Cho, Hyunsoo
dc.contributor.author: Lee, Sang-Woo
dc.contributor.author: Kim, Junyeob
dc.contributor.author: Lee, Sang-Goo
dc.contributor.author: Park, Choonghyun
dc.contributor.author: Yoo, Kang Min
dc.contributor.author: Kim, Taeuk
dc.date.accessioned: 2024-06-24T14:00:33Z
dc.date.available: 2024-06-24T14:00:33Z
dc.date.issued: 2023-12
dc.identifier.issn: 0000-0000
dc.identifier.uri: https://scholarworks.bwise.kr/hanyang/handle/2021.sw.hanyang/194767
dc.description.abstract: When deploying machine learning systems in the wild, it is highly desirable for them to transfer prior knowledge to unfamiliar domains while also raising alarms on anomalous inputs. To address both requirements, Universal Domain Adaptation (UniDA) has emerged as a novel research area in computer vision, focusing on achieving both adaptation ability and robustness (i.e., the ability to detect out-of-distribution samples). While UniDA has driven significant progress in computer vision, its application to language input remains largely unexplored despite its feasibility. In this paper, we propose a comprehensive benchmark for natural language that offers thorough viewpoints on a model's generalizability and robustness. Our benchmark encompasses multiple datasets with varying difficulty levels and characteristics, including temporal shifts and diverse domains. On top of our testbed, we validate existing UniDA methods from computer vision and state-of-the-art domain adaptation techniques from the NLP literature, yielding valuable findings: UniDA methods originally designed for image input can be effectively transferred to the natural language domain, and adaptation difficulty plays a key role in determining a model's performance.
dc.format.extent: 18
dc.language: English
dc.language.iso: ENG
dc.publisher: Association for Computational Linguistics (ACL)
dc.title: Universal Domain Adaptation for Robust Handling of Distributional Shifts in NLP
dc.type: Article
dc.identifier.scopusid: 2-s2.0-85183306182
dc.identifier.bibliographicCitation: Findings of the Association for Computational Linguistics: EMNLP 2023, pp. 5888-5905
dc.citation.title: Findings of the Association for Computational Linguistics: EMNLP 2023
dc.citation.startPage: 5888
dc.citation.endPage: 5905
dc.type.docType: Conference paper
dc.description.isOpenAccess: N
dc.description.journalRegisteredClass: scopus
dc.subject.keywordPlus: Computational linguistics
dc.subject.keywordPlus: Knowledge management
dc.subject.keywordPlus: Learning algorithms
dc.subject.keywordPlus: Learning systems
dc.subject.keywordPlus: Natural language processing systems
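
The abstract describes the core UniDA behavior of classifying in-distribution inputs while raising an alarm on out-of-distribution ones. As a minimal illustration of that idea (a generic confidence-thresholding baseline, not the method evaluated in the paper; the function names and threshold value are assumptions for this sketch):

```python
import math

def softmax(logits):
    # Numerically stable softmax for a single list of logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify_or_reject(logits, threshold=0.7):
    """Return the argmax class index, or -1 ("unknown") when the
    maximum softmax probability falls below the threshold,
    flagging the input as out-of-distribution."""
    probs = softmax(logits)
    conf = max(probs)
    return probs.index(conf) if conf >= threshold else -1

# A confident in-distribution input vs. a flat, anomalous one.
print(classify_or_reject([4.0, 0.5, 0.2]))   # peaked logits: class 0
print(classify_or_reject([1.0, 1.1, 0.9]))   # flat logits: rejected as -1
```

Real UniDA methods replace this fixed threshold with learned criteria, but the accept-or-reject decision structure is the same.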
Appears in Collections
Seoul College of Engineering > Seoul School of Computer Software > 1. Journal Articles
