VerifyMatch: A Semi-Supervised Learning Paradigm for Natural Language Inference with Confidence-Aware MixUp
DC Field | Value | Language |
---|---|---|
dc.contributor.author | 박서연 | - |
dc.date.accessioned | 2025-06-12T06:33:07Z | - |
dc.date.available | 2025-06-12T06:33:07Z | - |
dc.date.issued | 2024-11 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/erica/handle/2021.sw.erica/125531 | - |
dc.description.abstract | While natural language inference (NLI) has emerged as a prominent task for evaluating a model's capability to perform natural language understanding, creating large benchmarks for training deep learning models imposes a significant challenge since it requires extensive human annotation. To overcome this, we propose to construct pseudo-generated samples (premise-hypothesis pairs) using class-specific fine-tuned large language models (LLMs), thereby reducing the human effort and cost of annotating large amounts of data. However, despite the impressive performance of LLMs, it is necessary to verify that the pseudo-generated labels are actually correct. Towards this goal, in this paper, we propose VerifyMatch, a semi-supervised learning (SSL) approach in which the LLM pseudo-labels guide the training of the SSL model and, at the same time, the SSL model acts as a verifier of the LLM-generated data. In our approach, we retain all pseudo-labeled samples, but to ensure unlabeled-data quality we further propose to use MixUp whenever the verifier does not agree with the LLM-generated label, or when they both agree on the label but the verifier's confidence is low (below an adaptive confidence threshold). We achieve competitive accuracy compared to strong baselines for NLI datasets in low-resource settings. © 2024 Association for Computational Linguistics. | - |
dc.format.extent | 17 | - |
dc.language | English | - |
dc.language.iso | ENG | - |
dc.publisher | Association for Computational Linguistics | - |
dc.title | VerifyMatch: A Semi-Supervised Learning Paradigm for Natural Language Inference with Confidence-Aware MixUp | - |
dc.type | Article | - |
dc.identifier.doi | 10.18653/v1/2024.emnlp-main.1076 | - |
dc.identifier.scopusid | 2-s2.0-85217823726 | - |
dc.identifier.bibliographicCitation | Empirical Methods in Natural Language Processing, pp 19319 - 19335 | - |
dc.citation.title | Empirical Methods in Natural Language Processing | - |
dc.citation.startPage | 19319 | - |
dc.citation.endPage | 19335 | - |
dc.type.docType | Proceeding | - |
dc.description.isOpenAccess | N | - |
dc.description.journalRegisteredClass | scopus | - |
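
The abstract above describes a gating rule for pseudo-labeled pairs: a sample is used as-is only when the verifier agrees with the LLM-generated label and is sufficiently confident; otherwise MixUp is applied. The snippet below is a minimal illustrative sketch of that rule in Python/PyTorch, not the authors' implementation: the function names, the fixed Beta(α, α) MixUp coefficient, and the single fixed `threshold` argument (the paper uses an adaptive threshold) are assumptions made for illustration.

```python
import torch

def mixup(x_a, x_b, y_a, y_b, alpha=0.4):
    """Standard MixUp: interpolate two embeddings and their (soft) label vectors."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    x_mix = lam * x_a + (1.0 - lam) * x_b
    y_mix = lam * y_a + (1.0 - lam) * y_b
    return x_mix, y_mix

def gate_pseudo_sample(verifier_probs, llm_label, threshold):
    """Decide how a pseudo-labeled pair is used, following the rule in the abstract:
    apply MixUp if the verifier disagrees with the LLM label, or if it agrees but
    its confidence is below the threshold; otherwise train on the sample directly.
    (`threshold` here is a fixed stand-in for the paper's adaptive threshold.)"""
    verifier_label = int(verifier_probs.argmax())
    confidence = float(verifier_probs.max())
    if verifier_label == llm_label and confidence >= threshold:
        return "train_directly"
    return "apply_mixup"

# Example: verifier softmax over 3 NLI classes agrees with the LLM label (class 1)
# but with low confidence, so the sample is routed to MixUp.
probs = torch.tensor([0.15, 0.80, 0.05])
print(gate_pseudo_sample(probs, llm_label=1, threshold=0.9))  # -> "apply_mixup"
```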