Detailed Information


VerifyMatch: A Semi-Supervised Learning Paradigm for Natural Language Inference with Confidence-Aware MixUp

Full metadata record
dc.contributor.author: Park, Seo Yeon
dc.date.accessioned: 2025-06-12T06:33:07Z
dc.date.available: 2025-06-12T06:33:07Z
dc.date.issued: 2024-11
dc.identifier.uri: https://scholarworks.bwise.kr/erica/handle/2021.sw.erica/125531
dc.description.abstract: While natural language inference (NLI) has emerged as a prominent task for evaluating a model's capability to perform natural language understanding, creating large benchmarks for training deep learning models imposes a significant challenge since it requires extensive human annotations. To overcome this, we propose to construct pseudo-generated samples (premise-hypothesis pairs) using class-specific fine-tuned large language models (LLMs), thereby reducing the human effort and the costs of annotating large amounts of data. However, despite the impressive performance of LLMs, it is necessary to verify that the pseudo-generated labels are actually correct. Towards this goal, in this paper, we propose VerifyMatch, a semi-supervised learning (SSL) approach in which the LLM pseudo-labels guide the training of the SSL model and, at the same time, the SSL model acts as a verifier of the LLM-generated data. In our approach, we retain all pseudo-labeled samples, but to ensure unlabeled data quality, we further propose to use MixUp whenever the verifier does not agree with the LLM-generated label or when they both agree on the label but the verifier has low confidence (lower than an adaptive confidence threshold). We achieve competitive accuracy compared to strong baselines for NLI datasets in low-resource settings. © 2024 Association for Computational Linguistics.
dc.format.extent: 17
dc.language: English
dc.language.iso: ENG
dc.publisher: Association for Computational Linguistics
dc.title: VerifyMatch: A Semi-Supervised Learning Paradigm for Natural Language Inference with Confidence-Aware MixUp
dc.type: Article
dc.identifier.doi: 10.18653/v1/2024.emnlp-main.1076
dc.identifier.scopusid: 2-s2.0-85217823726
dc.identifier.bibliographicCitation: Empirical Methods in Natural Language Processing, pp. 19319-19335
dc.citation.title: Empirical Methods in Natural Language Processing
dc.citation.startPage: 19319
dc.citation.endPage: 19335
dc.type.docType: Proceeding
dc.description.isOpenAccess: N
dc.description.journalRegisteredClass: scopus
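
The abstract describes a confidence-gated use of MixUp: all pseudo-labeled pairs are retained, and MixUp is applied whenever the SSL verifier disagrees with the LLM-assigned label, or agrees but with confidence below an adaptive threshold. The following is a minimal sketch of that gating step only, not the authors' released code; the function name, tensor shapes, the Beta-distributed standard MixUp interpolation, and the externally supplied threshold are all assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def confidence_gated_mixup(embeddings, llm_labels, verifier_logits,
                           threshold, alpha=0.4):
    """Sketch of the confidence-aware MixUp gate described in the abstract.

    embeddings:      (B, H) encoder representations of pseudo-labeled pairs
    llm_labels:      (B,)   long tensor of class indices from the fine-tuned LLM
    verifier_logits: (B, C) logits from the SSL model acting as verifier
    threshold:       scalar adaptive confidence threshold (assumed given)
    """
    probs = F.softmax(verifier_logits, dim=-1)
    verifier_conf, verifier_pred = probs.max(dim=-1)

    # Gate: MixUp when the verifier disagrees with the LLM label,
    # or agrees but with confidence below the adaptive threshold.
    needs_mixup = (verifier_pred != llm_labels) | (verifier_conf < threshold)

    # Standard MixUp: interpolate with randomly permuted partners.
    lam = torch.distributions.Beta(alpha, alpha).sample()
    perm = torch.randperm(embeddings.size(0))
    mixed = lam * embeddings + (1 - lam) * embeddings[perm]

    one_hot = F.one_hot(llm_labels, probs.size(-1)).float()
    mixed_targets = lam * one_hot + (1 - lam) * one_hot[perm]

    # Keep original samples where the verifier already agrees confidently.
    out_x = torch.where(needs_mixup.unsqueeze(-1), mixed, embeddings)
    out_y = torch.where(needs_mixup.unsqueeze(-1), mixed_targets, one_hot)
    return out_x, out_y
```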
Files in This Item
There are no files associated with this item.
Appears in Collections
COLLEGE OF COMPUTING > ERICA School of Computer Science > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Park, Seo Yeon
ERICA College of Computing (ERICA School of Computer Science)
