Detailed Information

Ground-Truth Labels Matter: A Deeper Look into Input-Label Demonstrations

Full metadata record
dc.contributor.author: Yoo, Kang Min
dc.contributor.author: Kim, Junyeob
dc.contributor.author: Kim, Hyuhng Joon
dc.contributor.author: Cho, Hyunsoo
dc.contributor.author: Jo, Hwiyeol
dc.contributor.author: Lee, Sang-Woo
dc.contributor.author: Lee, Sang-goo
dc.contributor.author: Kim, Tae Uk
dc.date.accessioned: 2023-08-07T07:44:12Z
dc.date.available: 2023-08-07T07:44:12Z
dc.date.created: 2023-07-21
dc.date.issued: 2022-10
dc.identifier.uri: https://scholarworks.bwise.kr/hanyang/handle/2021.sw.hanyang/188896
dc.description.abstract: Despite the recent explosion of interest in in-context learning (ICL), the underlying mechanism and the precise impact of demonstration quality remain elusive. Intuitively, ground-truth labels should matter as much in ICL as they do in supervised learning, but recent work reported that input-label correspondence is significantly less important than previously thought. Intrigued by this counter-intuitive observation, we re-examine the importance of ground-truth labels in in-context learning. By introducing two novel metrics, Label-Correctness Sensitivity and Ground-truth Label Effect Ratio (GLER), we conduct a quantitative analysis of the impact of ground-truth labels in demonstrations. Through extensive analyses, we find that correct input-label mappings can have varying impacts on downstream ICL performance depending on the experimental configuration. Through additional studies, we identify key components, such as the verbosity of the prompt template and the language model size, as controlling factors for achieving more noise-resilient ICL.
dc.language: English
dc.language.iso: en
dc.publisher: Association for Computational Linguistics
dc.title: Ground-Truth Labels Matter: A Deeper Look into Input-Label Demonstrations
dc.type: Article
dc.contributor.affiliatedAuthor: Kim, Tae Uk
dc.identifier.doi: 10.48550/arXiv.2205.12685
dc.identifier.bibliographicCitation: Empirical Methods in Natural Language Processing, pp. 1-16
dc.relation.isPartOf: Empirical Methods in Natural Language Processing
dc.citation.title: Empirical Methods in Natural Language Processing
dc.citation.startPage: 1
dc.citation.endPage: 16
dc.type.rims: ART
dc.type.docType: Proceeding
dc.description.journalClass: 1
dc.description.isOpenAccess: N
dc.description.journalRegisteredClass: other
dc.identifier.url: https://arxiv.org/abs/2205.12685
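
The abstract above names two metrics, Label-Correctness Sensitivity and Ground-truth Label Effect Ratio (GLER), without giving their formulas. As a minimal sketch only, the Python snippet below shows one way such metrics could be computed from accuracy measurements taken at several label-correctness ratios. The specific formulas (sensitivity as the least-squares slope of accuracy against the fraction of correct demonstration labels; GLER as the share of the demonstration gain over zero-shot attributable to correct labels) and all numbers are illustrative assumptions, not the paper's exact definitions.

```python
import numpy as np

def label_correctness_sensitivity(correct_ratios, accuracies):
    # Assumed reading: sensitivity is the slope of a least-squares fit
    # of task accuracy against the fraction of correct labels in the
    # in-context demonstrations.
    slope, _intercept = np.polyfit(correct_ratios, accuracies, deg=1)
    return slope

def gler(acc_ground_truth, acc_random_labels, acc_zero_shot):
    # Assumed formula: the fraction of the total demonstration gain
    # (few-shot over zero-shot) that disappears when ground-truth
    # labels are replaced with random ones.
    return (acc_ground_truth - acc_random_labels) / (acc_ground_truth - acc_zero_shot)

# Hypothetical measurements: accuracy with 0%, 25%, ..., 100% correct labels.
ratios = np.array([0.00, 0.25, 0.50, 0.75, 1.00])
accs = np.array([0.58, 0.63, 0.67, 0.72, 0.78])

print(label_correctness_sensitivity(ratios, accs))  # ~0.196
print(gler(acc_ground_truth=0.78, acc_random_labels=0.58,
           acc_zero_shot=0.50))                     # ~0.714
```

Under these assumptions, a higher sensitivity means performance degrades more steeply as demonstration labels are corrupted, and a GLER near 1 means most of the few-shot gain comes from the labels being correct rather than from the demonstration format alone.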
Appears in Collections:
Seoul College of Engineering > Seoul School of Computer Software > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Kim, Taeuk
COLLEGE OF ENGINEERING (SCHOOL OF COMPUTER SCIENCE)
