Detailed Information


Selective Token Generation for Few-shot Natural Language Generation

Authors
Jo, Daejin; Kwon, Taehwan; Kim, Eun-Sol; Kim, Sungwoong
Issue Date
Oct-2022
Publisher
Association for Computational Linguistics (ACL)
Citation
Proceedings - International Conference on Computational Linguistics, COLING, v.29, no.1, pp. 5837-5856
Indexed
SCOPUS
Journal Title
Proceedings - International Conference on Computational Linguistics, COLING
Volume
29
Number
1
Start Page
5837
End Page
5856
URI
https://scholarworks.bwise.kr/hanyang/handle/2021.sw.hanyang/189021
DOI
10.48550/arXiv.2209.08206
ISSN
2951-2093
Abstract
Natural language modeling with limited training data is a challenging problem, and many algorithms address it with large-scale pretrained language models (PLMs) because of their strong generalization ability. Among these approaches, additive learning, which places a task-specific adapter on top of a fixed large-scale PLM, is widely used in the few-shot setting. However, the added adapter can still disregard the knowledge of the PLM, especially in few-shot natural language generation (NLG), since an entire sequence is usually generated by the newly trained adapter alone. In this work, we therefore develop a novel additive learning algorithm based on reinforcement learning (RL) that selectively outputs language tokens from either the task-general PLM or the task-specific adapter during both training and inference. This token-level selection over the two generators lets the adapter handle only the task-relevant parts of sequence generation, making it more robust to overfitting and more stable in RL training. In addition, to obtain an adapter that is complementary to the PLM for each few-shot task, we employ a separate selecting module that is trained simultaneously using RL. Experimental results on various few-shot NLG tasks, including question answering, data-to-text generation, and text summarization, demonstrate that the proposed selective token generation significantly outperforms previous PLM-based additive learning algorithms.
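The decoding scheme described in the abstract, a selector choosing token by token between a frozen PLM and a task-specific adapter, can be sketched in miniature as follows. This is an illustrative toy, not the authors' implementation: random logits stand in for both models, the selector is a fixed coin flip rather than an RL-trained module, and all function names are hypothetical.

```python
# Toy sketch of selective token generation: at each decoding step a
# binary selector decides whether the next token is emitted by the
# frozen task-general PLM or by the task-specific adapter.
import numpy as np

rng = np.random.default_rng(0)
VOCAB = 10  # toy vocabulary size

def plm_logits(prefix):
    # Stand-in for the frozen pretrained LM's next-token logits.
    local = np.random.default_rng(hash(tuple(prefix)) % (2**32))
    return local.normal(size=VOCAB)

def adapter_logits(prefix):
    # Stand-in for the task-specific adapter's next-token logits.
    local = np.random.default_rng((hash(tuple(prefix)) + 1) % (2**32))
    return local.normal(size=VOCAB)

def selector(prefix):
    # Stand-in for the RL-trained selecting module: probability that
    # the adapter (rather than the PLM) should emit the next token.
    return 0.5

def generate(max_len=5):
    tokens, sources = [], []
    for _ in range(max_len):
        use_adapter = rng.random() < selector(tokens)
        logits = adapter_logits(tokens) if use_adapter else plm_logits(tokens)
        tokens.append(int(np.argmax(logits)))  # greedy decoding for simplicity
        sources.append("adapter" if use_adapter else "plm")
    return tokens, sources

tokens, sources = generate()
print(tokens)
print(sources)
```

Because selection happens per token, the adapter only has to model the task-relevant spans, while the PLM continues to supply the rest of the sequence, which is the intuition the paper formalizes with RL training of the selector.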
Files in This Item
Appears in
Collections
Seoul College of Engineering > Seoul School of Computer Software > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Kim, Eun Sol
COLLEGE OF ENGINEERING (SCHOOL OF COMPUTER SCIENCE)
