Detailed Information

Fine-tuning BERT Models for Keyphrase Extraction in Scientific Articles

Authors
임연수 (Yeonsoo Lim); 서덕진 (Deokjin Seo); 정유철 (Yuchul Jung)
Issue Date
Jan-2020
Publisher
한국정보기술학회 (Korean Institute of Information Technology)
Keywords
keyphrase extraction; BERT; fine-tuning; embedding; scientific articles
Citation
한국정보기술학회 영문논문지 (English Journal of the Korean Institute of Information Technology), v.10, no.1, pp. 45-56
Journal Title
한국정보기술학회 영문논문지 (English Journal of the Korean Institute of Information Technology)
Volume
10
Number
1
Start Page
45
End Page
56
URI
https://scholarworks.bwise.kr/kumoh/handle/2020.sw.kumoh/18134
ISSN
2234-1072
Abstract
Despite extensive research, improving the performance of keyphrase (KP) extraction remains a challenging problem in modern informatics. Recently, deep learning-based supervised approaches have achieved state-of-the-art accuracy on this problem, and several previously proposed methods utilize Bidirectional Encoder Representations from Transformers (BERT)-based language models. However, few studies have investigated how BERT-based fine-tuning techniques can be applied effectively to KP extraction. In this paper, we consider this problem in the context of scientific articles by investigating the fine-tuning characteristics of two distinct BERT models: BERT (i.e., the base BERT model released by Google) and SciBERT (i.e., a BERT model pretrained on scientific text). Three datasets from the computer science domain (WWW, KDD, and Inspec) are used to compare the KP extraction results obtained by fine-tuning BERT and SciBERT.
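A common way to operationalize BERT-based KP extraction of the kind described above is to frame it as BIO token classification and fine-tune a pretrained encoder with a token-level classification head. The following is a minimal sketch of that setup using Hugging Face Transformers; the model identifiers, BIO label scheme, learning rate, and training_step helper are illustrative assumptions, not the authors' published code.

# Minimal sketch: fine-tuning a BERT-family encoder for keyphrase extraction,
# framed as BIO token classification. Model names, labels, and hyperparameters
# are illustrative assumptions, not the setup reported in the paper.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

labels = ["O", "B-KP", "I-KP"]  # assumed BIO tags marking keyphrase spans

# Swap in "allenai/scibert_scivocab_uncased" to fine-tune SciBERT instead.
model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name, num_labels=len(labels))
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)  # assumed learning rate

def training_step(words, tag_ids):
    # One fine-tuning step on a single pre-tagged sentence
    # (words: list of word tokens, tag_ids: one label index per word).
    enc = tokenizer(words, is_split_into_words=True, truncation=True, return_tensors="pt")
    # Align word-level tags to subword pieces; -100 masks special tokens
    # and word continuations out of the loss.
    aligned, prev = [], None
    for wid in enc.word_ids(0):
        aligned.append(-100 if wid is None or wid == prev else tag_ids[wid])
        prev = wid
    loss = model(**enc, labels=torch.tensor([aligned])).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()

# Example: "automatic keyphrase extraction" tagged as one keyphrase span.
# training_step(["We", "study", "automatic", "keyphrase", "extraction", "."],
#               [0, 0, 1, 2, 2, 0])

Fine-tuning in this sense updates all encoder weights jointly with the classification head, which distinguishes it from using BERT or SciBERT purely as a frozen embedding extractor.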
Files in This Item:
There are no files associated with this item.
Appears in Collections:
ETC > 1. Journal Articles

Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

JUNG, YU CHUL
컴퓨터공학과 (Department of Computer Engineering)
