Detailed Information


SANTM: Efficient Self-attention-driven Network for Text Matching

Authors
Tiwari, Prayag; Jaiswal, Amit Kumar; Garg, Sahil; You, Ilsun
Issue Date
Aug-2022
Publisher
Association for Computing Machinery, Inc.
Keywords
Text matching; deep learning; attention mechanism
Citation
ACM Transactions on Internet Technology, v.22, no.3
Journal Title
ACM Transactions on Internet Technology
Volume
22
Number
3
URI
https://scholarworks.bwise.kr/sch/handle/2021.sw.sch/21741
DOI
10.1145/3426971
ISSN
1533-5399
1557-6051
Abstract
Self-attention mechanisms have recently been embraced for a broad range of text-matching applications. A self-attention model takes only one sentence as input, with no extra information; that is, one can utilize the final hidden state or pooling. However, text-matching problems can be interpreted in either a symmetrical or an asymmetrical scope. For instance, paraphrase detection is a symmetrical task, while textual entailment classification and question-answer matching are considered asymmetrical tasks. In this article, we leverage the attractive properties of the self-attention mechanism and propose an attention-based network that incorporates three key components for inter-sequence attention: global pointwise features, preceding attentive features, and contextual features, while updating the rest of the components. We evaluate our model on two benchmark datasets covering the tasks of textual entailment and question-answer matching. The proposed efficient Self-attention-driven Network for Text Matching outperforms the state of the art on the Stanford Natural Language Inference and WikiQA datasets with far fewer parameters.
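
The inter-sequence attention described in the abstract can be made concrete with a small sketch. The PyTorch code below is an illustrative reconstruction under assumptions, not the published SANTM implementation: the module names (InterSequenceAttention, MatchingModel), dimensions, the Transformer encoder layer, and the max-pooling readout are all hypothetical choices; only the three feature types (contextual, attentive, pointwise) are taken from the abstract.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class InterSequenceAttention(nn.Module):
    """Illustrative inter-sequence attention fusing the three feature
    types named in the abstract. Hypothetical, not the authors' code."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim * 3, dim)

    def forward(self, a, b):
        # a: (batch, len_a, dim), b: (batch, len_b, dim)
        # Alignment scores between every token pair of the two sequences.
        scores = torch.bmm(a, b.transpose(1, 2))   # (batch, len_a, len_b)
        attn = F.softmax(scores, dim=-1)
        # "Preceding attentive features": tokens of `a` re-expressed as
        # attention-weighted summaries of `b`.
        attentive = torch.bmm(attn, b)             # (batch, len_a, dim)
        # "Global pointwise features": element-wise interaction terms.
        pointwise = a * attentive
        # "Contextual features": the original contextual encoding `a`.
        fused = torch.cat([a, attentive, pointwise], dim=-1)
        return torch.relu(self.proj(fused))

class MatchingModel(nn.Module):
    """Minimal text-matching sketch: encode both sequences with
    self-attention, attend across them, pool, and classify."""
    def __init__(self, vocab_size, dim=128, n_classes=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.encoder = nn.TransformerEncoderLayer(
            d_model=dim, nhead=4, batch_first=True)
        self.inter = InterSequenceAttention(dim)
        self.classifier = nn.Linear(dim * 2, n_classes)

    def forward(self, x1, x2):
        a = self.encoder(self.embed(x1))
        b = self.encoder(self.embed(x2))
        # Attend in both directions, then max-pool over tokens.
        a2b = self.inter(a, b).max(dim=1).values
        b2a = self.inter(b, a).max(dim=1).values
        return self.classifier(torch.cat([a2b, b2a], dim=-1))
```

Note the asymmetry point from the abstract: because the two directions (a attending to b, and b attending to a) are computed separately before pooling, a model of this shape can treat asymmetrical tasks such as textual entailment and question-answer matching differently from symmetrical ones such as paraphrase detection.
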
Files in This Item
There are no files associated with this item.
Appears in
Collections
ETC > 1. Journal Articles
