Detailed Information


VLANet: Video-Language Alignment Network for Weakly-Supervised Video Moment Retrieval

Authors
Ma, M.; Yoon, S.; Kim, J.; Lee, Y.; Kang, S.; Yoo, C.D.
Issue Date
Aug-2020
Publisher
Springer Science and Business Media Deutschland GmbH
Keywords
Multi-modal learning; Video moment retrieval; Weakly-supervised learning
Citation
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), v.12373, pp. 156-171
Pages
16
Journal Title
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume
12373
Start Page
156
End Page
171
URI
https://scholarworks.bwise.kr/cau/handle/2019.sw.cau/63280
DOI
10.1007/978-3-030-58604-1_10
ISSN
0302-9743 (print)
1611-3349 (electronic)
Abstract
Video Moment Retrieval (VMR) is the task of localizing the temporal moment in an untrimmed video specified by a natural language query. Several fully supervised methods have been proposed for VMR. Unfortunately, acquiring a large number of training videos with labeled temporal boundaries for each query is labor-intensive. This paper explores a method for performing VMR in a weakly-supervised manner (wVMR): training uses no temporal moment labels, only the text query that describes a segment of the video. Existing wVMR methods generate multi-scale proposals and apply a query-guided attention mechanism to highlight the most relevant proposal. To leverage the weak supervision, contrastive learning is used, which predicts higher scores for correct video-query pairs than for incorrect ones. It has been observed that a large number of candidate proposals, a coarse query representation, and a one-way attention mechanism lead to a blurry attention map that limits localization performance. To address this, the Video-Language Alignment Network (VLANet) is proposed, which learns sharper attention by pruning spurious candidate proposals and applying a multi-directional attention mechanism with a fine-grained query representation. The Surrogate Proposal Selection module selects a proposal based on its proximity to the query in the joint embedding space, substantially reducing the number of candidate proposals, which lowers the computation load and sharpens attention. Next, the Cascaded Cross-modal Attention module considers dense feature interactions and multi-directional attention flows to learn the multi-modal alignment. VLANet is trained end-to-end with a contrastive loss that encourages semantically similar videos and queries to cluster. Experiments show that the method achieves state-of-the-art performance on the Charades-STA and DiDeMo datasets.
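
The abstract's two key mechanisms can be illustrated with a minimal PyTorch sketch (not the authors' released code; the embedding dimension, top-k value, margin, and function names are assumptions for illustration): surrogate proposal selection keeps only the candidate proposals closest to the query in the joint embedding space, and a hinge-style contrastive loss pushes matching video-query pairs to score above mismatched ones.

    import torch
    import torch.nn.functional as F

    def select_surrogate_proposals(proposal_emb, query_emb, k=8):
        # proposal_emb: (N, D) candidate proposal embeddings
        # query_emb:   (D,)   query embedding in the same joint space
        # Keep the k proposals closest to the query (cosine similarity),
        # pruning the spurious rest before attention is computed.
        sim = F.cosine_similarity(proposal_emb, query_emb.unsqueeze(0), dim=1)  # (N,)
        top = sim.topk(k)
        return proposal_emb[top.indices], top.values

    def contrastive_ranking_loss(pos_score, neg_score, margin=0.1):
        # Hinge-style contrastive objective: the matching video-query pair
        # should score at least `margin` higher than a mismatched pair.
        return F.relu(margin - pos_score + neg_score).mean()

    # Toy usage with random stand-ins for learned embeddings.
    proposals = torch.randn(100, 256)        # 100 candidate proposals
    query = torch.randn(256)
    kept, kept_scores = select_surrogate_proposals(proposals, query, k=8)
    pos = kept_scores.max()                  # best surviving proposal's score
    neg = torch.randn(8).max()               # stand-in score for a mismatched query
    loss = contrastive_ranking_loss(pos, neg)

In VLANet itself the pair scores come from the Cascaded Cross-modal Attention module rather than raw cosine similarity; the sketch only conveys the selection-then-contrast structure.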
Files in This Item
There are no files associated with this item.
Appears in Collections
College of Software > Department of Artificial Intelligence > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Kim, Junyeong
College of Software (Department of Artificial Intelligence)
