PWS-DVC: Enhancing Weakly Supervised Dense Video Captioning with Pretraining Approach
DC Field | Value | Language |
---|---|---|
dc.contributor.author | CHOI, WANGYU | - |
dc.contributor.author | CHEN, JIASI | - |
dc.contributor.author | YOON, JONGWON | - |
dc.date.accessioned | 2023-11-24T02:30:54Z | - |
dc.date.available | 2023-11-24T02:30:54Z | - |
dc.date.issued | 2023-11 | - |
dc.identifier.issn | 2169-3536 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/erica/handle/2021.sw.erica/115648 | - |
dc.description.abstract | In recent years, efforts to jointly understand vision and language have grown markedly, driven by the availability of video-related datasets and advances in language models within natural language processing. Dense video captioning poses a significant challenge: understanding an untrimmed video and generating several event-based sentences that describe it. Numerous efforts have sought to improve dense video captioning through various approaches, such as bottom-up, top-down, parallel-pipeline, and pretraining methods. In contrast, weakly supervised dense video captioning is a highly promising strategy that generates dense video captions from captions alone, without any knowledge of ground-truth events, which distinguishes it from the widely employed approaches. Nevertheless, this approach has the drawback that inadequate captions can hurt both event localization and captioning. This paper introduces PWS-DVC, a novel approach for improving the performance of weakly supervised dense video captioning. PWS-DVC’s event captioning module is first trained on widely accessible video-clip datasets, exploiting the fact that no ground-truth event annotations are required during training, and is subsequently fine-tuned for dense video captioning. To demonstrate the efficacy of PWS-DVC, we conduct comparative experiments against state-of-the-art methods on the ActivityNet Captions dataset. The results indicate that PWS-DVC outperforms current approaches to weakly supervised dense video captioning. | - |
dc.format.extent | 13 | - |
dc.language | English | - |
dc.language.iso | ENG | - |
dc.publisher | Institute of Electrical and Electronics Engineers Inc. | - |
dc.title | PWS-DVC: Enhancing Weakly Supervised Dense Video Captioning with Pretraining Approach | - |
dc.type | Article | - |
dc.publisher.location | United States | - |
dc.identifier.doi | 10.1109/ACCESS.2023.3331756 | - |
dc.identifier.scopusid | 2-s2.0-85177040789 | - |
dc.identifier.wosid | 001118487300001 | - |
dc.identifier.bibliographicCitation | IEEE Access, v.11, pp. 128162-128174 | - |
dc.citation.title | IEEE Access | - |
dc.citation.volume | 11 | - |
dc.citation.startPage | 128162 | - |
dc.citation.endPage | 128174 | - |
dc.type.docType | Article | - |
dc.description.isOpenAccess | Y | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalResearchArea | Engineering | - |
dc.relation.journalResearchArea | Telecommunications | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Information Systems | - |
dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
dc.relation.journalWebOfScienceCategory | Telecommunications | - |
dc.subject.keywordAuthor | Cross-modal video-text comprehension | - |
dc.subject.keywordAuthor | dense video captioning | - |
dc.subject.keywordAuthor | event localization in videos | - |
dc.subject.keywordAuthor | fine-tuning for dense captioning | - |
dc.subject.keywordAuthor | natural language processing in videos | - |
dc.subject.keywordAuthor | pretraining | - |
dc.subject.keywordAuthor | pretraining for video understanding | - |
dc.subject.keywordAuthor | video description generation | - |
dc.subject.keywordAuthor | weakly supervised | - |
dc.identifier.url | https://ieeexplore.ieee.org/document/10314490 | - |
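The abstract describes a two-stage training scheme: pretrain the event captioning module on widely available video-clip caption datasets (possible precisely because the weakly supervised setting needs no ground-truth event annotations), then fine-tune it for dense video captioning. The sketch below is not the authors' implementation; `EventCaptioner`, `train_stage`, and the synthetic loaders are hypothetical stand-ins that only illustrate the pretrain-then-fine-tune idea.

```python
# Minimal sketch of the two-stage scheme summarized in the abstract.
# NOT the authors' code: all names and tensors here are hypothetical.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

class EventCaptioner(nn.Module):
    """Toy captioning head: a clip feature conditions a GRU decoder."""
    def __init__(self, feat_dim=512, hidden=512, vocab=1000):
        super().__init__()
        self.encoder = nn.Linear(feat_dim, hidden)   # clip feature -> initial state
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)         # hidden state -> token logits

    def forward(self, clip_feats, cap_embeds):
        # clip_feats: (B, feat_dim); cap_embeds: (B, T, hidden)
        h0 = torch.tanh(self.encoder(clip_feats)).unsqueeze(0)  # (1, B, hidden)
        out, _ = self.decoder(cap_embeds, h0)
        return self.head(out)                        # (B, T, vocab)

def train_stage(model, loader, epochs, lr):
    """One training stage; the same routine serves pretraining and fine-tuning."""
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for feats, cap_in, cap_tgt in loader:
            logits = model(feats, cap_in)
            loss = loss_fn(logits.reshape(-1, logits.size(-1)), cap_tgt.reshape(-1))
            opt.zero_grad()
            loss.backward()
            opt.step()

def toy_loader(n=32, t=12, feat_dim=512, hidden=512, vocab=1000):
    """Synthetic stand-in for (a) a video-clip caption dataset and
    (b) a dense-video-captioning dataset without ground-truth event times."""
    feats = torch.randn(n, feat_dim)
    cap_in = torch.randn(n, t, hidden)            # pre-embedded caption tokens
    cap_tgt = torch.randint(0, vocab, (n, t))
    return DataLoader(TensorDataset(feats, cap_in, cap_tgt), batch_size=8)

model = EventCaptioner()
# Stage 1: pretrain on clip-caption pairs; no event boundaries are needed,
# which is what makes this compatible with the weakly supervised setting.
train_stage(model, toy_loader(), epochs=2, lr=1e-4)
# Stage 2: fine-tune on dense video captioning data at a lower learning rate.
train_stage(model, toy_loader(), epochs=1, lr=1e-5)
```

Reusing one training routine for both stages mirrors the abstract's point that the same event captioning module is pretrained on clip-caption pairs and later fine-tuned for dense video captioning.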