Background-Aware Robust Context Learning for Weakly-Supervised Temporal Action Localization
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Kim, Jinah | - |
dc.contributor.author | Cho, Jungchan | - |
dc.date.accessioned | 2022-08-29T03:40:07Z | - |
dc.date.available | 2022-08-29T03:40:07Z | - |
dc.date.created | 2022-08-19 | - |
dc.date.issued | 2022-06 | - |
dc.identifier.issn | 2169-3536 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/gachon/handle/2020.sw.gachon/85325 | - |
dc.description.abstract | Weakly supervised temporal action localization (WTAL) aims to localize temporal intervals of actions in an untrimmed video using only video-level action labels. Although learning the background is an important issue in WTAL, most previous studies have not exploited the background effectively. In this study, we propose a novel method for robustly separating contexts, e.g., action-like background, from the foreground to localize action intervals more accurately. First, we detect background segments based on their probabilities to minimize the impact of background estimation errors. Second, we define the entropy boundary of the foreground and a positive distance between the boundary and the background entropy. The background probability and entropy boundary allow the segment-level classifier to learn the background robustly. Third, we improve the performance of the overall actionness model based on a consensus of the RGB and flow features. The results of extensive experiments demonstrate that the proposed method learns the context separately from the action, consequently achieving new state-of-the-art results on the THUMOS-14 and ActivityNet-1.2 benchmarks. We also confirm that feature adaptation helps overcome the limitations of a pretrained feature extractor on datasets that contain many background segments, such as THUMOS-14. | - |
dc.language | English | - |
dc.language.iso | en | - |
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | - |
dc.relation.isPartOf | IEEE ACCESS | - |
dc.title | Background-Aware Robust Context Learning for Weakly-Supervised Temporal Action Localization | - |
dc.type | Article | - |
dc.type.rims | ART | - |
dc.description.journalClass | 1 | - |
dc.identifier.wosid | 000815511400001 | - |
dc.identifier.doi | 10.1109/ACCESS.2022.3183789 | - |
dc.identifier.bibliographicCitation | IEEE ACCESS, v.10, pp.65315 - 65325 | - |
dc.description.isOpenAccess | Y | - |
dc.identifier.scopusid | 2-s2.0-85132726685 | - |
dc.citation.endPage | 65325 | - |
dc.citation.startPage | 65315 | - |
dc.citation.title | IEEE ACCESS | - |
dc.citation.volume | 10 | - |
dc.contributor.affiliatedAuthor | Kim, Jinah | - |
dc.contributor.affiliatedAuthor | Cho, Jungchan | - |
dc.type.docType | Article | - |
dc.subject.keywordAuthor | Entropy | - |
dc.subject.keywordAuthor | Feature extraction | - |
dc.subject.keywordAuthor | Location awareness | - |
dc.subject.keywordAuthor | Context modeling | - |
dc.subject.keywordAuthor | Annotations | - |
dc.subject.keywordAuthor | Training | - |
dc.subject.keywordAuthor | Reliability | - |
dc.subject.keywordAuthor | Temporal action localization | - |
dc.subject.keywordAuthor | entropy maximization | - |
dc.subject.keywordAuthor | context learning | - |
dc.subject.keywordAuthor | feature adaptation | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalResearchArea | Engineering | - |
dc.relation.journalResearchArea | Telecommunications | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Information Systems | - |
dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
dc.relation.journalWebOfScienceCategory | Telecommunications | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
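The abstract describes weighting segments by a background probability and enforcing that background segments have high class-distribution entropy relative to a foreground "entropy boundary". The paper's exact equations are not given in this record, so the following is only an illustrative sketch of such an entropy-margin loss; the function names, the weighted estimate of the boundary, and the hinge form are assumptions, not the authors' formulation.

```python
import numpy as np

def entropy(p, eps=1e-8):
    # Shannon entropy (in nats) of each categorical distribution along the last axis
    return -np.sum(p * np.log(p + eps), axis=-1)

def background_entropy_margin_loss(seg_probs, bg_prob, margin=0.5):
    """Illustrative loss (assumption, not the paper's equation):
    segments with high background probability are pushed to have
    class-distribution entropy at least `margin` above a boundary
    estimated from foreground-weighted segments."""
    h = entropy(seg_probs)            # (T,) entropy per temporal segment
    fg_w = 1.0 - bg_prob              # foreground weight per segment
    # Entropy "boundary": foreground-weighted mean entropy
    boundary = np.sum(fg_w * h) / (np.sum(fg_w) + 1e-8)
    # Hinge: background entropy should exceed boundary + margin
    deficit = np.maximum(0.0, boundary + margin - h)
    # Weight the penalty by each segment's background probability
    return np.sum(bg_prob * deficit) / (np.sum(bg_prob) + 1e-8)

# Toy example: 4 segments, 3 action classes (hypothetical values)
probs = np.array([[0.90, 0.05, 0.05],   # confident action -> low entropy
                  [0.80, 0.10, 0.10],
                  [0.40, 0.30, 0.30],   # ambiguous, background-like
                  [0.34, 0.33, 0.33]])
bg = np.array([0.1, 0.2, 0.8, 0.9])     # estimated background probabilities
loss = background_entropy_margin_loss(probs, bg)
print(loss)
```

Confident action segments (low entropy, low background probability) contribute little to the loss, while low-entropy segments flagged as background would be penalized, which is the separation behavior the abstract attributes to the background probability and entropy boundary.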