Single-Modal Entropy based Active Learning for Visual Question Answering
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Kim, Dong Jin | - |
dc.contributor.author | Cho, Jae Won | - |
dc.contributor.author | Choi, Jinsoo | - |
dc.contributor.author | Jung, Yunjae | - |
dc.contributor.author | Kweon, In So | - |
dc.date.accessioned | 2023-11-14T08:44:26Z | - |
dc.date.available | 2023-11-14T08:44:26Z | - |
dc.date.created | 2023-07-21 | - |
dc.date.issued | 2021-11 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/hanyang/handle/2021.sw.hanyang/192368 | - |
dc.description.abstract | Constructing a large-scale labeled dataset in the real world, especially for high-level tasks (e.g., Visual Question Answering), can be expensive and time-consuming. In addition, with the ever-growing amounts of data and architecture complexity, Active Learning has become an important aspect of computer vision research. In this work, we address Active Learning in the multi-modal setting of Visual Question Answering (VQA). In light of the multi-modal inputs, image and question, we propose a novel method for effective sample acquisition that uses ad hoc single-modal branches for each input to leverage its information. Our mutual-information-based sample acquisition strategy, Single-Modal Entropic Measure (SMEM), together with our self-distillation technique, enables the sample acquisitor to exploit all present modalities and find the most informative samples. Our novel idea is simple to implement, cost-efficient, and readily adaptable to other multi-modal tasks. We confirm our findings on various VQA datasets through state-of-the-art performance in comparison with existing Active Learning baselines. | - |
dc.language | English | - |
dc.language.iso | en | - |
dc.publisher | British Machine Vision Association (BMVA) | - |
dc.title | Single-Modal Entropy based Active Learning for Visual Question Answering | - |
dc.type | Article | - |
dc.contributor.affiliatedAuthor | Kim, Dong Jin | - |
dc.identifier.bibliographicCitation | British Machine Vision Conference, pp.1 - 15 | - |
dc.relation.isPartOf | British Machine Vision Conference | - |
dc.citation.title | British Machine Vision Conference | - |
dc.citation.startPage | 1 | - |
dc.citation.endPage | 15 | - |
dc.type.rims | ART | - |
dc.type.docType | Proceeding | - |
dc.description.journalClass | 3 | - |
dc.description.isOpenAccess | N | - |
dc.description.journalRegisteredClass | other | - |
dc.identifier.url | https://www.bmvc2021-virtualconference.com/assets/papers/0138.pdf | - |
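The abstract describes selecting unlabeled samples whose single-modal branches (image and question) have high predictive entropy. The paper's actual SMEM formulation is in the linked PDF; as a rough illustration only, the sketch below shows a generic entropy-based acquisition step over two single-modal branches, with the weighting parameter `alpha` and the function names being hypothetical choices, not the authors' implementation.

```python
import numpy as np

def predictive_entropy(probs, eps=1e-12):
    """Shannon entropy of each row of an (N, C) matrix of class probabilities."""
    return -np.sum(probs * np.log(probs + eps), axis=1)

def select_by_entropy(image_probs, question_probs, k, alpha=0.5):
    """Score each unlabeled sample by a weighted sum of the entropies of the
    two single-modal branches, then return the indices of the top-k most
    uncertain samples for labeling."""
    scores = (alpha * predictive_entropy(image_probs)
              + (1 - alpha) * predictive_entropy(question_probs))
    return np.argsort(scores)[::-1][:k]

# Three unlabeled samples over two answer classes: sample 0 is the most
# uncertain (near-uniform predictions in both branches), so it is acquired.
image_probs = np.array([[0.5, 0.5], [0.9, 0.1], [0.99, 0.01]])
question_probs = np.array([[0.6, 0.4], [0.8, 0.2], [0.95, 0.05]])
picked = select_by_entropy(image_probs, question_probs, k=1)
```

In this toy example `picked` is `[0]`, since uniform predictions maximize entropy.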