Visual context embeddings for zero-shot recognition
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Cho, Gunhee | - |
dc.contributor.author | Choi, Yong Suk | - |
dc.date.accessioned | 2022-07-06T04:12:10Z | - |
dc.date.available | 2022-07-06T04:12:10Z | - |
dc.date.created | 2022-06-03 | - |
dc.date.issued | 2022-04 | - |
dc.identifier.issn | 0000-0000 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/hanyang/handle/2021.sw.hanyang/138798 | - |
dc.description.abstract | Existing word embeddings perform well in various downstream tasks, but because they are learned from text corpora, they can be biased towards the text domain. When word embeddings are used for the Zero-Shot Recognition (ZSR) task, the task becomes a mapping problem between two completely different heterogeneous domains, a low-level visual feature domain and a word-embedding domain, and this text bias makes the mapping function hard to learn. However, if the context of the visual domain can be learned and embedded, the ZSR mapping function converges much more easily, because it only needs to map between domains that are more strongly correlated with each other. Therefore, in this paper, we propose a new methodology for embedding the context contained in the visual domain using the annotation information collected from the image dataset. In addition, to utilize these annotations for embedding, we propose a new distance formula that measures the contextual distance between the bounding boxes of objects. Finally, experiments on two datasets verify that the embeddings learned by our new methodology perform well when applied to ZSR. | - |
dc.language | English | - |
dc.language.iso | en | - |
dc.publisher | Association for Computing Machinery | - |
dc.title | Visual context embeddings for zero-shot recognition | - |
dc.type | Article | - |
dc.contributor.affiliatedAuthor | Choi, Yong Suk | - |
dc.identifier.doi | 10.1145/3477314.3507071 | - |
dc.identifier.scopusid | 2-s2.0-85130354278 | - |
dc.identifier.wosid | 000946564100142 | - |
dc.identifier.bibliographicCitation | Proceedings of the ACM Symposium on Applied Computing, pp.1039 - 1047 | - |
dc.relation.isPartOf | Proceedings of the ACM Symposium on Applied Computing | - |
dc.citation.title | Proceedings of the ACM Symposium on Applied Computing | - |
dc.citation.startPage | 1039 | - |
dc.citation.endPage | 1047 | - |
dc.type.rims | ART | - |
dc.type.docType | Proceedings Paper | - |
dc.description.journalClass | 1 | - |
dc.description.isOpenAccess | N | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Interdisciplinary Applications | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Theory & Methods | - |
dc.subject.keywordPlus | Computer vision | - |
dc.subject.keywordPlus | Mapping | - |
dc.subject.keywordPlus | Semantics | - |
dc.subject.keywordPlus | Down-stream | - |
dc.subject.keywordPlus | Embeddings | - |
dc.subject.keywordPlus | Image datasets | - |
dc.subject.keywordPlus | Learn+ | - |
dc.subject.keywordPlus | Mapping functions | - |
dc.subject.keywordPlus | Semantic embedding | - |
dc.subject.keywordPlus | Text corpora | - |
dc.subject.keywordPlus | Visual context | - |
dc.subject.keywordPlus | Zero-shot learning | - |
dc.subject.keywordPlus | Zero-shot recognition | - |
dc.subject.keywordAuthor | semantic embeddings | - |
dc.subject.keywordAuthor | zero-shot learning | - |
dc.subject.keywordAuthor | zero-shot recognition | - |
dc.identifier.url | https://dl.acm.org/doi/10.1145/3477314.3507071 | - |
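The abstract above mentions a distance formula for measuring the contextual distance between annotated bounding boxes, but the record does not reproduce it. The following is a minimal, hypothetical Python sketch of that general idea (a center-to-center distance normalized by the image diagonal); the function name, the formula, and the example boxes are assumptions for illustration only, not the authors' actual method.

```python
import math

def box_center(box):
    # box = (x_min, y_min, x_max, y_max) in pixel coordinates
    x_min, y_min, x_max, y_max = box
    return ((x_min + x_max) / 2.0, (y_min + y_max) / 2.0)

def contextual_distance(box_a, box_b, image_width, image_height):
    """Hypothetical contextual distance between two annotated bounding boxes.

    Boxes whose centers are close relative to the image size get a small
    distance, which a context-embedding model could treat as a strong
    co-occurrence signal, analogous to a small context window in word
    embeddings. This is an illustrative stand-in, not the paper's formula.
    """
    (ax, ay), (bx, by) = box_center(box_a), box_center(box_b)
    diag = math.hypot(image_width, image_height)          # image diagonal
    return math.hypot(ax - bx, ay - by) / diag             # normalized distance

# Example: two objects annotated in a 640x480 image.
person = (100, 120, 220, 400)
dog = (230, 300, 330, 420)
print(contextual_distance(person, dog, 640, 480))  # ≈ 0.2, fairly close in context
```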
Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.