Domain-Adaptive Vision Transformers for Generalizing Across Visual Domains
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Cho, Yunsung | - |
dc.contributor.author | Yun, Jungmin | - |
dc.contributor.author | Kwon, JuneHyoung | - |
dc.contributor.author | Kim, Youngbin | - |
dc.date.accessioned | 2023-12-01T01:41:17Z | - |
dc.date.available | 2023-12-01T01:41:17Z | - |
dc.date.issued | 2023 | - |
dc.identifier.issn | 2169-3536 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/cau/handle/2019.sw.cau/68695 | - |
dc.description.abstract | Deep-learning models often struggle to generalize well to unseen domains because of the distribution shift between the training and real-world data. Domain generalization aims to train models that can acquire general features from data across different domains, thereby improving the performance on unseen domains. Inspired by the glance-and-gaze approach, which mimics the way humans perceive the real world, we introduce the domain-adaptive vision transformer (DA-ViT) model, which adopts a human cognitive perspective for domain generalization. We merge glance and gaze blocks to initially capture general information from each block and subsequently acquire more detailed and focused information. Unlike previous methods that predominantly employ convolutional neural networks, we adapted the ViT model to learn features that are robust across different visual domains. DA-ViT is pretrained on the ImageNet-1K dataset and designed to adaptively learn features that are generalizable across various visual domains. We evaluated our adapted model for domain generalization and demonstrated that it outperforms the ResNet50 model based on non-ensemble algorithms by 0.7%p on the VLCS benchmark dataset. Our proposed model introduces a new approach for domain generalization that leverages the capabilities of vision transformers to adapt effectively to diverse visual domains. | - |
dc.format.extent | 10 | - |
dc.language | English | - |
dc.language.iso | ENG | - |
dc.publisher | Institute of Electrical and Electronics Engineers Inc. | - |
dc.title | Domain-Adaptive Vision Transformers for Generalizing Across Visual Domains | - |
dc.type | Article | - |
dc.identifier.doi | 10.1109/ACCESS.2023.3324545 | - |
dc.identifier.bibliographicCitation | IEEE Access, v.11, pp 115644 - 115653 | - |
dc.description.isOpenAccess | Y | - |
dc.identifier.wosid | 001091400900001 | - |
dc.identifier.scopusid | 2-s2.0-85174805355 | - |
dc.citation.endPage | 115653 | - |
dc.citation.startPage | 115644 | - |
dc.citation.title | IEEE Access | - |
dc.citation.volume | 11 | - |
dc.type.docType | Article | - |
dc.publisher.location | United States | - |
dc.subject.keywordAuthor | Adaptation models | - |
dc.subject.keywordAuthor | cross-attention-based ViT | - |
dc.subject.keywordAuthor | Data models | - |
dc.subject.keywordAuthor | Domain generalization | - |
dc.subject.keywordAuthor | glance and gaze | - |
dc.subject.keywordAuthor | human cognitive approach | - |
dc.subject.keywordAuthor | masked ViT | - |
dc.subject.keywordAuthor | Representation learning | - |
dc.subject.keywordAuthor | Task analysis | - |
dc.subject.keywordAuthor | Training | - |
dc.subject.keywordAuthor | Transformers | - |
dc.subject.keywordAuthor | Visualization | - |
dc.subject.keywordAuthor | ViT | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalResearchArea | Engineering | - |
dc.relation.journalResearchArea | Telecommunications | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Information Systems | - |
dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
dc.relation.journalWebOfScienceCategory | Telecommunications | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
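
The abstract above describes merging a "glance" pathway (coarse, global context) with a "gaze" pathway (detailed, focused attention) inside a vision transformer. The following is a minimal illustrative sketch of that general idea, not the DA-ViT architecture itself: the block name, the pooling-based coarse token set, the cross-attention/self-attention pairing, and the additive fusion are all assumptions made for exposition.

```python
# Illustrative sketch only: a toy "glance" (global, coarse) + "gaze" (local, detailed)
# transformer block. The layer layout, pooling ratio, and additive fusion are
# assumptions for exposition; they are NOT taken from the DA-ViT paper.
import torch
import torch.nn as nn


class GlanceGazeBlock(nn.Module):
    def __init__(self, dim=384, heads=6, pool=4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        # "Glance": cross-attention from all tokens to a pooled (coarse) token set.
        self.pool = nn.AvgPool1d(kernel_size=pool, stride=pool)
        self.glance_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # "Gaze": full self-attention over the original tokens for fine detail.
        self.gaze_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x):  # x: (batch, tokens, dim)
        h = self.norm1(x)
        # Shorten the token axis to obtain a coarse "glance" summary of the image.
        coarse = self.pool(h.transpose(1, 2)).transpose(1, 2)
        glance, _ = self.glance_attn(h, coarse, coarse)  # global context
        gaze, _ = self.gaze_attn(h, h, h)                # detailed context
        x = x + glance + gaze                            # merge the two views
        return x + self.mlp(self.norm2(x))


if __name__ == "__main__":
    tokens = torch.randn(2, 196, 384)       # e.g. 14x14 patch tokens from a ViT encoder
    print(GlanceGazeBlock()(tokens).shape)  # torch.Size([2, 196, 384])
```

In this sketch the coarse tokens simply come from average pooling; in practice any token-reduction or masking scheme could play the "glance" role, and the paper's keywords (masked ViT, cross-attention-based ViT) suggest the authors use more specialized mechanisms than the ones shown here.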