Enriched CNN-Transformer Feature Aggregation Networks for Super-Resolution
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Yoo, Jinsu | - |
dc.contributor.author | Kim, Taehoon | - |
dc.contributor.author | Lee, Sihaeng | - |
dc.contributor.author | Kim, Seung Hwan | - |
dc.contributor.author | Lee, Honglak | - |
dc.contributor.author | Kim, Tae Hyun | - |
dc.date.accessioned | 2023-03-13T07:21:08Z | - |
dc.date.available | 2023-03-13T07:21:08Z | - |
dc.date.created | 2023-03-08 | - |
dc.date.issued | 2023-01 | - |
dc.identifier.issn | 0000-0000 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/hanyang/handle/2021.sw.hanyang/182538 | - |
dc.description.abstract | Recent transformer-based super-resolution (SR) methods have achieved promising results against conventional CNN-based methods. However, these approaches suffer from an inherent shortsightedness caused by relying solely on standard self-attention-based reasoning. In this paper, we introduce an effective hybrid SR network to aggregate enriched features, including local features from CNNs and long-range multi-scale dependencies captured by transformers. Specifically, our network comprises transformer and convolutional branches, which synergistically complement each representation during the restoration procedure. Furthermore, we propose a cross-scale token attention module, allowing the transformer branch to efficiently exploit the informative relationships among tokens across different scales. Our proposed method achieves state-of-the-art SR results on numerous benchmark datasets. | - |
dc.language | English | - |
dc.language.iso | en | - |
dc.publisher | Institute of Electrical and Electronics Engineers Inc. | - |
dc.title | Enriched CNN-Transformer Feature Aggregation Networks for Super-Resolution | - |
dc.type | Article | - |
dc.contributor.affiliatedAuthor | Kim, Tae Hyun | - |
dc.identifier.doi | 10.1109/WACV56688.2023.00493 | - |
dc.identifier.scopusid | 2-s2.0-85149001703 | - |
dc.identifier.wosid | 000971500205006 | - |
dc.identifier.bibliographicCitation | Proceedings - 2023 IEEE Winter Conference on Applications of Computer Vision, WACV 2023, pp.4945 - 4954 | - |
dc.relation.isPartOf | Proceedings - 2023 IEEE Winter Conference on Applications of Computer Vision, WACV 2023 | - |
dc.citation.title | Proceedings - 2023 IEEE Winter Conference on Applications of Computer Vision, WACV 2023 | - |
dc.citation.startPage | 4945 | - |
dc.citation.endPage | 4954 | - |
dc.type.rims | ART | - |
dc.type.docType | Proceedings Paper | - |
dc.description.journalClass | 1 | - |
dc.description.isOpenAccess | N | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalResearchArea | Engineering | - |
dc.relation.journalResearchArea | Imaging Science & Photographic Technology | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Artificial Intelligence | - |
dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
dc.relation.journalWebOfScienceCategory | Imaging Science & Photographic Technology | - |
dc.subject.keywordPlus | Computer vision | - |
dc.subject.keywordPlus | Optical resolving power | - |
dc.subject.keywordPlus | Color photography | - |
dc.subject.keywordPlus | Aggregation network | - |
dc.subject.keywordPlus | Algorithm: computational photography | - |
dc.subject.keywordPlus | Computational photography | - |
dc.subject.keywordPlus | Feature aggregation | - |
dc.subject.keywordPlus | Image synthesis | - |
dc.subject.keywordPlus | Low-level and physics-based vision | - |
dc.subject.keywordPlus | Physics based vision | - |
dc.subject.keywordPlus | Superresolution | - |
dc.subject.keywordPlus | Superresolution methods | - |
dc.subject.keywordPlus | Video synthesis | - |
dc.subject.keywordAuthor | Algorithms: Computational photography | - |
dc.subject.keywordAuthor | image and video synthesis | - |
dc.subject.keywordAuthor | Low-level and physics-based vision | - |
dc.identifier.url | https://ieeexplore.ieee.org/document/10030797 | - |
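The abstract describes a cross-scale token attention module in which the transformer branch relates tokens across different spatial scales. The sketch below illustrates one plausible reading of that idea — fine-scale tokens attending to coarse-scale tokens via standard scaled dot-product attention — using NumPy. All names (`cross_scale_token_attention`, the random projection weights, the token-grid sizes) are hypothetical illustrations, not the paper's actual implementation, which also includes a complementary CNN branch not modeled here.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_scale_token_attention(fine, coarse, d):
    """Hypothetical sketch: fine-scale tokens (queries) attend to
    coarse-scale tokens (keys/values), letting each fine token pull in
    context from a wider receptive field. Projection weights are random
    here; a real model would learn them."""
    rng = np.random.default_rng(0)
    Wq = rng.standard_normal((fine.shape[-1], d)) / np.sqrt(d)
    Wk = rng.standard_normal((coarse.shape[-1], d)) / np.sqrt(d)
    Wv = rng.standard_normal((coarse.shape[-1], d)) / np.sqrt(d)
    Q, K, V = fine @ Wq, coarse @ Wk, coarse @ Wv
    attn = softmax(Q @ K.T / np.sqrt(d))  # (n_fine, n_coarse) weights
    return attn @ V                       # aggregated coarse context per fine token

# Toy token grids: a 4x4 fine grid and a 2x2 coarse grid, 32-dim tokens.
fine = np.random.default_rng(1).standard_normal((16, 32))
coarse = np.random.default_rng(2).standard_normal((4, 32))
out = cross_scale_token_attention(fine, coarse, d=32)
print(out.shape)  # (16, 32)
```

Because the coarse grid has far fewer tokens, the attention matrix is only `n_fine × n_coarse`, which is the kind of efficiency a cross-scale design can offer over full self-attention among fine tokens.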