Hint-Based Image Colorization Based on Hierarchical Vision Transformer (Open Access)
- Authors
- Lee, Subin; Jung, Yong Ju
- Issue Date
- Oct-2022
- Publisher
- MDPI
- Keywords
- image colorization; vision transformer; attention map; deep learning
- Citation
- SENSORS, v.22, no.19
- Journal Title
- SENSORS
- Volume
- 22
- Number
- 19
- URI
- https://scholarworks.bwise.kr/gachon/handle/2020.sw.gachon/86026
- DOI
- 10.3390/s22197419
- ISSN
- 1424-8220
- Abstract
- Hint-based image colorization is an image-to-image translation task that aims to create a full-color image from an input luminance image when a small set of color values for some pixels is given as hints. Although traditional deep-learning-based methods have been proposed in the literature, they are based on convolutional neural networks (CNNs), whose convolution operations impose strong spatial locality. This often causes non-trivial visual artifacts in the colorization results, such as false colors and color bleeding. To overcome this limitation, this study proposes a vision transformer-based colorization network. The proposed hint-based colorization network has a hierarchical vision transformer architecture in the form of an encoder-decoder structure built from transformer blocks. Because the transformer blocks can learn rich long-range dependencies, the proposed method achieves visually plausible colorization results even with a small number of color hints. The verification experiments reveal that the proposed transformer model outperforms conventional CNN-based models. In addition, we qualitatively analyze the effect of the transformer model's long-range dependency on hint-based image colorization. (A minimal illustrative sketch of such a hint-based transformer colorizer is given after this record.)
- Files in This Item
- There are no files associated with this item.
- Appears in Collections
- College of IT Convergence > Department of Software > 1. Journal Articles
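
To make the architecture described in the abstract concrete, the following is a minimal sketch of a hint-based colorization forward pass using a hierarchical transformer encoder-decoder in PyTorch. All module names, layer counts, and channel sizes (e.g., `TransformerStage`, `HintColorizer`, `dims=(64, 128, 256)`) are illustrative assumptions for exposition, not the authors' implementation; consult the paper (DOI above) for the actual network.

```python
# Illustrative sketch only: a hierarchical transformer encoder-decoder that maps
# a luminance image plus sparse color hints to dense chrominance. Sizes and
# module names are assumptions, not the published architecture.
import torch
import torch.nn as nn


class TransformerStage(nn.Module):
    """Patch merging (2x downsample) followed by transformer blocks over tokens."""

    def __init__(self, in_ch, out_ch, depth=2, heads=4):
        super().__init__()
        self.down = nn.Conv2d(in_ch, out_ch, kernel_size=2, stride=2)
        layer = nn.TransformerEncoderLayer(
            d_model=out_ch, nhead=heads, dim_feedforward=out_ch * 4,
            batch_first=True, norm_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, x):
        x = self.down(x)                        # B, C, H, W
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)   # B, H*W, C: global self-attention
        tokens = self.blocks(tokens)
        return tokens.transpose(1, 2).reshape(b, c, h, w)


class HintColorizer(nn.Module):
    """Maps (luminance, sparse ab hints, hint mask) -> dense ab chrominance."""

    def __init__(self, dims=(64, 128, 256)):
        super().__init__()
        self.stem = nn.Conv2d(1 + 2 + 1, dims[0], kernel_size=3, padding=1)
        self.enc1 = TransformerStage(dims[0], dims[1])
        self.enc2 = TransformerStage(dims[1], dims[2])
        self.dec = nn.Sequential(
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(dims[2], dims[0], kernel_size=3, padding=1),
            nn.GELU(),
            nn.Conv2d(dims[0], 2, kernel_size=3, padding=1),
            nn.Tanh(),                          # ab channels normalized to [-1, 1]
        )

    def forward(self, luminance, hint_ab, hint_mask):
        x = torch.cat([luminance, hint_ab, hint_mask], dim=1)
        x = self.stem(x)
        x = self.enc1(x)
        x = self.enc2(x)
        return self.dec(x)


if __name__ == "__main__":
    L = torch.rand(1, 1, 64, 64)       # grayscale (luminance) input
    hints = torch.zeros(1, 2, 64, 64)  # sparse ab hint values
    mask = torch.zeros(1, 1, 64, 64)   # 1 where a hint pixel is provided
    hints[:, :, 32, 32] = 0.5
    mask[:, :, 32, 32] = 1.0
    ab = HintColorizer()(L, hints, mask)
    print(ab.shape)                    # torch.Size([1, 2, 64, 64])
```

The point of the sketch is the global self-attention over all spatial tokens at each stage: unlike a convolution's local receptive field, it lets a single color hint influence distant pixels, which is the long-range dependency the abstract credits for plausible colorization from few hints.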