HI-GAN: A hierarchical generative adversarial network for blind denoising of real photographs
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Vo, D.M. | - |
dc.contributor.author | Nguyen, D.M. | - |
dc.contributor.author | Le, T.P. | - |
dc.contributor.author | Lee, Sang-Woong | - |
dc.date.accessioned | 2021-07-14T23:40:23Z | - |
dc.date.available | 2021-07-14T23:40:23Z | - |
dc.date.created | 2021-05-10 | - |
dc.date.issued | 2021-09 | - |
dc.identifier.issn | 0020-0255 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/gachon/handle/2020.sw.gachon/81673 | - |
dc.description.abstract | Although deep convolutional neural networks (DCNNs) and generative adversarial networks (GANs) have achieved remarkable success in image denoising, they face a severe trade-off between removing noise and artifacts on the one hand and preserving details on the other. Compared with conventional DCNNs, GANs can better balance erasing different types of noise against recovering texture details. However, they often generate fake details and unexpected artifacts in the image owing to the instability of their discriminator during training. In this study, we propose a hierarchical generative adversarial network (HI-GAN) that adopts effective solutions to these serious problems of image denoising. Unlike a conventional GAN, the proposed HI-GAN comprises three main generators. The first generator tackles the problem of losing high-frequency features such as edges and texture; it is trained together with the discriminator to improve its ability to preserve essential details. The second generator focuses on eliminating the effect of the instabilities caused by the discriminator and on restoring low-frequency features in the noisy image. The two generators use different criteria to evaluate denoising performance, and neither consistently outperforms the other. A third generator is therefore employed to help them cooperate more effectively and boost reconstruction performance. Moreover, to improve the effectiveness of the generators, we also propose a novel boosted residual dense UNet, designed to maximize the information flow through all convolutional layers in the network. In addition, we propose the AdaRaGAN loss function, which effectively prevents the instability of the discriminator of the HI-GAN and improves the denoising performance. Experimental results on challenging datasets of real-world noisy images show that our proposed method is superior to other state-of-the-art denoisers in terms of quantitative metrics and visual quality. Our source code and datasets for HI-GAN are available at https://github.com/ZeroZero19/HI-GAN.git. © 2021 Elsevier Inc. | - |
dc.language | English | - |
dc.language.iso | en | - |
dc.publisher | ELSEVIER SCIENCE INC | - |
dc.relation.isPartOf | Information Sciences | - |
dc.title | HI-GAN: A hierarchical generative adversarial network for blind denoising of real photographs | - |
dc.type | Article | - |
dc.type.rims | ART | - |
dc.description.journalClass | 1 | - |
dc.identifier.wosid | 000659992900013 | - |
dc.identifier.doi | 10.1016/j.ins.2021.04.045 | - |
dc.identifier.bibliographicCitation | Information Sciences, v.570, pp.225 - 240 | - |
dc.description.isOpenAccess | N | - |
dc.identifier.scopusid | 2-s2.0-85104913953 | - |
dc.citation.endPage | 240 | - |
dc.citation.startPage | 225 | - |
dc.citation.title | Information Sciences | - |
dc.citation.volume | 570 | - |
dc.contributor.affiliatedAuthor | Vo, D.M. | - |
dc.contributor.affiliatedAuthor | Lee, Sang-Woong | - |
dc.type.docType | Article | - |
dc.subject.keywordAuthor | Blind denoising | - |
dc.subject.keywordAuthor | Deep convolutional neural networks | - |
dc.subject.keywordAuthor | Drosophila | - |
dc.subject.keywordAuthor | Fluorescence microscopy images | - |
dc.subject.keywordAuthor | Generative adversarial networks | - |
dc.subject.keywordAuthor | HI-GAN | - |
dc.subject.keywordAuthor | Image denoising | - |
dc.subject.keywordPlus | Convolution | - |
dc.subject.keywordPlus | Economic and social effects | - |
dc.subject.keywordPlus | Fluorescence microscopy | - |
dc.subject.keywordPlus | Image denoising | - |
dc.subject.keywordPlus | Network layers | - |
dc.subject.keywordPlus | Textures | - |
dc.subject.keywordPlus | Adversarial networks | - |
dc.subject.keywordPlus | Blind denoising | - |
dc.subject.keywordPlus | Convolutional neural network | - |
dc.subject.keywordPlus | De-noising | - |
dc.subject.keywordPlus | Drosophila | - |
dc.subject.keywordPlus | Fluorescence microscopy images | - |
dc.subject.keywordPlus | Generative adversarial network | - |
dc.subject.keywordPlus | Hierarchical generative adversarial network | - |
dc.subject.keywordPlus | Noisy image | - |
dc.subject.keywordPlus | Performance | - |
dc.subject.keywordPlus | Deep neural networks | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
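The record names the AdaRaGAN loss as the paper's tool for stabilizing the discriminator but does not spell out its formula. For orientation, here is a minimal NumPy sketch of the standard relativistic average GAN (RaGAN) loss that the AdaRaGAN family builds on; the function names and the plain-NumPy formulation are illustrative assumptions, not code from the paper, and the paper's adaptive variant will differ in detail:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ragan_d_loss(real_logits, fake_logits, eps=1e-12):
    """Relativistic average GAN discriminator loss.

    Instead of judging samples in isolation, each real logit is compared
    against the batch-average fake logit, and vice versa, which tends to
    stabilize discriminator training.
    """
    d_real = sigmoid(real_logits - fake_logits.mean())  # "real more realistic than avg fake"
    d_fake = sigmoid(fake_logits - real_logits.mean())  # "fake more realistic than avg real"
    return -np.mean(np.log(d_real + eps)) - np.mean(np.log(1.0 - d_fake + eps))

def ragan_g_loss(real_logits, fake_logits, eps=1e-12):
    """Generator loss: identical form with the roles of real and fake swapped."""
    return ragan_d_loss(fake_logits, real_logits, eps)
```

When the discriminator separates the two batches well (real logits far above fake ones), `ragan_d_loss` is small; when the batches overlap, it grows, pushing the discriminator to judge samples only relative to the opposite batch rather than in absolute terms.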