SIFNet: Free-form image inpainting using color split-inpaint-fuse approach
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Uddin, S. M. Nadim | - |
dc.contributor.author | Jung, Yong Ju | - |
dc.date.accessioned | 2022-07-19T02:40:27Z | - |
dc.date.available | 2022-07-19T02:40:27Z | - |
dc.date.created | 2022-07-19 | - |
dc.date.issued | 2022-08 | - |
dc.identifier.issn | 1077-3142 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/gachon/handle/2020.sw.gachon/84995 | - |
dc.description.abstract | Recent deep learning-based approaches have shown outstanding performance in generating visually plausible and refined contents for the missing regions in free-form image inpainting tasks. However, most of the existing methods employ a coarse-to-refine approach in which the refinement process depends on a single coarse estimation, often leading to texture and structure inconsistencies. Though several existing methods focus on incorporating additional inputs to mitigate this problem, no learning-based studies have investigated the effects of decomposing the input corrupted image into luma and chroma images and performing decoupled inpainting of the decomposed components. To this end, we propose a Split-Inpaint-Fuse Network (SIFNet), an end-to-end two-stage inpainting approach that uses a split-inpaint sub-network for separately inpainting the corrupted luma and chroma images using two decoupled branches in the coarse stage, and a fusion sub-network for fusing the inpainted luma and chroma images into a refined image in the refinement stage. Additionally, we propose two attention mechanisms for the coarse stage: a progressive context module to find the patch-level feature similarity for the luma image reconstruction, and a spatial-channel context module to find important spatial and channel features for the chroma image reconstruction. Experimental results reveal that our Split-Inpaint-Fuse approach outperforms the existing inpainting methods in comparative evaluations. In addition, extensive ablation studies confirm the effectiveness of the proposed approach, its constituent modules, and its architectural choices. | - |
dc.language | English | - |
dc.language.iso | en | - |
dc.publisher | ACADEMIC PRESS INC ELSEVIER SCIENCE | - |
dc.relation.isPartOf | COMPUTER VISION AND IMAGE UNDERSTANDING | - |
dc.title | SIFNet: Free-form image inpainting using color split-inpaint-fuse approach | - |
dc.type | Article | - |
dc.type.rims | ART | - |
dc.description.journalClass | 1 | - |
dc.identifier.wosid | 000809863400003 | - |
dc.identifier.doi | 10.1016/j.cviu.2022.103446 | - |
dc.identifier.bibliographicCitation | COMPUTER VISION AND IMAGE UNDERSTANDING, v.221 | - |
dc.description.isOpenAccess | N | - |
dc.identifier.scopusid | 2-s2.0-85131243121 | - |
dc.citation.title | COMPUTER VISION AND IMAGE UNDERSTANDING | - |
dc.citation.volume | 221 | - |
dc.contributor.affiliatedAuthor | Uddin, S. M. Nadim | - |
dc.contributor.affiliatedAuthor | Jung, Yong Ju | - |
dc.type.docType | Article | - |
dc.subject.keywordAuthor | Image inpainting | - |
dc.subject.keywordAuthor | Convolutional neural network | - |
dc.subject.keywordAuthor | Generative adversarial networks | - |
dc.subject.keywordAuthor | Attention mechanisms | - |
dc.subject.keywordAuthor | Color space decomposition | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalResearchArea | Engineering | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Artificial Intelligence | - |
dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
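The abstract describes splitting a corrupted RGB image into luma and chroma components before inpainting each in a separate branch, then fusing the results. The exact color transform is not specified in this record; the sketch below illustrates the split and fuse steps using the standard ITU-R BT.601 YCbCr conversion as an assumed decomposition (function names `split_luma_chroma` and `fuse_luma_chroma` are illustrative, not from the paper).

```python
import numpy as np

def split_luma_chroma(rgb):
    """Split an RGB image (H, W, 3), float in [0, 1], into a luma channel
    and a 2-channel chroma image using the ITU-R BT.601 YCbCr transform.
    This is an assumed decomposition; the paper may use a different one."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b                 # luma
    cb = 0.5 - 0.168736 * r - 0.331264 * g + 0.5 * b      # blue-difference chroma
    cr = 0.5 + 0.5 * r - 0.418688 * g - 0.081312 * b      # red-difference chroma
    return y, np.stack([cb, cr], axis=-1)

def fuse_luma_chroma(y, chroma):
    """Inverse transform: recombine luma and chroma back into RGB.
    In SIFNet the fusion is learned; here it is the analytic inverse."""
    cb = chroma[..., 0] - 0.5
    cr = chroma[..., 1] - 0.5
    r = y + 1.402 * cr
    g = y - 0.344136 * cb - 0.714136 * cr
    b = y + 1.772 * cb
    return np.clip(np.stack([r, g, b], axis=-1), 0.0, 1.0)
```

In the described pipeline, the two outputs of `split_luma_chroma` would each pass through their own coarse inpainting branch before a learned fusion sub-network (rather than the analytic inverse above) produces the refined image.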