Detailed Information


Infrared and visible image fusion using a feature attention guided perceptual generative adversarial network

Authors
Chen, Yunfan; Zheng, Wenqi; Shin, Hyunchul
Issue Date
Sep-2023
Publisher
Springer Science and Business Media Deutschland GmbH
Keywords
Deep learning; Feature extraction; Image fusion; Image processing
Citation
Journal of Ambient Intelligence and Humanized Computing, v.14, no.7, pp. 9099-9112
Indexed
SCOPUS
Journal Title
Journal of Ambient Intelligence and Humanized Computing
Volume
14
Number
7
Start Page
9099
End Page
9112
URI
https://scholarworks.bwise.kr/erica/handle/2021.sw.erica/115132
DOI
10.1007/s12652-022-04414-7
ISSN
1868-5137
Abstract
In recent years, the performance of infrared and visible image fusion has improved dramatically thanks to deep learning techniques. However, the results are still not fully satisfactory: fused images frequently suffer from blurred details, unenhanced vital regions, and artifacts. To resolve these problems, we have developed a novel feature attention-guided perceptual generative adversarial network (FAPGAN) for fusing infrared and visible images. In FAPGAN, a feature attention module is incorporated into the generator so that the fused image preserves detailed information while highlighting the vital regions of the source images. The feature attention module consists of a spatial attention part and a pixel attention part: spatial attention enhances the vital regions, while pixel attention makes the network focus on high-frequency information so that fine details are retained. Furthermore, we introduce a perceptual loss, combined with the adversarial loss and a content loss, to optimize the generator. The perceptual loss encourages the fused image to resemble the source infrared image at the semantic level, which not only preserves the vital targets and detailed information of the infrared image but also removes halo artifacts by reducing the semantic discrepancy. Experimental results on public datasets demonstrate that FAPGAN is superior to state-of-the-art approaches in both subjective visual quality and objective assessment. © 2022, The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature.
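
The abstract describes a feature attention module built from a spatial attention part (to emphasize vital regions) and a pixel attention part (to focus on high-frequency detail). The paper's exact layer configuration is not given here, so the following is a minimal PyTorch sketch of that two-part design; the kernel sizes, channel reduction, and module composition are illustrative assumptions, not the authors' architecture.

```python
# Hedged sketch of a spatial + pixel feature attention module (assumptions:
# PyTorch, channels >= 2; layer sizes are illustrative, not from the paper).
import torch
import torch.nn as nn


class SpatialAttention(nn.Module):
    """Predicts a per-pixel weight map to emphasize vital spatial regions."""

    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        # Pool across channels (mean and max), then map to a sigmoid mask.
        avg = torch.mean(x, dim=1, keepdim=True)
        mx, _ = torch.max(x, dim=1, keepdim=True)
        mask = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * mask


class PixelAttention(nn.Module):
    """Per-pixel, per-channel gating intended to retain high-frequency detail."""

    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels // 2, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 2, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.body(x)


class FeatureAttention(nn.Module):
    """Spatial attention followed by pixel attention, per the abstract."""

    def __init__(self, channels: int):
        super().__init__()
        self.spatial = SpatialAttention()
        self.pixel = PixelAttention(channels)

    def forward(self, x):
        return self.pixel(self.spatial(x))
```

In a generator, such a module would typically be inserted after a convolutional feature-extraction stage, e.g. `feat = FeatureAttention(64)(encoder(torch.cat([ir, vis], dim=1)))`; the sequential ordering of spatial then pixel attention is one plausible choice, not confirmed by the source.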
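The abstract also states that the generator is optimized with a perceptual loss combined with an adversarial loss and a content loss, where the perceptual term pulls the fused image toward the infrared image at the semantic level. The sketch below illustrates one common way to realize such a combined objective; the VGG-16 feature extractor, the L1 content term against both source images, and the loss weights are all assumptions for illustration, not the paper's exact formulation.

```python
# Hedged sketch of a combined adversarial + content + perceptual generator
# loss (assumptions: VGG-16 features, L1 content term, 3-channel inputs,
# illustrative lambda weights).
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

# Frozen pretrained feature extractor for the perceptual term.
_vgg = vgg16(weights="IMAGENET1K_V1").features[:16].eval()
for p in _vgg.parameters():
    p.requires_grad_(False)


def perceptual_loss(fused, infrared):
    # Compare deep features so the fused image matches the infrared image
    # at the semantic level (grayscale inputs would need channel repetition).
    return F.mse_loss(_vgg(fused), _vgg(infrared))


def generator_loss(d_fake, fused, infrared, visible,
                   lam_content=100.0, lam_percep=1.0):
    # Adversarial term: generator tries to make the discriminator output 1.
    adv = F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
    # Content term: keep pixel-level fidelity to both source images.
    content = F.l1_loss(fused, infrared) + F.l1_loss(fused, visible)
    # Perceptual term: semantic similarity to the infrared image.
    percep = perceptual_loss(fused, infrared)
    return adv + lam_content * content + lam_percep * percep
```

The relative weights would be tuned per dataset; the abstract only establishes that the three terms are combined, not their balance.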
Appears in Collections
COLLEGE OF ENGINEERING SCIENCES > SCHOOL OF ELECTRICAL ENGINEERING > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.
