A New Deep Learning Based Multi-Spectral Image Fusion Method
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Piao, Jingchun | - |
dc.contributor.author | Chen, Yunfan | - |
dc.contributor.author | Shin, Hyunchul | - |
dc.date.accessioned | 2021-06-22T10:02:23Z | - |
dc.date.available | 2021-06-22T10:02:23Z | - |
dc.date.issued | 2019-06 | - |
dc.identifier.issn | 1099-4300 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/erica/handle/2021.sw.erica/2900 | - |
dc.description.abstract | In this paper, we present a new, effective infrared (IR) and visible (VIS) image fusion method based on a deep neural network. In our method, a Siamese convolutional neural network (CNN) automatically generates a weight map that represents the per-pixel saliency of a pair of source images. The CNN serves to automatically encode an image into a feature domain for classification. With the proposed method, the two key problems in image fusion, activity-level measurement and fusion-rule design, are solved jointly in one shot. The fusion is carried out through multi-scale image decomposition based on the wavelet transform, and the reconstructed result is more perceptually faithful to the human visual system. In addition, the visual effectiveness of the proposed fusion method is evaluated by comparing pedestrian detection results with those of other methods, using the YOLOv3 object detector on a public benchmark dataset. The experimental results show that the proposed method is competitive in terms of both quantitative assessment and visual quality. | - |
dc.language | English | - |
dc.language.iso | ENG | - |
dc.publisher | MDPI | - |
dc.title | A New Deep Learning Based Multi-Spectral Image Fusion Method | - |
dc.type | Article | - |
dc.publisher.location | Switzerland | - |
dc.identifier.doi | 10.3390/e21060570 | - |
dc.identifier.scopusid | 2-s2.0-85068034928 | - |
dc.identifier.wosid | 000475304200031 | - |
dc.identifier.bibliographicCitation | ENTROPY, v.21, no.6 | - |
dc.citation.title | ENTROPY | - |
dc.citation.volume | 21 | - |
dc.citation.number | 6 | - |
dc.type.docType | Article | - |
dc.description.isOpenAccess | N | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Physics | - |
dc.relation.journalWebOfScienceCategory | Physics, Multidisciplinary | - |
dc.subject.keywordPlus | TRANSFORM | - |
dc.subject.keywordPlus | PERFORMANCE | - |
dc.subject.keywordAuthor | image fusion | - |
dc.subject.keywordAuthor | visible | - |
dc.subject.keywordAuthor | infrared | - |
dc.subject.keywordAuthor | convolutional neural network | - |
dc.subject.keywordAuthor | Siamese network | - |
dc.identifier.url | https://www.mdpi.com/1099-4300/21/6/570 | - |
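The abstract describes a pipeline in which a saliency weight map guides fusion of the low-frequency band of a wavelet decomposition, while high-frequency detail is merged by a fusion rule. The sketch below illustrates that general scheme only: it uses a one-level Haar transform and a choose-max rule for detail bands, and takes the weight map as a given input (in the paper it is produced by a Siamese CNN, which is not reproduced here). All function names and the specific fusion rules are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def haar_decompose(img):
    # One-level 2D Haar decomposition: approximation (LL) + three detail bands.
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    LL = (a + b + c + d) / 4.0
    LH = (a + b - c - d) / 4.0
    HL = (a - b + c - d) / 4.0
    HH = (a - b - c + d) / 4.0
    return LL, LH, HL, HH

def haar_reconstruct(LL, LH, HL, HH):
    # Exact inverse of haar_decompose.
    h, w = LL.shape
    out = np.empty((2 * h, 2 * w))
    out[0::2, 0::2] = LL + LH + HL + HH
    out[0::2, 1::2] = LL + LH - HL - HH
    out[1::2, 0::2] = LL - LH + HL - HH
    out[1::2, 1::2] = LL - LH - HL + HH
    return out

def fuse(ir, vis, weight):
    # `weight` is a per-pixel saliency map in [0, 1] favoring the IR image
    # (a stand-in for the CNN-generated weight map in the paper).
    ir_bands = haar_decompose(ir)
    vis_bands = haar_decompose(vis)
    w = haar_decompose(weight)[0]  # 2x2-averaged weight for the coarse band
    # Weighted average for the low-frequency (approximation) band.
    fused = [w * ir_bands[0] + (1.0 - w) * vis_bands[0]]
    # Choose-max (by absolute value) for each high-frequency detail band.
    for di, dv in zip(ir_bands[1:], vis_bands[1:]):
        fused.append(np.where(np.abs(di) >= np.abs(dv), di, dv))
    return haar_reconstruct(*fused)
```

Because the Haar transform here is exactly invertible, fusing an image with itself returns the image unchanged, which is a convenient sanity check for any fusion rule of this form.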