Robust Deep Multi-modal Learning Based on Gated Information Fusion Network
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Kim, J. | - |
dc.contributor.author | Koh, J. | - |
dc.contributor.author | Kim, Y. | - |
dc.contributor.author | Choi, J. | - |
dc.contributor.author | Hwang, Y. | - |
dc.contributor.author | Choi, J.W. | - |
dc.date.accessioned | 2021-08-09T06:45:22Z | - |
dc.date.available | 2021-08-09T06:45:22Z | - |
dc.date.created | 2021-08-09 | - |
dc.date.issued | 2018-12-02 | - |
dc.identifier.issn | 0302-9743 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/hanyang/handle/2021.sw.hanyang/84793 | - |
dc.description.abstract | The goal of multi-modal learning is to use complementary information on the relevant task provided by the multiple modalities to achieve reliable and robust performance. Recently, deep learning has led to significant improvements in multi-modal learning by allowing for the fusion of high-level features obtained at intermediate layers of the deep neural network. This paper addresses the problem of designing a robust deep multi-modal learning architecture in the presence of modalities degraded in quality. We introduce a deep fusion architecture for object detection which processes each modality using a separate convolutional neural network (CNN) and constructs the joint feature maps by combining the intermediate features obtained by the CNNs. To improve robustness to degraded modalities, we employ the gated information fusion (GIF) network, which weights the contribution from each modality according to the input feature maps to be fused. The combining weights are determined by applying convolutional layers followed by the sigmoid function to the concatenated intermediate feature maps. The whole network, including the CNN backbone and GIF, is trained in an end-to-end fashion. Our experiments show that the proposed GIF network offers additional architectural flexibility to achieve robust performance when handling degraded modalities. © 2019, Springer Nature Switzerland AG. | - |
dc.language | English | - |
dc.language.iso | en | - |
dc.publisher | Springer Verlag | - |
dc.title | Robust Deep Multi-modal Learning Based on Gated Information Fusion Network | - |
dc.type | Conference | - |
dc.contributor.affiliatedAuthor | Choi, J.W. | - |
dc.identifier.scopusid | 2-s2.0-85066846557 | - |
dc.identifier.bibliographicCitation | 14th Asian Conference on Computer Vision, ACCV 2018, pp.90 - 106 | - |
dc.relation.isPartOf | 14th Asian Conference on Computer Vision, ACCV 2018 | - |
dc.relation.isPartOf | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) | - |
dc.citation.title | 14th Asian Conference on Computer Vision, ACCV 2018 | - |
dc.citation.startPage | 90 | - |
dc.citation.endPage | 106 | - |
dc.citation.conferencePlace | AU | - |
dc.citation.conferencePlace | Perth | - |
dc.citation.conferenceDate | 2018-12-02 | - |
dc.type.rims | CONF | - |
dc.description.journalClass | 1 | - |
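The gating mechanism described in the abstract (modality-wise fusion weights produced by convolutional layers and a sigmoid over the concatenated feature maps) can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the 1×1 convolution is modeled as a per-pixel matrix multiply, and all shapes, weights, and the two-modality setup are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(feat_a, feat_b, w_a, w_b):
    """Gated information fusion (sketch): each modality's feature map is
    scaled by a gate in [0, 1] computed from the concatenated features."""
    # Concatenate the two intermediate feature maps along the channel axis.
    concat = np.concatenate([feat_a, feat_b], axis=-1)  # (H, W, 2C)
    # A 1x1 convolution acts per pixel, i.e. a matmul over channels;
    # the sigmoid squashes the result into [0, 1] gating weights.
    gate_a = sigmoid(concat @ w_a)                      # (H, W, C)
    gate_b = sigmoid(concat @ w_b)                      # (H, W, C)
    # Weight each modality's contribution before combining.
    return gate_a * feat_a + gate_b * feat_b

# Hypothetical shapes: 4x4 spatial feature maps with 8 channels per modality.
rng = np.random.default_rng(0)
H, W, C = 4, 4, 8
feat_a = rng.standard_normal((H, W, C))
feat_b = rng.standard_normal((H, W, C))
w_a = rng.standard_normal((2 * C, C)) * 0.1
w_b = rng.standard_normal((2 * C, C)) * 0.1
fused = gated_fusion(feat_a, feat_b, w_a, w_b)
print(fused.shape)  # (4, 4, 8)
```

Because each gate lies in [0, 1], a degraded modality can be down-weighted toward zero while the other modality still passes through, which is the robustness property the abstract attributes to the GIF network.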