Detailed Information

Cited 0 times in Web of Science · Cited 7 times in Scopus

Robust Deep Multi-modal Learning Based on Gated Information Fusion Network

Full metadata record
dc.contributor.author: Kim, J.
dc.contributor.author: Koh, J.
dc.contributor.author: Kim, Y.
dc.contributor.author: Choi, J.
dc.contributor.author: Hwang, Y.
dc.contributor.author: Choi, J.W.
dc.date.accessioned: 2021-08-09T06:45:22Z
dc.date.available: 2021-08-09T06:45:22Z
dc.date.created: 2021-08-09
dc.date.issued: 2018-12-02
dc.identifier.issn: 0302-9743
dc.identifier.uri: https://scholarworks.bwise.kr/hanyang/handle/2021.sw.hanyang/84793
dc.description.abstract: The goal of multi-modal learning is to use the complementary information on the relevant task provided by multiple modalities to achieve reliable and robust performance. Recently, deep learning has led to significant improvements in multi-modal learning by allowing high-level features obtained at intermediate layers of the deep neural network to be fused. This paper addresses the problem of designing a robust deep multi-modal learning architecture in the presence of modalities degraded in quality. We introduce a deep fusion architecture for object detection which processes each modality with a separate convolutional neural network (CNN) and constructs joint feature maps by combining the intermediate features obtained by the CNNs. To promote robustness to degraded modalities, we employ a gated information fusion (GIF) network which weights the contribution of each modality according to the input feature maps to be fused. The combining weights are determined by applying convolutional layers followed by the sigmoid function to the concatenated intermediate feature maps. The whole network, including the CNN backbones and GIF, is trained in an end-to-end fashion. Our experiments show that the proposed GIF network offers additional architectural flexibility to achieve robust performance in handling degraded modalities. © 2019, Springer Nature Switzerland AG.
dc.language: English
dc.language.iso: en
dc.publisher: Springer Verlag
dc.title: Robust Deep Multi-modal Learning Based on Gated Information Fusion Network
dc.type: Conference
dc.contributor.affiliatedAuthor: Choi, J.W.
dc.identifier.scopusid: 2-s2.0-85066846557
dc.identifier.bibliographicCitation: 14th Asian Conference on Computer Vision, ACCV 2018, pp. 90-106
dc.relation.isPartOf: 14th Asian Conference on Computer Vision, ACCV 2018
dc.relation.isPartOf: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
dc.citation.title: 14th Asian Conference on Computer Vision, ACCV 2018
dc.citation.startPage: 90
dc.citation.endPage: 106
dc.citation.conferencePlace: GE
dc.citation.conferencePlace: Perth
dc.citation.conferenceDate: 2018-12-02
dc.type.rims: CONF
dc.description.journalClass: 1
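
The abstract above describes the gated information fusion (GIF) mechanism: convolutional layers followed by a sigmoid are applied to the concatenated intermediate feature maps to produce per-modality combining weights. The following is a minimal sketch of such a block in PyTorch, assuming two modalities with same-shaped feature maps; the class name, layer sizes, and fusion-by-weighted-sum detail are illustrative assumptions, not the authors' released implementation.

```python
# Hedged sketch of a gated information fusion (GIF) block for two modalities.
# Assumes both CNN backbones emit intermediate feature maps of shape (N, C, H, W).
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Gating networks: convolutional layers over the concatenated
        # feature maps, followed by a sigmoid, as described in the abstract.
        self.gate_a = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )
        self.gate_b = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
        # Concatenate the two modalities' intermediate feature maps.
        joint = torch.cat([feat_a, feat_b], dim=1)
        # Weight each modality's contribution by its learned gate; a degraded
        # modality can thus be down-weighted based on the input features.
        w_a = self.gate_a(joint)
        w_b = self.gate_b(joint)
        # Joint feature map; trained end-to-end with the CNN backbones.
        return w_a * feat_a + w_b * feat_b

# Usage: fuse, e.g., RGB and auxiliary-sensor feature maps (sizes hypothetical).
fusion = GatedFusion(channels=256)
rgb_feat = torch.randn(2, 256, 38, 38)
aux_feat = torch.randn(2, 256, 38, 38)
fused = fusion(rgb_feat, aux_feat)
print(fused.shape)  # torch.Size([2, 256, 38, 38])
```

Because the gates are ordinary differentiable layers, the whole pipeline (backbones, gates, and detection head) can be trained end-to-end, which is what gives the architecture its robustness to degraded inputs.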
Files in This Item
There are no files associated with this item.
Appears in Collections:
Seoul College of Engineering > Seoul Major in Electrical Engineering > 2. Conference Papers

Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Choi, Jun Won
College of Engineering (Major in Electrical Engineering)