Robust Camera Lidar Sensor Fusion Via Deep Gated Information Fusion Network
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Kim, J. | - |
dc.contributor.author | Choi, J. | - |
dc.contributor.author | Kim, Y. | - |
dc.contributor.author | Koh, J. | - |
dc.contributor.author | Chung, C.C. | - |
dc.contributor.author | Choi, J.W. | - |
dc.date.accessioned | 2021-08-11T01:15:28Z | - |
dc.date.available | 2021-08-11T01:15:28Z | - |
dc.date.created | 2021-08-11 | - |
dc.date.issued | 2018-09-26 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/hanyang/handle/2021.sw.hanyang/92595 | - |
dc.description.abstract | In this paper, we introduce a new deep learning architecture for camera and Lidar sensor fusion. The proposed scheme performs 2D object detection using the RGB camera image together with the depth, height, and intensity images generated by projecting the 3D Lidar point cloud onto the camera image plane. The proposed object detector consists of two convolutional neural networks (CNNs) that process the RGB and Lidar images separately, as well as a fusion network that combines the feature maps produced at the intermediate layers of the CNNs. We aim to develop a robust object detector that maintains good detection accuracy even when the quality of the sensor signals is degraded. Towards this end, we devise the gated fusion unit (GFU), which adjusts the contribution of the feature maps generated by the two CNN structures via a gating mechanism. Using the GFU, the proposed object detector can fuse the high-level feature maps drawn from the two modalities with appropriate weights to achieve robust performance. Experiments conducted on the challenging KITTI benchmark show that the proposed camera and Lidar fusion network outperforms conventional sensor fusion methods even when either the camera or the Lidar sensor signal is corrupted by missing data, occlusion, noise, or illumination change. | - |
dc.language | English | - |
dc.language.iso | en | - |
dc.publisher | Institute of Electrical and Electronics Engineers Inc. | - |
dc.title | Robust Camera Lidar Sensor Fusion Via Deep Gated Information Fusion Network | - |
dc.type | Conference | - |
dc.contributor.affiliatedAuthor | Chung, C.C. | - |
dc.contributor.affiliatedAuthor | Choi, J.W. | - |
dc.identifier.scopusid | 2-s2.0-85056779809 | - |
dc.identifier.bibliographicCitation | 2018 IEEE Intelligent Vehicles Symposium, IV 2018, pp.1620 - 1625 | - |
dc.relation.isPartOf | 2018 IEEE Intelligent Vehicles Symposium, IV 2018 | - |
dc.relation.isPartOf | 2018 IEEE Intelligent Vehicles Symposium (IV) | - |
dc.citation.title | 2018 IEEE Intelligent Vehicles Symposium, IV 2018 | - |
dc.citation.startPage | 1620 | - |
dc.citation.endPage | 1625 | - |
dc.citation.conferencePlace | CC | - |
dc.citation.conferencePlace | Chinese flagship Intelligent Vehicle Proving Center (iVPC) | - |
dc.citation.conferenceDate | 2018-09-26 | - |
dc.type.rims | CONF | - |
dc.description.journalClass | 1 | - |
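The abstract describes a gated fusion unit (GFU) that weights the contribution of each modality's feature maps via a gating mechanism. The paper's exact architecture is not reproduced here; the following is a minimal NumPy sketch of the general idea, with hypothetical shapes and a 1x1-convolution-like gate (`w`, `b` are illustrative parameters, not the authors' design): a sigmoid gate computed from both feature maps blends the RGB and Lidar features element-wise.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(rgb_feat, lidar_feat, w, b):
    """Toy gated fusion (illustrative, not the paper's GFU).

    A sigmoid gate, computed from the concatenated feature maps,
    weights each modality's contribution per channel and pixel.
    Hypothetical shapes: feats are (C, H, W); w is (C, 2C); b is (C,).
    """
    stacked = np.concatenate([rgb_feat, lidar_feat], axis=0)  # (2C, H, W)
    # Linear map across channels at each pixel, like a 1x1 convolution.
    gate = sigmoid(np.einsum('ck,khw->chw', w, stacked) + b[:, None, None])
    # Convex combination: gate near 1 trusts RGB, near 0 trusts Lidar.
    return gate * rgb_feat + (1.0 - gate) * lidar_feat

rng = np.random.default_rng(0)
C, H, W = 4, 8, 8
rgb = rng.standard_normal((C, H, W))
lidar = rng.standard_normal((C, H, W))
w = rng.standard_normal((C, 2 * C)) * 0.1
b = np.zeros(C)
fused = gated_fusion(rgb, lidar, w, b)
```

Because the output is an element-wise convex combination, a degraded modality can be down-weighted without retraining the per-modality CNNs, which matches the robustness motivation in the abstract.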