PIFNet: 3D Object Detection Using Joint Image and Point Cloud Features for Autonomous Driving
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Zheng, Wenqi | - |
dc.contributor.author | Xie, Han | - |
dc.contributor.author | Chen, Yunfan | - |
dc.contributor.author | Roh, Jeongjin | - |
dc.contributor.author | Shin, Hyunchul | - |
dc.date.accessioned | 2022-07-18T01:17:11Z | - |
dc.date.available | 2022-07-18T01:17:11Z | - |
dc.date.issued | 2022-04 | - |
dc.identifier.issn | 2076-3417 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/erica/handle/2021.sw.erica/107904 | - |
dc.description.abstract | Owing to its wide range of applications, 3D object detection has attracted increasing attention in computer vision tasks. Most existing 3D object detection methods are based on Light Detection and Ranging (LiDAR) point cloud data. However, these methods suffer from limited localization consistency and classification confidence due to the irregularity and sparsity of LiDAR point clouds. Inspired by the complementary characteristics of LiDAR and camera sensors, we propose a new end-to-end learnable framework named Point-Image Fusion Network (PIFNet) to integrate the LiDAR point cloud and camera images. To resolve the inconsistency between localization and classification, we designed an Encoder-Decoder Fusion (EDF) module that extracts image features effectively while maintaining fine-grained object localization information. Furthermore, a new effective fusion module is proposed to integrate the color and texture features from images with the depth information from the point cloud. This module mitigates the irregularity and sparsity of the point cloud features by capitalizing on the fine-grained information from camera images. In PIFNet, each intermediate feature map is fed into the fusion module to be integrated with its corresponding point-wise features. Moreover, point-wise features are used instead of voxel-wise features to reduce information loss. Extensive experiments on the KITTI dataset demonstrate the superiority of PIFNet over other state-of-the-art methods: our approach outperforms them by 1.97% in mean Average Precision (mAP) and by 2.86% in Average Precision (AP) for the hard cases on the KITTI 3D object detection benchmark. | - |
dc.format.extent | 11 | - |
dc.language | English | - |
dc.language.iso | ENG | - |
dc.publisher | MDPI | - |
dc.title | PIFNet: 3D Object Detection Using Joint Image and Point Cloud Features for Autonomous Driving | - |
dc.type | Article | - |
dc.publisher.location | Switzerland | - |
dc.identifier.doi | 10.3390/app12073686 | - |
dc.identifier.scopusid | 2-s2.0-85128591677 | - |
dc.identifier.wosid | 000781183700001 | - |
dc.identifier.bibliographicCitation | Applied Sciences-basel, v.12, no.7, pp 1 - 11 | - |
dc.citation.title | Applied Sciences-basel | - |
dc.citation.volume | 12 | - |
dc.citation.number | 7 | - |
dc.citation.startPage | 1 | - |
dc.citation.endPage | 11 | - |
dc.type.docType | Article | - |
dc.description.isOpenAccess | Y | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Chemistry | - |
dc.relation.journalResearchArea | Engineering | - |
dc.relation.journalResearchArea | Materials Science | - |
dc.relation.journalResearchArea | Physics | - |
dc.relation.journalWebOfScienceCategory | Chemistry, Multidisciplinary | - |
dc.relation.journalWebOfScienceCategory | Engineering, Multidisciplinary | - |
dc.relation.journalWebOfScienceCategory | Materials Science, Multidisciplinary | - |
dc.relation.journalWebOfScienceCategory | Physics, Applied | - |
dc.subject.keywordAuthor | 3D object detection | - |
dc.subject.keywordAuthor | lidar point cloud | - |
dc.subject.keywordAuthor | camera images | - |
dc.subject.keywordAuthor | object detection | - |
dc.identifier.url | https://www.mdpi.com/2076-3417/12/7/3686 | - |
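The abstract describes fusing point-wise LiDAR features with image features by pairing each 3D point with the image feature at its camera projection. The following is a minimal, hypothetical sketch of that general idea only; the function names, shapes, and nearest-pixel sampling are illustrative assumptions, not the authors' PIFNet implementation.

```python
import numpy as np

def project_points(points_xyz, K):
    """Project 3D points (N, 3) into pixel coordinates using camera intrinsics K (3, 3)."""
    uvw = points_xyz @ K.T                 # homogeneous pixel coordinates (N, 3)
    return uvw[:, :2] / uvw[:, 2:3]        # perspective divide -> (u, v) per point

def fuse_point_image_features(point_feats, image_feats, points_xyz, K):
    """Concatenate each point-wise feature with the image feature at its projection.

    point_feats: (N, Cp) per-point features
    image_feats: (H, W, Ci) dense image feature map
    Returns: (N, Cp + Ci) fused per-point features.
    """
    H, W, _ = image_feats.shape
    uv = project_points(points_xyz, K)
    # Nearest-pixel sampling (an assumption; bilinear interpolation is also common)
    u = np.clip(np.round(uv[:, 0]).astype(int), 0, W - 1)
    v = np.clip(np.round(uv[:, 1]).astype(int), 0, H - 1)
    sampled = image_feats[v, u]            # (N, Ci) image feature per point
    return np.concatenate([point_feats, sampled], axis=1)

# Toy example: 4 points in front of a pinhole camera with a 64x64 feature map
K = np.array([[100., 0., 32.], [0., 100., 32.], [0., 0., 1.]])
pts = np.array([[0., 0., 5.], [1., 0., 5.], [0., 1., 5.], [1., 1., 5.]])
pf = np.random.rand(4, 16)                 # point-wise features, Cp = 16
imf = np.random.rand(64, 64, 8)            # image feature map, Ci = 8
fused = fuse_point_image_features(pf, imf, pts, K)
print(fused.shape)                         # (4, 24)
```

This mirrors the motivation in the abstract: the depth-bearing point features are enriched with dense color/texture context from the image, which is what helps with the sparsity of raw LiDAR returns.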