3D Object Detection Using Frustums and Attention Modules for Images and Point Clouds
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Li, Yiran | - |
dc.contributor.author | Xie, Han | - |
dc.contributor.author | Shin, Hyunchul | - |
dc.date.accessioned | 2023-07-05T05:44:08Z | - |
dc.date.available | 2023-07-05T05:44:08Z | - |
dc.date.issued | 2021-02 | - |
dc.identifier.issn | 1860-4862 | - |
dc.identifier.issn | 1860-4870 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/erica/handle/2021.sw.erica/113276 | - |
dc.description.abstract | Three-dimensional (3D) object detection is essential in autonomous driving. A 3D Lidar sensor can capture three-dimensional objects on the road, such as vehicles, cycles, and pedestrians. Although Lidar can generate point clouds in 3D space, it lacks the fine resolution of 2D image information. Therefore, fusing Lidar and camera data has gradually become a practical approach to 3D object detection. Previous strategies focused on extracting voxel points and fusing feature maps; however, the biggest challenge remains extracting enough edge information to detect small objects. To address this problem, we found that attention modules are beneficial for detecting small objects. In this work, we developed Frustum ConvNet with attention modules to fuse images from a camera and point clouds from a Lidar. Multilayer Perceptrons (MLPs) and tanh activation functions were used in the attention modules. Furthermore, the attention modules were built on PointNet to perform multilayer edge detection for 3D object detection. Compared with the well-known Frustum ConvNet baseline, our method achieved competitive results, improving Average Precision (AP) for 3D object detection by 0.27%, 0.43%, and 0.36% in the easy, moderate, and hard cases, respectively, and AP for Bird's Eye View (BEV) object detection by 0.21%, 0.27%, and 0.01% in the easy, moderate, and hard cases, respectively, on the KITTI detection benchmarks. Our method also obtained the best AP in four cases on the indoor SUN-RGBD dataset for 3D object detection. Keywords: 3D vision; attention module; fusion; point cloud; vehicle detection | - |
dc.format.extent | 10 | - |
dc.language | English | - |
dc.language.iso | ENG | - |
dc.publisher | MDPI AG | - |
dc.title | 3D Object Detection Using Frustums and Attention Modules for Images and Point Clouds | - |
dc.type | Article | - |
dc.publisher.location | Switzerland | - |
dc.identifier.doi | 10.3390/signals2010009 | - |
dc.identifier.scopusid | 2-s2.0-85117885871 | - |
dc.identifier.wosid | 001177665800001 | - |
dc.identifier.bibliographicCitation | Signals, v.2, no.1, pp. 98-107 | - |
dc.citation.title | Signals | - |
dc.citation.volume | 2 | - |
dc.citation.number | 1 | - |
dc.citation.startPage | 98 | - |
dc.citation.endPage | 107 | - |
dc.type.docType | Journal article (Article, including Perspective Article) | - |
dc.description.isOpenAccess | Y | - |
dc.description.journalRegisteredClass | scopus | - |
dc.description.journalRegisteredClass | esci | - |
dc.relation.journalResearchArea | Engineering | - |
dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
dc.subject.keywordAuthor | 3D vision | - |
dc.subject.keywordAuthor | attention module | - |
dc.subject.keywordAuthor | fusion | - |
dc.subject.keywordAuthor | point cloud | - |
dc.subject.keywordAuthor | vehicle detection | - |
dc.identifier.url | https://www.mdpi.com/2624-6120/2/1/9 | - |
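The abstract describes attention modules built from an MLP with a tanh activation, applied to PointNet point features. A minimal NumPy sketch of such a per-channel attention gate is shown below; the shapes, the ReLU hidden layer, and all names are illustrative assumptions, not the authors' actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_tanh_attention(features, w1, b1, w2, b2):
    """Gate (N, C) point features with an MLP + tanh attention weight.

    The MLP maps C -> H -> C; tanh bounds the weights to (-1, 1),
    so each channel of each point is re-scaled by its learned weight.
    """
    hidden = np.maximum(0.0, features @ w1 + b1)   # ReLU hidden layer (assumed)
    weights = np.tanh(hidden @ w2 + b2)            # tanh attention weights
    return features * weights                      # element-wise re-weighting

# Illustrative sizes: 1024 points, 64 feature channels, hidden width 16
n, c, h = 1024, 64, 16
feats = rng.standard_normal((n, c))
out = mlp_tanh_attention(
    feats,
    rng.standard_normal((c, h)) * 0.1, np.zeros(h),
    rng.standard_normal((h, c)) * 0.1, np.zeros(c),
)
```

Because |tanh| < 1, the gate can only attenuate (and sign-flip) feature responses, which is one plausible way such a module emphasizes edge channels relative to the rest.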