Enhanced object detection in bird's eye view using 3D global context inferred from lidar point data
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Kim, Y. | - |
dc.contributor.author | Kim, J. | - |
dc.contributor.author | Koh, J. | - |
dc.contributor.author | Choi, J.W. | - |
dc.date.accessioned | 2021-08-09T06:15:16Z | - |
dc.date.available | 2021-08-09T06:15:16Z | - |
dc.date.created | 2021-08-09 | - |
dc.date.issued | 2019-06-09 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/hanyang/handle/2021.sw.hanyang/84777 | - |
dc.description.abstract | In this paper, we present a new deep neural network architecture that detects objects in bird's eye view (BEV) from Lidar sensor data in autonomous driving scenarios. The key idea of the proposed method is to improve detection accuracy by exploiting the 3D global context provided by the whole set of Lidar points. The proposed method consists of two parts: 1) the detection core network (DetNet) and 2) the context extraction network (ConNet). First, the DetNet generates the BEV representation by projecting the Lidar points onto the BEV plane and applies a CNN to extract feature maps locally activated on the objects. The ConNet directly processes the whole set of Lidar points to produce a 1 × 1 × k feature vector capturing the 3D geometrical structure of the surroundings at the global scale. The context vector produced by the ConNet is concatenated to each pixel of the feature maps obtained by the DetNet. The combined feature maps are used to regress the oriented bounding box and identify the category of each object. Experiments on the public KITTI dataset show that the context feature offers a significant performance gain over the baseline, and that the proposed object detector achieves competitive performance compared to state-of-the-art 3D object detectors. © 2019 IEEE. | - |
dc.language | English | - |
dc.language.iso | en | - |
dc.publisher | Institute of Electrical and Electronics Engineers Inc. | - |
dc.title | Enhanced object detection in bird's eye view using 3D global context inferred from lidar point data | - |
dc.type | Conference | - |
dc.contributor.affiliatedAuthor | Choi, J.W. | - |
dc.identifier.scopusid | 2-s2.0-85072274504 | - |
dc.identifier.bibliographicCitation | 2019 IEEE Intelligent Vehicles Symposium (IV), pp.2516 - 2521 | - |
dc.relation.isPartOf | 2019 IEEE Intelligent Vehicles Symposium (IV) | - |
dc.citation.title | 2019 IEEE Intelligent Vehicles Symposium (IV) | - |
dc.citation.startPage | 2516 | - |
dc.citation.endPage | 2521 | - |
dc.citation.conferencePlace | Paris, France | - |
dc.citation.conferenceDate | 2019-06-09 | - |
dc.type.rims | CONF | - |
dc.description.journalClass | 1 | - |
dc.identifier.url | https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8814276 | - |
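The abstract describes fusing ConNet's global 1 × 1 × k context vector with DetNet's BEV feature maps by concatenating the vector to every pixel. A minimal numpy sketch of that fusion step is below; the function name `fuse_global_context` and the tensor shapes are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def fuse_global_context(bev_features, context_vector):
    """Concatenate a global context vector to every pixel of a BEV feature map.

    bev_features   : (C, H, W) feature map, as from the detection core network.
    context_vector : (K,) global 3D context vector, as from the context
                     extraction network (the 1 x 1 x k feature in the abstract).
    Returns a (C + K, H, W) combined map, which the paper feeds to the
    oriented-box regression and classification heads.
    """
    c, h, w = bev_features.shape
    k = context_vector.shape[0]
    # Broadcast the 1 x 1 x k vector over the full H x W grid, then
    # stack it onto the local features along the channel axis.
    tiled = np.broadcast_to(context_vector[:, None, None], (k, h, w))
    return np.concatenate([bev_features, tiled], axis=0)

# Toy example: a 64-channel BEV map on a 4 x 4 grid with a 16-dim context vector.
fused = fuse_global_context(np.zeros((64, 4, 4)), np.ones(16))
print(fused.shape)  # (80, 4, 4)
```

Because the same context vector is tiled to every spatial location, each pixel's classifier sees both its local activation and the scene-wide geometry in a single channel stack.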