MGTANet: Encoding Sequential LiDAR Points Using Long Short-Term Motion-Guided Temporal Attention for 3D Object Detection
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Koh, Junho | - |
dc.contributor.author | Lee, Junhyung | - |
dc.contributor.author | Lee, Youngwoo | - |
dc.contributor.author | Kim, Jaekyum | - |
dc.contributor.author | Choi, Jun Won | - |
dc.date.accessioned | 2023-09-11T01:54:41Z | - |
dc.date.available | 2023-09-11T01:54:41Z | - |
dc.date.created | 2023-07-20 | - |
dc.date.issued | 2023-02 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/hanyang/handle/2021.sw.hanyang/190401 | - |
dc.description.abstract | Most scanning LiDAR sensors generate a sequence of point clouds in real-time. While conventional 3D object detectors use a set of unordered LiDAR points acquired over a fixed time interval, recent studies have revealed that substantial performance improvement can be achieved by exploiting the spatio-temporal context present in a sequence of LiDAR point sets. In this paper, we propose a novel 3D object detection architecture, which can encode LiDAR point cloud sequences acquired by multiple successive scans. The encoding process of the point cloud sequence is performed on two different time scales. We first design a short-term motion-aware voxel encoding that captures the short-term temporal changes of point clouds driven by the motion of objects in each voxel. We also propose long-term motion-guided bird’s eye view (BEV) feature enhancement that adaptively aligns and aggregates the BEV feature maps obtained by the short-term voxel encoding by utilizing the dynamic motion context inferred from the sequence of the feature maps. The experiments conducted on the public nuScenes benchmark demonstrate that the proposed 3D object detector offers significant improvements in performance compared to the baseline methods and that it sets a state-of-the-art performance for certain 3D object detection categories. Code is available at https://github.com/HYjhkoh/MGTANet.git. | - |
dc.language | English | - |
dc.language.iso | en | - |
dc.publisher | AAAI | - |
dc.title | MGTANet: Encoding Sequential LiDAR Points Using Long Short-Term Motion-Guided Temporal Attention for 3D Object Detection | - |
dc.type | Article | - |
dc.contributor.affiliatedAuthor | Choi, Jun Won | - |
dc.identifier.doi | 10.1609/aaai.v37i1.25200 | - |
dc.identifier.bibliographicCitation | AAAI Conference on Artificial Intelligence, v.37, no.1, pp.1179 - 1187 | - |
dc.relation.isPartOf | AAAI Conference on Artificial Intelligence | - |
dc.citation.title | AAAI Conference on Artificial Intelligence | - |
dc.citation.volume | 37 | - |
dc.citation.number | 1 | - |
dc.citation.startPage | 1179 | - |
dc.citation.endPage | 1187 | - |
dc.type.rims | ART | - |
dc.type.docType | Proceeding | - |
dc.description.journalClass | 3 | - |
dc.description.isOpenAccess | N | - |
dc.description.journalRegisteredClass | other | - |
dc.subject.keywordAuthor | CV: Vision for Robotics & Autonomous Driving | - |
dc.subject.keywordAuthor | CV: Object Detection & Categorization | - |
dc.identifier.url | https://ojs.aaai.org/index.php/AAAI/article/view/25200 | - |
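The abstract describes long-term motion-guided enhancement as adaptively aligning and aggregating a sequence of BEV feature maps via temporal attention. The sketch below illustrates the core aggregation idea only: a per-cell softmax attention over the temporal axis, weighting past BEV frames by their similarity to the current frame. It is a minimal illustration, not MGTANet's actual module (which also uses motion context for alignment); the function name, shapes, and scaled dot-product scoring are all assumptions.

```python
import numpy as np

def temporal_attention_aggregate(bev_seq):
    """Aggregate a sequence of BEV feature maps with per-cell temporal attention.

    bev_seq: array of shape (T, C, H, W); the last frame is the current scan.
    Returns an enhanced BEV map of shape (C, H, W).
    """
    T, C, H, W = bev_seq.shape
    current = bev_seq[-1]                          # (C, H, W) reference frame
    # Scaled dot-product similarity of each frame to the current frame,
    # computed independently at every BEV cell (h, w)
    scores = np.einsum('tchw,chw->thw', bev_seq, current) / np.sqrt(C)
    scores -= scores.max(axis=0, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=0, keepdims=True)  # softmax over the time axis
    # Attention-weighted sum of the T feature maps
    return np.einsum('thw,tchw->chw', weights, bev_seq)

rng = np.random.default_rng(0)
seq = rng.standard_normal((4, 16, 8, 8)).astype(np.float32)
out = temporal_attention_aggregate(seq)
print(out.shape)  # (16, 8, 8)
```

In the paper's setting the attention weights would additionally be guided by dynamic motion context inferred from the feature-map sequence, so that moving objects are aligned across scans before aggregation.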