Detailed Information

Cited 0 times in Web of Science · Cited 0 times in Scopus

MGTANet: Encoding Sequential LiDAR Points Using Long Short-Term Motion-Guided Temporal Attention for 3D Object Detection

Full metadata record
DC Field: Value
dc.contributor.author: Koh, Junho
dc.contributor.author: Lee, Junhyung
dc.contributor.author: Lee, Youngwoo
dc.contributor.author: Kim, Jaekyum
dc.contributor.author: Choi, Jun Won
dc.date.accessioned: 2023-09-11T01:54:41Z
dc.date.available: 2023-09-11T01:54:41Z
dc.date.created: 2023-07-20
dc.date.issued: 2023-02
dc.identifier.uri: https://scholarworks.bwise.kr/hanyang/handle/2021.sw.hanyang/190401
dc.description.abstract: Most scanning LiDAR sensors generate a sequence of point clouds in real time. While conventional 3D object detectors use a set of unordered LiDAR points acquired over a fixed time interval, recent studies have revealed that substantial performance improvement can be achieved by exploiting the spatio-temporal context present in a sequence of LiDAR point sets. In this paper, we propose a novel 3D object detection architecture, which can encode LiDAR point cloud sequences acquired by multiple successive scans. The encoding process of the point cloud sequence is performed on two different time scales. We first design a short-term motion-aware voxel encoding that captures the short-term temporal changes of point clouds driven by the motion of objects in each voxel. We also propose long-term motion-guided bird’s eye view (BEV) feature enhancement that adaptively aligns and aggregates the BEV feature maps obtained by the short-term voxel encoding by utilizing the dynamic motion context inferred from the sequence of the feature maps. The experiments conducted on the public nuScenes benchmark demonstrate that the proposed 3D object detector offers significant improvements in performance compared to the baseline methods and that it sets a state-of-the-art performance for certain 3D object detection categories. Code is available at https://github.com/HYjhkoh/MGTANet.git.
dc.language: English
dc.language.iso: en
dc.publisher: AAAI
dc.title: MGTANet: Encoding Sequential LiDAR Points Using Long Short-Term Motion-Guided Temporal Attention for 3D Object Detection
dc.type: Article
dc.contributor.affiliatedAuthor: Choi, Jun Won
dc.identifier.doi: 10.1609/aaai.v37i1.25200
dc.identifier.bibliographicCitation: AAAI Conference on Artificial Intelligence, v.37, no.1, pp. 1179-1187
dc.relation.isPartOf: AAAI Conference on Artificial Intelligence
dc.citation.title: AAAI Conference on Artificial Intelligence
dc.citation.volume: 37
dc.citation.number: 1
dc.citation.startPage: 1179
dc.citation.endPage: 1187
dc.type.rims: ART
dc.type.docType: Proceeding
dc.description.journalClass: 3
dc.description.isOpenAccess: N
dc.description.journalRegisteredClass: other
dc.subject.keywordAuthor: CV: Vision for Robotics & Autonomous Driving
dc.subject.keywordAuthor: CV: Object Detection & Categorization
dc.identifier.url: https://ojs.aaai.org/index.php/AAAI/article/view/25200
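The abstract describes aggregating a sequence of BEV feature maps with temporal attention guided by motion context. The following is a minimal illustrative sketch, not the authors' implementation: the array shapes, the function name, and the use of plain dot-product similarity against the latest frame (in place of the paper's learned motion-guided attention) are all assumptions for illustration.

```python
import numpy as np

def temporal_attention_aggregate(bev_seq):
    """Fuse a sequence of BEV feature maps (T, C, H, W) into one map (C, H, W).

    Per-cell attention weights over the time axis are derived from the
    dot-product similarity between each frame and the most recent frame,
    standing in for the paper's motion-guided attention.
    """
    T, C, H, W = bev_seq.shape
    current = bev_seq[-1]  # latest scan, shape (C, H, W)
    # Similarity of every frame to the current frame at each BEV cell.
    scores = np.einsum('tchw,chw->thw', bev_seq, current) / np.sqrt(C)
    # Softmax over the time axis (numerically stabilized).
    scores -= scores.max(axis=0, keepdims=True)
    weights = np.exp(scores)
    weights /= weights.sum(axis=0, keepdims=True)
    # Attention-weighted sum over time yields the enhanced BEV map.
    return np.einsum('thw,tchw->chw', weights, bev_seq)

seq = np.random.rand(4, 8, 16, 16).astype(np.float32)  # 4 scans, 8 channels
fused = temporal_attention_aggregate(seq)
print(fused.shape)
```

Because the weights form a convex combination over time at each cell, the fused map stays within the value range of the input sequence; the real model additionally aligns the maps spatially before aggregation.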
Appears in Collections
Seoul College of Engineering > Seoul Major in Electrical Engineering > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Choi, Jun Won
College of Engineering (Major in Electrical Engineering)
