Detailed Information

Cited 0 times in Web of Science · Cited 0 times in Scopus

Boosting Monocular 3D Object Detection With Object-Centric Auxiliary Depth Supervision

Authors
Kim, Youngseok; Kim, Sanmin; Sim, Sangmin; Choi, Jun Won; Kum, Dongsuk
Issue Date
Feb-2023
Publisher
IEEE - Institute of Electrical and Electronics Engineers Inc.
Keywords
3D object detection; monocular image; auxiliary supervision; autonomous driving; deep learning
Citation
IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, v.24, no.2, pp.1801 - 1813
Indexed
SCIE
SCOPUS
Journal Title
IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS
Volume
24
Number
2
Start Page
1801
End Page
1813
URI
https://scholarworks.bwise.kr/hanyang/handle/2021.sw.hanyang/185118
DOI
10.1109/TITS.2022.3224082
ISSN
1524-9050
Abstract
Recent advances in monocular 3D detection leverage a depth estimation network explicitly as an intermediate stage of the 3D detection pipeline. Such depth-map approaches yield more accurate depth to objects than other methods thanks to a depth estimation network trained on a large-scale dataset. However, they are limited by the accuracy of the depth map, and running two separate networks sequentially for depth estimation and 3D detection significantly increases computation cost and inference time. In this work, we propose a method to boost an RGB image-based 3D detector by jointly training the detection network with a depth prediction loss analogous to the depth estimation task. In this way, the 3D detection network can be supervised with dense depth supervision from raw LiDAR points, which requires no human annotation, to estimate accurate depth without explicitly predicting a depth map. Our novel object-centric depth prediction loss focuses on depth around foreground objects, which is important for 3D object detection, leveraging pixel-wise depth supervision in an object-centric manner. The depth regression model is further trained to predict the uncertainty of depth, which serves as the 3D confidence of objects. To train the 3D detector effectively with raw LiDAR points and to enable end-to-end training, we revisit the regression targets of 3D objects and design the network architecture accordingly. Extensive experiments on the KITTI and nuScenes benchmarks show that our method significantly boosts the monocular image-based 3D detector, outperforming depth-map approaches while maintaining real-time inference speed.
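The object-centric, uncertainty-aware depth supervision described in the abstract can be illustrated with a minimal sketch: a pixel-wise depth error masked to foreground (object) pixels that have valid LiDAR returns, combined with a predicted log-scale term in a Laplace negative-log-likelihood form. The function and variable names, and the exact loss form, are illustrative assumptions for this sketch, not taken from the paper.

```python
import numpy as np

def object_centric_depth_loss(pred_depth, pred_log_b, lidar_depth, fg_mask):
    """Sketch of an object-centric, uncertainty-aware depth loss.

    pred_depth  : (H, W) predicted per-pixel depth
    pred_log_b  : (H, W) predicted log-scale (uncertainty) per pixel
    lidar_depth : (H, W) sparse depth from projected raw LiDAR points
                  (0 where no LiDAR return exists)
    fg_mask     : (H, W) boolean mask of foreground-object pixels
    """
    # Supervise only foreground pixels that actually have a LiDAR hit.
    valid = (lidar_depth > 0) & fg_mask
    if not valid.any():
        return 0.0
    err = np.abs(pred_depth[valid] - lidar_depth[valid])
    # Laplace NLL: |d - d_hat| * exp(-s) + s, with s = log(scale).
    # Large predicted s down-weights the error but is itself penalized,
    # so s acts as a learned depth (and hence 3D) confidence signal.
    nll = err * np.exp(-pred_log_b[valid]) + pred_log_b[valid]
    return float(nll.mean())

# Toy usage: 2x2 image, one pixel without a LiDAR return.
pred = np.array([[10.0, 20.0], [30.0, 40.0]])
log_b = np.zeros((2, 2))                      # unit scale everywhere
lidar = np.array([[11.0, 0.0], [29.0, 40.0]])  # 0 = no return
fg = np.ones((2, 2), dtype=bool)
loss = object_centric_depth_loss(pred, log_b, lidar, fg)  # mean of |1|,|1|,|0|
```

Masking the loss to foreground pixels is what makes the supervision "object-centric": background depth, which dominates a full depth map, contributes nothing to the gradient.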
Appears in
Collections
Seoul College of Engineering > Seoul Major in Electrical Engineering > 1. Journal Articles
Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Choi, Jun Won
College of Engineering (Major in Electrical Engineering)