Detailed Information

Cited 0 times in Web of Science; cited 0 times in Scopus

Monitored Distillation for Positive Congruent Depth Completion

Full metadata record
dc.contributor.author: Liu, T.Y.
dc.contributor.author: Agrawal, P.
dc.contributor.author: Chen, A.
dc.contributor.author: Hong, Byung-Woo
dc.contributor.author: Wong, A.
dc.date.accessioned: 2022-12-16T06:41:54Z
dc.date.available: 2022-12-16T06:41:54Z
dc.date.issued: 2022-10
dc.identifier.issn: 0302-9743
dc.identifier.issn: 1611-3349
dc.identifier.uri: https://scholarworks.bwise.kr/cau/handle/2019.sw.cau/59679
dc.description.abstract: We propose a method to infer a dense depth map from a single image, its calibration, and the associated sparse point cloud. In order to leverage existing models (teachers) that produce putative depth maps, we propose an adaptive knowledge distillation approach that yields a positive congruent training process, wherein a student model avoids learning the error modes of the teachers. In the absence of ground truth for model selection and training, our method, termed Monitored Distillation, allows a student to exploit a blind ensemble of teachers by selectively learning from predictions that best minimize the reconstruction error for a given image. Monitored Distillation yields a distilled depth map and a confidence map, or "monitor", for how well a prediction from a particular teacher fits the observed image. The monitor adaptively weights the distilled depth; if all of the teachers exhibit high residuals, the standard unsupervised image reconstruction loss takes over as the supervisory signal. On indoor scenes (VOID), we outperform blind ensembling baselines by 17.53% and unsupervised methods by 24.25%; we boast a 79% model size reduction while maintaining comparable performance to the best supervised method. For outdoor scenes (KITTI), we tie for 5th overall on the benchmark despite not using ground truth. Code available at: https://github.com/alexklwong/mondi-python. © 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.
dc.format.extent: 19
dc.language: English
dc.language.iso: ENG
dc.publisher: Springer Science and Business Media Deutschland GmbH
dc.title: Monitored Distillation for Positive Congruent Depth Completion
dc.type: Article
dc.identifier.doi: 10.1007/978-3-031-20086-1_3
dc.identifier.bibliographicCitation: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), v.13662 LNCS, pp. 35-53
dc.description.isOpenAccess: N
dc.identifier.wosid: 000899248700003
dc.identifier.scopusid: 2-s2.0-85142749457
dc.citation.endPage: 53
dc.citation.startPage: 35
dc.citation.title: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
dc.citation.volume: 13662 LNCS
dc.type.docType: Proceedings Paper
dc.publisher.location: United States
dc.subject.keywordAuthor: Blind ensemble
dc.subject.keywordAuthor: Depth completion
dc.subject.keywordAuthor: Knowledge distillation
dc.relation.journalResearchArea: Computer Science
dc.relation.journalResearchArea: Imaging Science & Photographic Technology
dc.relation.journalWebOfScienceCategory: Computer Science, Artificial Intelligence
dc.relation.journalWebOfScienceCategory: Imaging Science & Photographic Technology
dc.description.journalRegisteredClass: scopus
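The abstract describes the core mechanism: per pixel, pick the teacher whose depth best reconstructs the image, and weight the resulting distillation loss by a "monitor" confidence that decays where even the best teacher fits poorly, letting the unsupervised reconstruction loss take over. The sketch below illustrates that weighting scheme in NumPy; the function name, the exponential form of the monitor, and the `temperature` parameter are illustrative assumptions, not the paper's exact formulation (see the linked repository for the authors' implementation).

```python
import numpy as np

def monitored_distillation_loss(student_depth, teacher_depths, teacher_residuals,
                                unsup_residual, temperature=0.1):
    """Illustrative sketch of monitored distillation (hypothetical formulation).

    student_depth:     (H, W) student depth prediction
    teacher_depths:    (K, H, W) depth maps from K blind-ensemble teachers
    teacher_residuals: (K, H, W) per-pixel image reconstruction error of each teacher
    unsup_residual:    (H, W) per-pixel unsupervised reconstruction error of the student
    """
    # Per pixel, select the teacher whose prediction best reconstructs the image.
    best = np.argmin(teacher_residuals, axis=0)            # (H, W) teacher indices
    rows, cols = np.indices(best.shape)
    distilled = teacher_depths[best, rows, cols]           # distilled depth map
    r_min = teacher_residuals[best, rows, cols]            # residual of chosen teacher

    # Monitor: confidence in (0, 1] that decays where even the best teacher fits poorly.
    monitor = np.exp(-r_min / temperature)

    # Blend: trust the distilled depth where the monitor is high; fall back to the
    # unsupervised reconstruction signal where all teachers exhibit high residuals.
    per_pixel = monitor * np.abs(student_depth - distilled) \
        + (1.0 - monitor) * unsup_residual
    return per_pixel.mean(), distilled, monitor
```

Because the monitor is computed from image reconstruction error alone, no ground-truth depth is needed to decide which teacher to trust, which is what allows training against a blind ensemble.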
Files in This Item
There are no files associated with this item.
Appears in Collections: College of Software > Department of Artificial Intelligence > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Hong, Byung-Woo
College of Software (Department of Artificial Intelligence)
