CycleGAN-Based Depth Completion for Autonomous Vehicles
DC Field | Value | Language |
---|---|---|
dc.contributor.author | 응 웬민찌 | - |
dc.contributor.author | 유명식 | - |
dc.date.accessioned | 2023-01-09T02:40:07Z | - |
dc.date.available | 2023-01-09T02:40:07Z | - |
dc.date.created | 2022-12-22 | - |
dc.date.issued | 2022-05 | - |
dc.identifier.issn | 1226-4717 | - |
dc.identifier.uri | http://scholarworks.bwise.kr/ssu/handle/2018.sw.ssu/43005 | - |
dc.description.abstract | Depth completion is a challenging task that supports scene understanding and environment perception in autonomous vehicles. Existing methods take multimodal inputs, such as RGB images and sparse LIDAR depth images, to exploit the complementary characteristics of the two sensors. However, traditional autoencoder approaches have shown limitations in representing the data in a low-dimensional space. Moreover, depth discontinuities arise when fusing the camera and LIDAR images, owing to the light sensitivity of the RGB image. In our study, we adapt CycleGAN, which learns the distribution of the data rather than pixel intensities, to reconstruct sparse depth into a dense depth map. We also use semantic segmentation as an additional input to mitigate the depth discontinuity problem. Our framework is trained and evaluated on the KITTI benchmark with synchronized data capturing various road scenery. The experimental results show that the proposed framework achieves competitive performance and efficiency in the depth completion task. | - |
dc.language | English | - |
dc.language.iso | en | - |
dc.publisher | 한국통신학회 | - |
dc.relation.isPartOf | 한국통신학회논문지 | - |
dc.title | 자율주행차량을 위한 CycleGAN 기반 Depth Completion 기법 | - |
dc.title.alternative | CycleGAN-Based Depth Completion for Autonomous Vehicles | - |
dc.type | Article | - |
dc.identifier.doi | 10.7840/kics.2022.47.5.781 | - |
dc.type.rims | ART | - |
dc.identifier.bibliographicCitation | 한국통신학회논문지, v.47, no.5, pp.781 - 788 | - |
dc.identifier.kciid | ART002840604 | - |
dc.description.journalClass | 2 | - |
dc.citation.endPage | 788 | - |
dc.citation.number | 5 | - |
dc.citation.startPage | 781 | - |
dc.citation.title | 한국통신학회논문지 | - |
dc.citation.volume | 47 | - |
dc.contributor.affiliatedAuthor | 유명식 | - |
dc.identifier.url | https://www.dbpia.co.kr/journal/articleDetail?nodeId=NODE11064381 | - |
dc.description.isOpenAccess | N | - |
dc.subject.keywordAuthor | depth completion | - |
dc.subject.keywordAuthor | cycleGAN | - |
dc.subject.keywordAuthor | semantic segmentation | - |
dc.subject.keywordAuthor | autonomous vehicle | - |
dc.subject.keywordAuthor | sensor fusion | - |
dc.description.journalRegisteredClass | kci | - |
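The abstract above describes adapting CycleGAN to translate sparse LIDAR depth into dense depth maps. As a minimal illustrative sketch (not the authors' implementation), the cycle-consistency objective that drives such unpaired translation can be written down with plain NumPy; here `G` maps sparse to dense depth and `F` maps dense back to sparse, both stand-ins for the trained generators:

```python
import numpy as np

def l1_loss(a, b):
    """Mean absolute error between two depth maps."""
    return np.abs(a - b).mean()

def cycle_consistency_loss(G, F, sparse_depth, dense_depth, lam=10.0):
    """CycleGAN cycle-consistency term, weighted by lambda.

    G: sparse -> dense generator (assumed callable on arrays)
    F: dense -> sparse generator (assumed callable on arrays)
    """
    # Forward cycle: sparse -> dense -> sparse should recover the input.
    forward = l1_loss(F(G(sparse_depth)), sparse_depth)
    # Backward cycle: dense -> sparse -> dense should recover the input.
    backward = l1_loss(G(F(dense_depth)), dense_depth)
    return lam * (forward + backward)

# Toy generators that invert each other give zero cycle loss.
G = lambda d: d + 1.0
F = lambda d: d - 1.0
sparse = np.array([[0.0, 2.0], [4.0, 0.0]])
dense = np.array([[1.0, 2.0], [3.0, 4.0]])
loss = cycle_consistency_loss(G, F, sparse, dense)
```

In the full CycleGAN objective this term is added to the adversarial losses of the two discriminators; the paper additionally conditions on semantic segmentation, which this sketch omits.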