Upright and stabilized omnidirectional depth estimation for wide-baseline multi-camera inertial systems
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Won, Changhee | - |
dc.contributor.author | Seok, Hochang | - |
dc.contributor.author | Lim, Jongwoo | - |
dc.date.accessioned | 2022-07-08T02:07:26Z | - |
dc.date.available | 2022-07-08T02:07:26Z | - |
dc.date.created | 2021-05-11 | - |
dc.date.issued | 2020-06 | - |
dc.identifier.issn | 2160-7508 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/hanyang/handle/2021.sw.hanyang/145642 | - |
dc.description.abstract | This paper presents an upright and stabilized omnidirectional depth estimation for an arbitrarily rotated wide-baseline multi-camera inertial system. By aligning the reference rig coordinate system with the gravity direction acquired from an inertial measurement unit, we sample depth hypotheses for omnidirectional stereo matching by sweeping global spheres whose equators are parallel to the ground plane. Then, unary features extracted from each input image by 2D convolutional neural networks (CNNs) are warped onto the swept spheres, and the final omnidirectional depth map is output through cost computation by a 3D CNN-based hourglass module and a softargmax operation. This eliminates wavy or unrecognizable visual artifacts in equirectangular depth maps which can cause failures in scene understanding. We show the capability of our upright and stabilized omnidirectional depth estimation through experiments on real data. | - |
dc.language | English | - |
dc.language.iso | en | - |
dc.publisher | IEEE Computer Society | - |
dc.title | Upright and stabilized omnidirectional depth estimation for wide-baseline multi-camera inertial systems | - |
dc.type | Article | - |
dc.contributor.affiliatedAuthor | Lim, Jongwoo | - |
dc.identifier.doi | 10.1109/CVPRW50498.2020.00324 | - |
dc.identifier.scopusid | 2-s2.0-85090149653 | - |
dc.identifier.bibliographicCitation | IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, v.2020-June, pp.2689 - 2692 | - |
dc.relation.isPartOf | IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops | - |
dc.citation.title | IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops | - |
dc.citation.volume | 2020-June | - |
dc.citation.startPage | 2689 | - |
dc.citation.endPage | 2692 | - |
dc.type.rims | ART | - |
dc.type.docType | Conference Paper | - |
dc.description.journalClass | 1 | - |
dc.description.isOpenAccess | N | - |
dc.description.journalRegisteredClass | scopus | - |
dc.subject.keywordPlus | Cameras | - |
dc.subject.keywordPlus | Computer vision | - |
dc.subject.keywordPlus | Convolutional neural networks | - |
dc.subject.keywordPlus | Co-ordinate system | - |
dc.subject.keywordPlus | Depth Estimation | - |
dc.subject.keywordPlus | Inertial measurement unit | - |
dc.subject.keywordPlus | Inertial systems | - |
dc.subject.keywordPlus | Scene understanding | - |
dc.subject.keywordPlus | Stereo matching | - |
dc.subject.keywordPlus | Unary features | - |
dc.subject.keywordPlus | Visual artifacts | - |
dc.subject.keywordPlus | Stereo image processing | - |
dc.identifier.url | https://ieeexplore.ieee.org/document/9150951 | - |
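The abstract describes regressing the final omnidirectional depth map from a matching-cost volume over sphere-sweep hypotheses with a softargmax. A minimal NumPy sketch of that last step is below; the function name, volume shape `(D, H, W)`, and hypothesis spacing are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def softargmax_depth(cost_volume, depth_hypotheses):
    """Regress a per-pixel depth map from a matching-cost volume (sketch).

    cost_volume: (D, H, W) array of matching costs; lower cost = better match.
    depth_hypotheses: (D,) depth values sampled by the sphere sweep.
    Returns an (H, W) depth map: the softmax-weighted mean of the hypotheses.
    """
    # Convert costs to matching probabilities via a softmax over the
    # hypothesis axis (negate so that low cost maps to high probability).
    logits = -cost_volume
    logits -= logits.max(axis=0, keepdims=True)  # numerical stability
    prob = np.exp(logits)
    prob /= prob.sum(axis=0, keepdims=True)
    # Soft argmax: expected hypothesis value at each pixel.
    return np.tensordot(depth_hypotheses, prob, axes=([0], [0]))

# Toy example: 8 depth hypotheses on a 2x3 equirectangular patch.
D, H, W = 8, 2, 3
rng = np.random.default_rng(0)
costs = rng.random((D, H, W))
depths = np.linspace(0.5, 8.0, D)
depth_map = softargmax_depth(costs, depths)
assert depth_map.shape == (H, W)
assert np.all((depth_map >= depths[0]) & (depth_map <= depths[-1]))
```

Because the softargmax is a weighted average, the regressed depth always lies within the swept hypothesis range and varies smoothly, which is what makes the operation differentiable end-to-end.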