Reconstruct as Far as You Can: Consensus of Non-Rigid Reconstruction from Feasible Regions
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Cha, Geonho | - |
dc.contributor.author | Lee, Minsik | - |
dc.contributor.author | Cho, Jungchan | - |
dc.contributor.author | Oh, Songhwai | - |
dc.date.accessioned | 2021-06-22T04:43:49Z | - |
dc.date.available | 2021-06-22T04:43:49Z | - |
dc.date.issued | 2021-02 | - |
dc.identifier.issn | 0162-8828 | - |
dc.identifier.issn | 1939-3539 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/erica/handle/2021.sw.erica/654 | - |
dc.description.abstract | Much progress has been made in non-rigid structure from motion (NRSfM) during the last two decades, making it possible to provide reasonable solutions for synthetically created benchmark data. To apply these NRSfM techniques in more realistic situations, however, two important problems must be solved: First, general scenes contain complex deformations as well as multiple objects, which violates the usual assumptions of previous NRSfM proposals. Second, videos contain many unreconstructable regions, either because their 2D trajectories are discontinued or because they remain static with respect to the camera, and these require careful handling. In this paper, we show that a consensus-based reconstruction framework can handle these issues effectively. Even though the entire scene is complex, its parts usually undergo simpler deformations, and even though some parts are unreconstructable, they can be weeded out to reduce their harmful effect on the overall reconstruction. The main difficulty of this approach lies in identifying appropriate parts; however, this can be effectively avoided by sampling parts stochastically and then aggregating their reconstructions afterwards (see the illustrative sketch following the record below). Experimental results show that the proposed method renews the state of the art on popular benchmark data under much harsher conditions, i.e., narrow camera view ranges, and that it can effectively reconstruct video-based real-world data for as many areas as possible without elaborate user input. © 1979-2012 IEEE. | - |
dc.format.extent | 15 | - |
dc.language | English | - |
dc.language.iso | ENG | - |
dc.publisher | IEEE Computer Society | - |
dc.title | Reconstruct as Far as You Can: Consensus of Non-Rigid Reconstruction from Feasible Regions | - |
dc.type | Article | - |
dc.publisher.location | United States | - |
dc.identifier.doi | 10.1109/TPAMI.2019.2931317 | - |
dc.identifier.scopusid | 2-s2.0-85099392976 | - |
dc.identifier.wosid | 000607383300016 | - |
dc.identifier.bibliographicCitation | IEEE Transactions on Pattern Analysis and Machine Intelligence, v.43, no.2, pp 623 - 637 | - |
dc.citation.title | IEEE Transactions on Pattern Analysis and Machine Intelligence | - |
dc.citation.volume | 43 | - |
dc.citation.number | 2 | - |
dc.citation.startPage | 623 | - |
dc.citation.endPage | 637 | - |
dc.type.docType | Article | - |
dc.description.isOpenAccess | N | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalResearchArea | Engineering | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Artificial Intelligence | - |
dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
dc.subject.keywordPlus | PROCRUSTEAN NORMAL-DISTRIBUTION | - |
dc.subject.keywordPlus | STRUCTURE-FROM-MOTION | - |
dc.subject.keywordPlus | 3D SHAPE RECOVERY | - |
dc.subject.keywordAuthor | Trajectory | - |
dc.subject.keywordAuthor | Strain | - |
dc.subject.keywordAuthor | Benchmark testing | - |
dc.subject.keywordAuthor | Structure from motion | - |
dc.subject.keywordAuthor | Two dimensional displays | - |
dc.subject.keywordAuthor | Cameras | - |
dc.subject.keywordAuthor | Shape | - |
dc.subject.keywordAuthor | Part-based reconstruction | - |
dc.subject.keywordAuthor | stochastic stitching | - |
dc.subject.keywordAuthor | non-rigid structure from motion | - |
dc.subject.keywordAuthor | structure from motion | - |
dc.identifier.url | https://ieeexplore.ieee.org/document/8778692 | - |
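
The abstract above outlines the paper's core idea: sample candidate parts of the scene stochastically, reconstruct each part independently, weed out infeasible parts, and aggregate the overlapping reconstructions into a consensus. The following is a minimal sketch of that idea, not the authors' implementation: the `reconstruct_part` placeholder (a rank-3 fit whose residual serves as a crude depth proxy), the feasibility threshold `tol`, and all parameter values are illustrative assumptions made for demonstration; a real system would plug in an actual per-part NRSfM solver.

```python
# A minimal sketch of consensus-based part reconstruction, assuming a
# placeholder per-part solver. Not the authors' implementation: the
# rank-3 residual "depth" proxy, the feasibility threshold `tol`, and
# all parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def reconstruct_part(W_part):
    """Hypothetical stand-in for a per-part NRSfM solver.

    W_part: (2F, p) stacked 2D tracks of one sampled part. Returns a
    crude (F, p) depth proxy plus the fit used for the feasibility
    check; a real system would substitute an actual NRSfM method."""
    F2, p = W_part.shape
    mean = W_part.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(W_part - mean, full_matrices=False)
    fit = (U[:, :3] * s[:3]) @ Vt[:3] + mean      # rank-3 fit of the tracks
    depth_proxy = np.linalg.norm(W_part - fit, axis=0)
    return np.tile(depth_proxy, (F2 // 2, 1)), fit

def consensus_reconstruction(W, n_samples=200, part_size=8, tol=1.0):
    """W: (2F, P) observed 2D trajectories. Samples point subsets
    stochastically, keeps parts whose fit error is below `tol`
    ("feasible regions"), and averages overlapping per-point estimates.
    Points never covered by a feasible part stay NaN (unreconstructed)."""
    F, P = W.shape[0] // 2, W.shape[1]
    acc, cnt = np.zeros((F, P)), np.zeros(P)
    for _ in range(n_samples):
        idx = rng.choice(P, size=min(part_size, P), replace=False)
        depths, fit = reconstruct_part(W[:, idx])
        if np.abs(W[:, idx] - fit).mean() < tol:  # weed out infeasible parts
            acc[:, idx] += depths
            cnt[idx] += 1
    out = np.full((F, P), np.nan)
    covered = cnt > 0
    out[:, covered] = acc[:, covered] / cnt[covered]  # consensus by averaging
    return out

# Toy usage: 10 frames (20 stacked rows) of 30 synthetic 2D tracks.
W = rng.normal(size=(20, 30))
Z = consensus_reconstruction(W)
print(np.isnan(Z[0]).sum(), "points left unreconstructed")
```

In the paper's setting, aggregation stitches part-wise 3D reconstructions together (after alignment) rather than averaging a scalar depth proxy; the averaging above only illustrates the consensus step and the weeding-out of infeasible regions.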