Video Extrapolation Using Neighboring Frames
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Lee, Sangwoo | - |
dc.contributor.author | Lee, Jungjin | - |
dc.contributor.author | Kim, Bumki | - |
dc.contributor.author | Kim, Kyehyun | - |
dc.contributor.author | Noh, Junyong | - |
dc.date.available | 2021-03-11T02:40:10Z | - |
dc.date.created | 2021-03-11 | - |
dc.date.issued | 2019-06 | - |
dc.identifier.issn | 0730-0301 | - |
dc.identifier.uri | http://scholarworks.bwise.kr/ssu/handle/2018.sw.ssu/40655 | - |
dc.description.abstract | With the popularity of immersive display systems that fill the viewer's field of view (FOV) entirely, demand for wide FOV content has increased. A video extrapolation technique based on reuse of existing videos is one of the most efficient ways to produce wide FOV content. Extrapolating a video poses a great challenge, however, due to the insufficient amount of cues and information that can be leveraged for the estimation of the extended region. This article introduces a novel framework that allows the extrapolation of an input video and consequently converts conventional content into content with wide FOV. The key idea of the proposed approach is to integrate the information from all frames in the input video into each frame. Utilizing the information from all frames is crucial because it is very difficult to achieve the goal with a two-dimensional-transformation-based approach when parallax caused by camera motion is apparent. Warping guided by three-dimensional scene points matches the viewpoints between the different frames. The matched frames are blended to create extended views. Various experiments demonstrate that the results of the proposed method are more visually plausible than those produced using state-of-the-art techniques. | - |
dc.language | English | - |
dc.language.iso | en | - |
dc.publisher | ASSOC COMPUTING MACHINERY | - |
dc.relation.isPartOf | ACM TRANSACTIONS ON GRAPHICS | - |
dc.title | Video Extrapolation Using Neighboring Frames | - |
dc.type | Article | - |
dc.identifier.doi | 10.1145/3196492 | - |
dc.type.rims | ART | - |
dc.identifier.bibliographicCitation | ACM TRANSACTIONS ON GRAPHICS, v.38, no.3 | - |
dc.description.journalClass | 1 | - |
dc.identifier.wosid | 000495415600002 | - |
dc.citation.number | 3 | - |
dc.citation.title | ACM TRANSACTIONS ON GRAPHICS | - |
dc.citation.volume | 38 | - |
dc.contributor.affiliatedAuthor | Lee, Jungjin | - |
dc.type.docType | Article | - |
dc.description.isOpenAccess | N | - |
dc.subject.keywordAuthor | Peripheral vision | - |
dc.subject.keywordAuthor | immersive content | - |
dc.subject.keywordAuthor | video extrapolation | - |
dc.subject.keywordPlus | PANORAMIC VIDEO | - |
dc.subject.keywordPlus | IMAGE | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Software Engineering | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
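The abstract describes warping neighboring frames into a reference viewpoint and blending them to fill the extended margins. As a rough illustration of that accumulate-and-blend step only, the sketch below uses simple horizontal shifts in place of the paper's 3D-point-guided warps; the function name, the `shifts` parameter, and the averaging scheme are all assumptions for this toy example, not the authors' implementation.

```python
import numpy as np

def extrapolate_frame(center, neighbors, shifts, pad):
    """Place the center frame on a wider canvas and estimate the
    padded margins by averaging shifted neighboring frames.
    `shifts` are per-neighbor horizontal offsets, a toy stand-in
    for viewpoint-matching warps guided by 3D scene points."""
    h, w = center.shape
    canvas = np.zeros((h, w + 2 * pad), dtype=np.float64)
    weight = np.zeros_like(canvas)
    # Accumulate each neighbor at its known offset on the wide canvas.
    for frame, dx in zip(neighbors, shifts):
        x0 = pad + dx
        canvas[:, x0:x0 + w] += frame
        weight[:, x0:x0 + w] += 1.0
    # Blend overlapping contributions; leave uncovered pixels at zero.
    out = np.divide(canvas, weight,
                    out=np.zeros_like(canvas), where=weight > 0)
    # The original frame overwrites the interior; only margins are estimated.
    out[:, pad:pad + w] = center
    return out
```

With neighbors that are exact crops of a wider scene, the blended canvas recovers the full scene, which is the idealized case the real method approximates under parallax.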