Detailed Information


Dynamic Video Deblurring Using a Locally Adaptive Blur Model

Full metadata record
DC Field: Value

dc.contributor.author: Kim, Tae Hyun
dc.contributor.author: Nah, Seungjun
dc.contributor.author: Lee, Kyoung Mu
dc.date.accessioned: 2022-07-11T05:18:02Z
dc.date.available: 2022-07-11T05:18:02Z
dc.date.created: 2021-05-14
dc.date.issued: 2018-10
dc.identifier.issn: 0162-8828
dc.identifier.uri: https://scholarworks.bwise.kr/hanyang/handle/2021.sw.hanyang/149167
dc.description.abstract: State-of-the-art video deblurring methods cannot handle blurry videos recorded in dynamic scenes, since they are built on the strong assumption that the captured scenes are static. In contrast to existing methods, we propose a new video deblurring algorithm that can handle the general blurs inherent in dynamic scenes. To handle general, locally varying blurs caused by various sources, such as moving objects, camera shake, depth variation, and defocus, we estimate pixel-wise varying non-uniform blur kernels. We infer bidirectional optical flows to handle motion blur, and also estimate Gaussian blur maps to remove defocus blur. Accordingly, we propose a single energy model that jointly estimates optical flows, defocus blur maps, and latent frames, together with a framework and efficient solvers to minimize it. By optimizing the energy model, we achieve significant improvements in removing general blurs, estimating optical flows, and extending the depth of field in blurry frames. Moreover, to objectively evaluate the performance of non-uniform deblurring methods, we have constructed a new realistic dataset with ground truths. Extensive experimental results on publicly available challenging videos demonstrate that the proposed method produces qualitatively superior results to state-of-the-art methods, which often fail in either deblurring or optical flow estimation.
dc.language: English
dc.language.iso: en
dc.publisher: IEEE COMPUTER SOC
dc.title: Dynamic Video Deblurring Using a Locally Adaptive Blur Model
dc.type: Article
dc.contributor.affiliatedAuthor: Kim, Tae Hyun
dc.identifier.doi: 10.1109/TPAMI.2017.2761348
dc.identifier.scopusid: 2-s2.0-85031820870
dc.identifier.wosid: 000443875500007
dc.identifier.bibliographicCitation: IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, v.40, no.10, pp.2374-2387
dc.relation.isPartOf: IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE
dc.citation.title: IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE
dc.citation.volume: 40
dc.citation.number: 10
dc.citation.startPage: 2374
dc.citation.endPage: 2387
dc.type.rims: ART
dc.type.docType: Regular academic journal (Article, including Perspective Article)
dc.description.journalClass: 1
dc.description.isOpenAccess: N
dc.description.journalRegisteredClass: scie
dc.description.journalRegisteredClass: scopus
dc.relation.journalResearchArea: Computer Science
dc.relation.journalResearchArea: Engineering
dc.relation.journalWebOfScienceCategory: Computer Science, Artificial Intelligence
dc.relation.journalWebOfScienceCategory: Engineering, Electrical & Electronic
dc.subject.keywordPlus: CAMERA SHAKE
dc.subject.keywordPlus: OPTICAL-FLOW
dc.subject.keywordPlus: SINGLE
dc.subject.keywordAuthor: Video deblurring
dc.subject.keywordAuthor: non-uniform blur
dc.subject.keywordAuthor: motion blur
dc.subject.keywordAuthor: defocus blur
dc.subject.keywordAuthor: optical flow
dc.subject.keywordAuthor: Gaussian blur map
dc.subject.keywordAuthor: non-uniform blur dataset
dc.identifier.url: https://ieeexplore.ieee.org/document/8063973
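The abstract describes a locally adaptive blur model in which each pixel's blur kernel combines motion blur (derived from bidirectional optical flow) and defocus blur (a per-pixel Gaussian blur map). The following is a minimal NumPy sketch of the *forward* direction of such a model — synthesizing a blurred frame from given flow and blur maps — not the paper's energy model or its solvers; the function `pixelwise_blur`, the sample count, and the weighting scheme are all illustrative assumptions:

```python
import numpy as np

def pixelwise_blur(img, flow, sigma_map, n_samples=9):
    """Hypothetical forward model of locally varying blur.

    Each output pixel averages intensities sampled along its
    bidirectional flow vector (motion blur) and, around each sample,
    over a Gaussian-weighted neighborhood whose standard deviation
    comes from the per-pixel defocus blur map (sigma_map).
    """
    H, W = img.shape
    out = np.zeros_like(img, dtype=np.float64)
    # Bidirectional trajectory: samples from -tau/2 to +tau/2 around the pixel.
    ts = np.linspace(-0.5, 0.5, n_samples)
    for y in range(H):
        for x in range(W):
            u, v = flow[y, x]
            sigma = max(float(sigma_map[y, x]), 1e-3)
            acc, wsum = 0.0, 0.0
            for t in ts:
                # Point on the motion trajectory through (x, y), clamped to the image.
                px = min(max(int(round(x + t * u)), 0), W - 1)
                py = min(max(int(round(y + t * v)), 0), H - 1)
                # Gaussian (defocus) average around the trajectory sample.
                r = int(np.ceil(2 * sigma))
                y0, y1 = max(py - r, 0), min(py + r + 1, H)
                x0, x1 = max(px - r, 0), min(px + r + 1, W)
                yy, xx = np.mgrid[y0:y1, x0:x1]
                w = np.exp(-((yy - py) ** 2 + (xx - px) ** 2) / (2 * sigma ** 2))
                acc += (w * img[y0:y1, x0:x1]).sum()
                wsum += w.sum()
            out[y, x] = acc / wsum
    return out
```

The paper inverts this kind of model: it jointly estimates the flow, the blur map, and the latent sharp frames by minimizing a single energy, whereas this sketch only shows why zero flow and a near-zero sigma map leave a frame unchanged, while large flow or sigma values produce motion or defocus blur respectively.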
Appears in Collections:
서울 공과대학 > 서울 컴퓨터소프트웨어학부 > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Kim, Tae Hyun
COLLEGE OF ENGINEERING (SCHOOL OF COMPUTER SCIENCE)
