Test-Time Adaptation for Video Frame Interpolation via Meta-Learning
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Choi, Myungsub | - |
dc.contributor.author | Choi, Janghoon | - |
dc.contributor.author | Baik, Sungyong | - |
dc.contributor.author | Kim, Tae Hyun | - |
dc.contributor.author | Lee, Kyoung Mu | - |
dc.date.accessioned | 2022-12-20T04:59:26Z | - |
dc.date.available | 2022-12-20T04:59:26Z | - |
dc.date.created | 2022-12-07 | - |
dc.date.issued | 2022-12 | - |
dc.identifier.issn | 0162-8828 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/hanyang/handle/2021.sw.hanyang/172787 | - |
dc.description.abstract | Video frame interpolation is a challenging problem that involves various scenarios depending on the variety of foreground and background motions, frame rate, and occlusion. Therefore, generalizing across different scenes is difficult for a single network with fixed parameters. Ideally, one could have a different network for each scenario, but this would be computationally infeasible for practical applications. In this work, we propose MetaVFI, an adaptive video frame interpolation algorithm that uses additional information readily available at test time but not exploited in previous works. We first show the benefits of test-time adaptation through simple fine-tuning of a network and then greatly improve its efficiency by incorporating meta-learning. Thus, we obtain significant performance gains with only a single gradient update and without introducing any additional parameters. Moreover, the proposed MetaVFI algorithm is model-agnostic and can be easily combined with any video frame interpolation network. We show that our adaptive framework greatly improves the performance of baseline video frame interpolation networks on multiple benchmark datasets. | - |
dc.language | English | - |
dc.language.iso | en | - |
dc.publisher | IEEE COMPUTER SOC | - |
dc.title | Test-Time Adaptation for Video Frame Interpolation via Meta-Learning | - |
dc.type | Article | - |
dc.contributor.affiliatedAuthor | Kim, Tae Hyun | - |
dc.identifier.doi | 10.1109/TPAMI.2021.3129819 | - |
dc.identifier.scopusid | 2-s2.0-85120083066 | - |
dc.identifier.wosid | 000880661400075 | - |
dc.identifier.bibliographicCitation | IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, v.44, no.12, pp.9615 - 9628 | - |
dc.relation.isPartOf | IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE | - |
dc.citation.title | IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE | - |
dc.citation.volume | 44 | - |
dc.citation.number | 12 | - |
dc.citation.startPage | 9615 | - |
dc.citation.endPage | 9628 | - |
dc.type.rims | ART | - |
dc.type.docType | Article | - |
dc.description.journalClass | 1 | - |
dc.description.isOpenAccess | N | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalResearchArea | Engineering | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Artificial Intelligence | - |
dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
dc.subject.keywordPlus | Benchmarking | - |
dc.subject.keywordPlus | Computer vision | - |
dc.subject.keywordPlus | Job analysis | - |
dc.subject.keywordPlus | Adaptation models | - |
dc.subject.keywordPlus | Frame interpolation | - |
dc.subject.keywordPlus | Images synthesis | - |
dc.subject.keywordPlus | MAML | - |
dc.subject.keywordPlus | Metalearning | - |
dc.subject.keywordPlus | Performance Gain | - |
dc.subject.keywordPlus | Self-supervision | - |
dc.subject.keywordPlus | Slow motion | - |
dc.subject.keywordPlus | Superresolution | - |
dc.subject.keywordPlus | Task analysis | - |
dc.subject.keywordPlus | Test time | - |
dc.subject.keywordPlus | Test-time adaptation | - |
dc.subject.keywordPlus | Video frame | - |
dc.subject.keywordPlus | Video frame interpolation | - |
dc.subject.keywordPlus | Agnostic | - |
dc.subject.keywordPlus | algorithm | - |
dc.subject.keywordPlus | article | - |
dc.subject.keywordPlus | human | - |
dc.subject.keywordPlus | learning | - |
dc.subject.keywordPlus | videorecording | - |
dc.subject.keywordPlus | Interpolation | - |
dc.subject.keywordAuthor | Interpolation | - |
dc.subject.keywordAuthor | Adaptation models | - |
dc.subject.keywordAuthor | Estimation | - |
dc.subject.keywordAuthor | Task analysis | - |
dc.subject.keywordAuthor | Visualization | - |
dc.subject.keywordAuthor | Superresolution | - |
dc.subject.keywordAuthor | Performance gain | - |
dc.subject.keywordAuthor | Video frame interpolation | - |
dc.subject.keywordAuthor | test-time adaptation | - |
dc.subject.keywordAuthor | meta-learning | - |
dc.subject.keywordAuthor | slow motion | - |
dc.subject.keywordAuthor | self-supervision | - |
dc.subject.keywordAuthor | image synthesis | - |
dc.subject.keywordAuthor | MAML | - |
dc.identifier.url | https://ieeexplore.ieee.org/document/9625774 | - |
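The abstract describes self-supervised test-time adaptation with a single meta-learned gradient update: triplets taken from the test video itself provide supervision (the middle frame is the ground truth for interpolating between its neighbors), so the network can adapt without labels. The sketch below illustrates that single inner-loop update on a deliberately tiny stand-in "model" (a scalar blend weight); it is a minimal illustration of the idea, not the authors' MetaVFI implementation, and all names in it are hypothetical.

```python
import numpy as np

def interpolate(w, f0, f1):
    # Toy "network": a weighted blend of the two input frames.
    return w * 0.5 * (f0 + f1)

def adapt_one_step(w, frames, lr=0.5):
    """One self-supervised gradient update (MAML-style inner loop).

    For each triplet (f0, f1, f2) from the test video, f1 serves as the
    ground truth for interpolating between f0 and f2, so no external
    labels are needed.
    """
    grad, n = 0.0, 0
    for t in range(len(frames) - 2):
        f0, f1, f2 = frames[t], frames[t + 1], frames[t + 2]
        err = interpolate(w, f0, f2) - f1
        # d(MSE)/dw for pred = w * 0.5 * (f0 + f2)
        grad += np.mean(2.0 * err * 0.5 * (f0 + f2))
        n += 1
    return w - lr * grad / n

# Toy test video of constant frames: the ideal blend weight is 1.0.
frames = [np.full((4, 4), 1.0) for _ in range(4)]
w = adapt_one_step(0.5, frames)  # adapt from a poor initialization
```

In the paper's setting the scalar `w` would be the full parameter vector of a frame interpolation network, and meta-learning chooses an initialization from which this single update is maximally effective.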