Multi-scale contrast and relative motion-based key frame extraction
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Ejaz, Naveed | - |
dc.contributor.author | Baik, Sung Wook | - |
dc.contributor.author | Majeed, Hammad | - |
dc.contributor.author | Chang, Hangbae | - |
dc.contributor.author | Mehmood, Irfan | - |
dc.date.available | 2019-03-07T04:39:09Z | - |
dc.date.issued | 2018-06 | - |
dc.identifier.issn | 1687-5281 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/cau/handle/2019.sw.cau/2066 | - |
dc.description.abstract | The huge amount of video data available today requires effective techniques for storage, indexing, and retrieval. Video summarization, one such management technique, provides concise versions of videos for efficient browsing and retrieval. Key frame extraction is a form of video summarization that selects only the most salient frames from a given video. Since automatic semantic understanding of video content is not yet possible, most existing works employ low-level index features for extracting key frames. However, using low-level features results in a loss of semantic detail, leading to a semantic gap. In this context, saliency-based user attention modeling can be used to bridge this gap. In this paper, a key frame extraction scheme based on a visual attention mechanism is proposed. The proposed scheme builds a static visual attention model based on multi-scale contrast instead of the usual color contrast. The dynamic visual attention model is developed based on novel relative motion intensity and relative motion orientation measures. An efficient fusion scheme for combining the three visual attention values is then proposed, followed by a flexible technique for key frame extraction. The experimental results demonstrate that the proposed mechanism yields excellent results compared to some of the other prominent techniques in the literature. | - |
dc.language | English | - |
dc.language.iso | ENG | - |
dc.publisher | SPRINGER INTERNATIONAL PUBLISHING AG | - |
dc.title | Multi-scale contrast and relative motion-based key frame extraction | - |
dc.type | Article | - |
dc.identifier.doi | 10.1186/s13640-018-0280-z | - |
dc.identifier.bibliographicCitation | EURASIP JOURNAL ON IMAGE AND VIDEO PROCESSING, v.2018, no.1 | - |
dc.description.isOpenAccess | Y | - |
dc.identifier.wosid | 000435024700001 | - |
dc.identifier.scopusid | 2-s2.0-85048212314 | - |
dc.citation.number | 1 | - |
dc.citation.title | EURASIP JOURNAL ON IMAGE AND VIDEO PROCESSING | - |
dc.citation.volume | 2018 | - |
dc.type.docType | Article | - |
dc.publisher.location | Switzerland | - |
dc.subject.keywordAuthor | Key frame extraction | - |
dc.subject.keywordAuthor | Video summarization | - |
dc.subject.keywordAuthor | Visual saliency | - |
dc.subject.keywordAuthor | Visual attention model | - |
dc.subject.keywordAuthor | Fusion mechanism | - |
dc.subject.keywordAuthor | Video summary evaluation | - |
dc.subject.keywordPlus | ATTENTION MODEL | - |
dc.subject.keywordPlus | VIDEO | - |
dc.subject.keywordPlus | SELECTION | - |
dc.relation.journalResearchArea | Engineering | - |
dc.relation.journalResearchArea | Imaging Science & Photographic Technology | - |
dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
dc.relation.journalWebOfScienceCategory | Imaging Science & Photographic Technology | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
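The abstract describes a pipeline of static attention (multi-scale contrast), dynamic attention (relative motion), fusion into a single attention curve, and key frame selection from that curve. The following is a minimal, hypothetical sketch of that general approach, not the authors' exact model: multi-scale contrast is approximated here by block-mean contrast at several scales, and relative motion by simple frame differencing; the function names, weights, and peak-selection rule are illustrative assumptions.

```python
import numpy as np

def static_attention(frame, scales=(1, 2, 4)):
    """Rough multi-scale contrast: mean deviation of block means at several scales.
    (Stand-in for the paper's multi-scale contrast model.)"""
    h, w = frame.shape
    score = 0.0
    for s in scales:
        hs, ws = (h // s) * s, (w // s) * s
        # block-average the frame at scale s, then measure contrast of the blocks
        blocks = frame[:hs, :ws].reshape(hs // s, s, ws // s, s).mean(axis=(1, 3))
        score += np.abs(blocks - blocks.mean()).mean()
    return score / len(scales)

def dynamic_attention(prev, curr):
    """Stand-in for relative motion intensity: mean absolute frame difference."""
    return np.abs(curr.astype(float) - prev.astype(float)).mean()

def extract_key_frames(frames, w_static=0.5, w_dynamic=0.5, top_k=2):
    """Fuse normalized static and dynamic attention into one curve,
    then return the indices of the top_k highest-attention frames."""
    static = np.array([static_attention(f) for f in frames])
    dyn = np.array([0.0] + [dynamic_attention(frames[i - 1], frames[i])
                            for i in range(1, len(frames))])
    norm = lambda v: (v - v.min()) / (np.ptp(v) + 1e-9)
    curve = w_static * norm(static) + w_dynamic * norm(dyn)
    return sorted(np.argsort(curve)[-top_k:])
```

For example, in a synthetic 10-frame clip where only frames 3 and 7 contain a high-contrast moving patch, the fused attention curve peaks at those two frames and they are returned as the key frames. The paper's fusion scheme additionally uses relative motion orientation; this sketch collapses the dynamic term to intensity only for brevity.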