Content vs. Context: Visual and Geographic Information Use in Video Landmark Retrieval
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Yin, Yifang | - |
dc.contributor.author | Seo, Beomjoo | - |
dc.contributor.author | Zimmermann, Roger | - |
dc.date.available | 2021-03-17T10:44:22Z | - |
dc.date.created | 2020-07-06 | - |
dc.date.issued | 2015-01 | - |
dc.identifier.issn | 1551-6857 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/hongik/handle/2020.sw.hongik/13721 | - |
dc.description.abstract | Due to the ubiquity of sensor-equipped smartphones, it has become increasingly feasible for users to capture videos together with associated geographic metadata, for example, the location and the orientation of the camera. Such contextual information creates new opportunities for the organization and retrieval of geo-referenced videos. In this study we explore the task of landmark retrieval through the analysis of two types of state-of-the-art techniques, namely media-content-based and geo-context-based retrieval. For the content-based method, we choose the Spatial Pyramid Matching (SPM) approach combined with two advanced coding methods: Sparse Coding (SC) and Locality-Constrained Linear Coding (LLC). For the geo-based method, we present the Geo Landmark Visibility Determination (GeoLVD) approach, which computes the visibility of a landmark based on intersections of a camera's field-of-view (FOV) and the landmark's geometric information available from Geographic Information Systems (GIS) and services. We first compare the retrieval results of the two methods and discuss the strengths and weaknesses of each approach in terms of precision, recall, and execution time. Next we analyze the factors that affect the effectiveness of the content-based and geo-based methods, respectively. Finally we propose a hybrid retrieval method based on the integration of the visual (content) and geographic (context) information, which is shown to achieve significant improvements in our experiments. We believe that the results and observations in this work will inform the design of future geo-referenced video retrieval systems, improve our understanding of selecting the most appropriate visual features for indexing and searching, and help in selecting the most suitable retrieval method under different conditions. | - |
dc.language | English | - |
dc.language.iso | en | - |
dc.publisher | ASSOC COMPUTING MACHINERY | - |
dc.title | Content vs. Context: Visual and Geographic Information Use in Video Landmark Retrieval | - |
dc.type | Article | - |
dc.contributor.affiliatedAuthor | Seo, Beomjoo | - |
dc.identifier.doi | 10.1145/2700287 | - |
dc.identifier.scopusid | 2-s2.0-84923328962 | - |
dc.identifier.wosid | 000349852500007 | - |
dc.identifier.bibliographicCitation | ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, v.11, no.3 | - |
dc.relation.isPartOf | ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS | - |
dc.citation.title | ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS | - |
dc.citation.volume | 11 | - |
dc.citation.number | 3 | - |
dc.type.rims | ART | - |
dc.type.docType | Article | - |
dc.description.journalClass | 1 | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Information Systems | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Software Engineering | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Theory & Methods | - |
dc.subject.keywordAuthor | Experimentation | - |
dc.subject.keywordAuthor | Content-based analysis | - |
dc.subject.keywordAuthor | geo-referenced videos | - |
dc.subject.keywordAuthor | landmark retrieval | - |
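The abstract's GeoLVD approach decides whether a landmark is visible by intersecting the camera's field-of-view with the landmark's geometry. A minimal sketch of that idea in Python, assuming planar (x, y) coordinates in place of lat/lon, a heading measured in degrees counterclockwise from the +x axis, and a simplified vertex-in-sector test rather than the paper's full polygon intersection (function names, parameters, and the test geometry are illustrative, not from the paper):

```python
import math

def point_in_fov(cam, heading_deg, fov_deg, max_dist, point):
    """Return True if `point` lies inside the camera's viewable sector.

    cam, point: (x, y) planar coordinates (a simplification of lat/lon).
    heading_deg: camera direction, degrees counterclockwise from +x.
    fov_deg: full horizontal field-of-view angle in degrees.
    max_dist: maximum visible distance in the same units as the coordinates.
    """
    dx, dy = point[0] - cam[0], point[1] - cam[1]
    if math.hypot(dx, dy) > max_dist:
        return False
    bearing = math.degrees(math.atan2(dy, dx))
    # Smallest signed difference between the bearing to the point
    # and the camera heading, normalized into (-180, 180].
    diff = (bearing - heading_deg + 180) % 360 - 180
    return abs(diff) <= fov_deg / 2

def landmark_visible(cam, heading_deg, fov_deg, max_dist, footprint):
    """Approximate visibility test: the landmark counts as visible if any
    vertex of its footprint polygon falls inside the FOV sector. (GeoLVD
    as described intersects the full geometries; this is a sketch.)"""
    return any(point_in_fov(cam, heading_deg, fov_deg, max_dist, v)
               for v in footprint)
```

For example, a camera at the origin facing along the +x axis with a 60° FOV and 100-unit range sees a point at (10, 0) but not one at (0, 10), which lies 90° off-axis.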
Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.
Certain data included herein are derived from Web of Science © Clarivate Analytics. All rights reserved.
You may not copy or re-distribute this material in whole or in part without the prior written consent of Clarivate Analytics.