Data-Driven Video Scene Importance Estimation for Adaptive Video Streaming
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Choi, Wangyu | - |
dc.contributor.author | Yoon, Jongwon | - |
dc.date.accessioned | 2024-09-23T07:30:24Z | - |
dc.date.available | 2024-09-23T07:30:24Z | - |
dc.date.issued | 2024-07 | - |
dc.identifier.issn | 2165-8528 | - |
dc.identifier.issn | 2165-8536 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/erica/handle/2021.sw.erica/120533 | - |
dc.description.abstract | Recently, many video streaming services have adopted adaptive bitrate (ABR) algorithms for playback optimization. Traditionally, ABR algorithms strive to estimate network conditions accurately. In recent years, ABR algorithms have also incorporated the content of the video itself, exploiting the fact that viewers are more interested in certain segments of a video while watching, and they have achieved significant performance improvements. However, these efforts are time-consuming and costly, and they are difficult to adapt to new videos. To overcome these limitations, we propose a system that estimates scene saliency for new videos. To do so, we first build a dataset from a large-scale video streaming service and then train a deep learning model consisting of a 3D CNN and a Transformer on it. As a result, our proposed model achieves a significantly lower prediction error rate on unseen videos and yields a significant QoE improvement when incorporated into an ABR algorithm. © 2024 IEEE. | - |
dc.format.extent | 4 | - |
dc.language | English | - |
dc.language.iso | ENG | - |
dc.publisher | IEEE Computer Society | - |
dc.title | Data-Driven Video Scene Importance Estimation for Adaptive Video Streaming | - |
dc.type | Article | - |
dc.publisher.location | United States | - |
dc.identifier.doi | 10.1109/ICUFN61752.2024.10624977 | - |
dc.identifier.scopusid | 2-s2.0-85202733731 | - |
dc.identifier.wosid | 001307336600076 | - |
dc.identifier.bibliographicCitation | 2024 Fifteenth International Conference on Ubiquitous and Future Networks (ICUFN), pp 348 - 351 | - |
dc.citation.title | 2024 Fifteenth International Conference on Ubiquitous and Future Networks (ICUFN) | - |
dc.citation.startPage | 348 | - |
dc.citation.endPage | 351 | - |
dc.type.docType | Proceedings Paper | - |
dc.description.isOpenAccess | N | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Engineering | - |
dc.relation.journalResearchArea | Telecommunications | - |
dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
dc.relation.journalWebOfScienceCategory | Telecommunications | - |
dc.subject.keywordAuthor | Adaptive video streaming | - |
dc.subject.keywordAuthor | Deep learning | - |
dc.subject.keywordAuthor | Scene importance estimation | - |
dc.identifier.url | https://ieeexplore.ieee.org/document/10624977 | - |
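The abstract states that per-scene importance (saliency) scores improve QoE when incorporated into an ABR algorithm, but does not spell out the integration. Below is a minimal, hypothetical sketch of one plausible scheme, assumed for illustration only: given per-segment importance scores (such as the paper's model would predict), a greedy scheduler upgrades the most important segments to higher bitrate rungs first, subject to a total bitrate budget. The function name, the greedy strategy, and all parameters are assumptions, not the authors' method.

```python
def allocate_bitrates(importance, ladder, budget):
    """Hypothetical importance-aware bitrate allocation.

    importance: per-segment saliency scores in [0, 1]
    ladder:     ascending list of available bitrates (kbps)
    budget:     total bitrate budget across all segments (kbps)
    Returns one bitrate from `ladder` per segment.
    """
    n = len(importance)
    choice = [0] * n              # index into `ladder` per segment
    spent = ladder[0] * n         # everyone starts at the lowest rung
    if spent > budget:
        raise ValueError("budget cannot cover the lowest rung")

    # Visit segments in descending importance; upgrade each one rung
    # at a time as far as the remaining budget allows.
    for i in sorted(range(n), key=lambda i: -importance[i]):
        while choice[i] + 1 < len(ladder):
            step = ladder[choice[i] + 1] - ladder[choice[i]]
            if spent + step > budget:
                break
            choice[i] += 1
            spent += step
    return [ladder[c] for c in choice]


# Example: the most salient segment gets the top rung, the rest stay low.
rates = allocate_bitrates([0.9, 0.1, 0.5], [300, 1000, 3000], budget=4000)
print(rates)  # → [3000, 300, 300]
```

A network-aware ABR controller would additionally fold in throughput estimates and buffer occupancy; this sketch isolates only the role an importance signal could play in biasing the allocation.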