Prediction of head movement in 360-degree videos using attention model
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Lee, Dongwon | - |
dc.contributor.author | Choi, Minji | - |
dc.contributor.author | Lee, Joohyun | - |
dc.date.accessioned | 2021-11-08T04:34:38Z | - |
dc.date.available | 2021-11-08T04:34:38Z | - |
dc.date.issued | 2021-06 | - |
dc.identifier.issn | 1424-8220 | - |
dc.identifier.issn | 1424-3210 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/erica/handle/2021.sw.erica/106210 | - |
dc.description.abstract | In this paper, we propose a prediction algorithm, a combination of a Long Short-Term Memory (LSTM) network and an attention model, based on machine learning models, to predict the vision coordinates when watching 360-degree videos in a Virtual Reality (VR) or Augmented Reality (AR) system. Predicting the vision coordinates during video streaming is important when the network condition is degraded. However, traditional prediction models such as Moving Average (MA) and Autoregressive Moving Average (ARMA) are linear and therefore cannot capture nonlinear relationships. For this reason, machine learning models based on deep learning have recently been used for nonlinear prediction. We use the LSTM and Gated Recurrent Unit (GRU) neural network methods, which originate from Recurrent Neural Networks (RNNs), to predict the head position in 360-degree videos, and we adopt an attention model on top of the LSTM to produce more accurate results. We also compare the performance of the proposed model with that of other machine learning models, such as the Multi-Layer Perceptron (MLP) and RNN, using the root mean squared error (RMSE) between predicted and real coordinates. We demonstrate that our model predicts the vision coordinates more accurately than the other models across various videos. | - |
dc.format.extent | 22 | - |
dc.language | English | - |
dc.language.iso | ENG | - |
dc.publisher | Multidisciplinary Digital Publishing Institute (MDPI) | - |
dc.title | Prediction of head movement in 360-degree videos using attention model | - |
dc.type | Article | - |
dc.publisher.location | Switzerland | - |
dc.identifier.doi | 10.3390/s21113678 | - |
dc.identifier.scopusid | 2-s2.0-85106399523 | - |
dc.identifier.wosid | 000660668700001 | - |
dc.identifier.bibliographicCitation | Sensors, v.21, no.11, pp 1 - 22 | - |
dc.citation.title | Sensors | - |
dc.citation.volume | 21 | - |
dc.citation.number | 11 | - |
dc.citation.startPage | 1 | - |
dc.citation.endPage | 22 | - |
dc.type.docType | Article | - |
dc.description.isOpenAccess | Y | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Chemistry | - |
dc.relation.journalResearchArea | Engineering | - |
dc.relation.journalResearchArea | Instruments & Instrumentation | - |
dc.relation.journalWebOfScienceCategory | Chemistry, Analytical | - |
dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
dc.relation.journalWebOfScienceCategory | Instruments & Instrumentation | - |
dc.subject.keywordAuthor | LSTM | - |
dc.subject.keywordAuthor | GRU | - |
dc.subject.keywordAuthor | head movement | - |
dc.subject.keywordAuthor | time-series prediction | - |
dc.subject.keywordAuthor | machine learning | - |
dc.subject.keywordAuthor | attention model | - |
dc.identifier.url | https://www.mdpi.com/1424-8220/21/11/3678 | - |
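The abstract describes soft attention over recurrent hidden states and RMSE-based evaluation against linear baselines such as Moving Average. Below is a minimal NumPy sketch of those two ingredients, not the authors' implementation: the LSTM/GRU itself is omitted and the hidden states, the dot-product scoring, and the toy (yaw, pitch) trace are all illustrative assumptions.

```python
import numpy as np

def moving_average_predict(history, window=3):
    # Linear MA baseline: predict the next coordinate as the mean
    # of the last `window` observations (per axis).
    return history[-window:].mean(axis=0)

def attention_context(hidden_states, query):
    # Soft attention: score each hidden state against the query via a
    # dot product, softmax the scores, and return the weighted sum
    # (the context vector fed to the output layer).
    scores = hidden_states @ query                 # shape (T,)
    weights = np.exp(scores - scores.max())        # numerically stable softmax
    weights /= weights.sum()
    return weights @ hidden_states                 # shape (d,)

def rmse(pred, true):
    # Root mean squared error between predicted and real coordinates.
    return float(np.sqrt(np.mean((pred - true) ** 2)))

# Toy head-position trace (yaw, pitch) over 5 time steps (hypothetical data).
trace = np.array([[0.0, 0.0], [0.1, 0.0], [0.2, 0.1], [0.3, 0.1], [0.4, 0.2]])
pred = moving_average_predict(trace[:-1], window=3)
print(rmse(pred, trace[-1]))
```

With a zero query, the attention weights are uniform and the context reduces to the mean hidden state; a learned query (or a learned scoring network, as in the paper's attention model) instead concentrates weight on the time steps most informative for the next head position.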