Automatic Generation of Multimodal 4D Effects for Immersive Video Watching Experiences
| DC Field | Value | Language |
| --- | --- | --- |
dc.contributor.author | Nam, Seoyong | - |
dc.contributor.author | Chung, Minho | - |
dc.contributor.author | Kim, Haerim | - |
dc.contributor.author | Kim, Eunchae | - |
dc.contributor.author | Kim, Taehyeon | - |
dc.contributor.author | Yoo, Yongjae | - |
dc.date.accessioned | 2024-12-05T06:00:24Z | - |
dc.date.available | 2024-12-05T06:00:24Z | - |
dc.date.issued | 2024-12 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/erica/handle/2021.sw.erica/121169 | - |
dc.description.abstract | The recent shift toward watching content through over-the-top (OTT) services is pushing the 4D film industry to seek a way to transform. To address this issue, this paper proposes an AI-driven automatic 4D effects generation algorithm applied to a low-cost comfort chair. The system extracts multiple features using psychoacoustic analysis, saliency detection, optical flow, and LLM-based thermal effect synthesis, and automatically maps them to various sensory displays such as vibration, heat, wind, and poking. To evaluate the system, a user study with 21 participants across seven film genres was conducted. The results showed that 1) 4D effects generally improved immersion, concentration, and expressiveness, and 2) multisensory effects were particularly useful in action and fantasy movie scenes. The proposed system could be directly applied to current general video-on-demand services. | - |
dc.format.extent | 4 | - |
dc.language | English | - |
dc.language.iso | ENG | - |
dc.publisher | ACM | - |
dc.title | Automatic Generation of Multimodal 4D Effects for Immersive Video Watching Experiences | - |
dc.type | Article | - |
dc.publisher.location | United States | - |
dc.identifier.doi | 10.1145/3681758.3698021 | - |
dc.identifier.scopusid | 2-s2.0-85214815067 | - |
dc.identifier.wosid | 001443093400025 | - |
dc.identifier.bibliographicCitation | SA '24: SIGGRAPH Asia 2024 Technical Communications, pp 1 - 4 | - |
dc.citation.title | SA '24: SIGGRAPH Asia 2024 Technical Communications | - |
dc.citation.startPage | 1 | - |
dc.citation.endPage | 4 | - |
dc.type.docType | Proceedings Paper | - |
dc.description.isOpenAccess | N | - |
dc.description.journalRegisteredClass | other | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalResearchArea | Imaging Science & Photographic Technology | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Interdisciplinary Applications | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Software Engineering | - |
dc.relation.journalWebOfScienceCategory | Imaging Science & Photographic Technology | - |
dc.subject.keywordAuthor | 4D films | - |
dc.subject.keywordAuthor | haptics | - |
dc.subject.keywordAuthor | multimodal | - |
dc.subject.keywordAuthor | AI-driven haptic effects generation | - |
dc.identifier.url | https://dl.acm.org/doi/10.1145/3681758.3698021 | - |