Full-body Avatar Generation for Increased Embodiment
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Kim, Dohyung | - |
dc.contributor.author | Koo, Jun | - |
dc.contributor.author | Hwang, Jewoong | - |
dc.contributor.author | Seo, Minjae | - |
dc.contributor.author | Jung, Inhyung | - |
dc.contributor.author | Park, Kyoungju | - |
dc.date.accessioned | 2024-07-12T05:30:31Z | - |
dc.date.available | 2024-07-12T05:30:31Z | - |
dc.date.issued | 2024 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/cau/handle/2019.sw.cau/74713 | - |
dc.description.abstract | Having a virtual full body increases users' embodiment in virtual reality (VR) applications. Because consumer-grade VR systems provide only a head-mounted sensor and two hand-held sensors, full-body motion generation is a vastly under-determined problem. Consequently, most current VR applications either show only the upper body, driven by inverse kinematics from the three given sensors, or require additional sensors worn on the lower body. Recent studies have focused on neural network methods that generate a full-body avatar from three sparse sensors. Although these methods produce notable results with low error metrics on motion capture datasets, they are challenging to apply to online VR systems. Technical issues include that VR sensors fluctuate and lose connection, hand-held controllers are oriented differently depending on the user's grip, the coordinate system of VR platforms differs from that of motion capture datasets, and the global position and orientation of users vary in the virtual world. We address these technical issues and present a solution for generating a full-body avatar in VR systems. We compare our method with other full-body generation methods on online VR systems. We then show how our method works in two kinds of online VR tasks, a single-user obstacle task and a multi-user catch-ball task, and conduct a user study on embodiment and preference. © 2024 IEEE. | - |
dc.format.extent | 3 | - |
dc.language | English | - |
dc.language.iso | ENG | - |
dc.publisher | Institute of Electrical and Electronics Engineers Inc. | - |
dc.title | Full-body Avatar Generation for Increased Embodiment | - |
dc.type | Article | - |
dc.identifier.doi | 10.1109/VRW62533.2024.00328 | - |
dc.identifier.bibliographicCitation | Proceedings - 2024 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops, VRW 2024, pp 1063 - 1065 | - |
dc.description.isOpenAccess | N | - |
dc.identifier.scopusid | 2-s2.0-85195575779 | - |
dc.citation.endPage | 1065 | - |
dc.citation.startPage | 1063 | - |
dc.citation.title | Proceedings - 2024 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops, VRW 2024 | - |
dc.type.docType | Conference paper | - |
dc.subject.keywordAuthor | Artificial intelligence | - |
dc.subject.keywordAuthor | Computer Vision | - |
dc.subject.keywordAuthor | Computer Graphics | - |
dc.subject.keywordAuthor | Computing methodologies | - |
dc.subject.keywordAuthor | Graphics systems and interfaces | - |
dc.subject.keywordAuthor | Motion capture | - |
dc.subject.keywordAuthor | Virtual reality | - |
dc.description.journalRegisteredClass | scopus | - |