Geometry-Incorporated Posing of a Full-Body Avatar from Sparse Trackers (open access)
- Authors
- Anvari, Taravat; Park, Kyoungju
- Issue Date
- Aug-2023
- Publisher
- Institute of Electrical and Electronics Engineers Inc.
- Keywords
- 3D human pose estimation; Avatar; Avatars; Biomedical image processing; Learning systems; Mixed reality; Motion capture; Pelvis; Pose estimation; Real-time systems; Sensors; Three-dimensional displays; Time-domain analysis; Tracking; Virtual environments; Virtual reality
- Citation
- IEEE Access, v.11, pp 1 - 1
- Pages
- 1
- Journal Title
- IEEE Access
- Volume
- 11
- Start Page
- 1
- End Page
- 1
- URI
- https://scholarworks.bwise.kr/cau/handle/2019.sw.cau/67860
- DOI
- 10.1109/ACCESS.2023.3299323
- ISSN
- 2169-3536
- Abstract
- For embodied mixed reality (MR) experiences, it is crucial to accurately render the user’s full body in the virtual environment. Conventional MR systems provide only sparse trackers, such as a headset and two hand-held controllers. Recent studies have intensively investigated learning methods that regress the untracked joints from these sparse trackers, producing plausible poses in real time for MR applications. However, most studies have either assumed the root-joint position is known or constrained it, yielding stiff pelvis motion. This paper presents the first geometry-incorporated learning method that generates the position and rotation of all joints, including the root joint, from head and hand information for a wide range of motions. We split the problem into finding a reference frame and inferring the pose with respect to that new reference frame. Our method defines an avatar frame by setting a non-joint point as the origin and transforms joint data from the world coordinate system into the avatar coordinate system. Our learning builds on a propagating long short-term memory network that exploits prior knowledge of the kinematic chains and the previous time steps. The learned joints are transformed back to obtain positions with respect to the world frame. In our experiments, our method achieves competitive accuracy and robustness at a state-of-the-art speed of about 130 fps on motion capture datasets and on in-the-wild tracking data obtained from commercial MR devices. Our experiments confirm that the proposed method is practically applicable to MR systems.
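The abstract's core geometric step is transforming tracker data from world coordinates into an avatar-centric frame before inference, then transforming the predicted joints back. The sketch below illustrates one common way to do this; the exact non-joint origin and frame orientation used in the paper are not specified here, so the choice of a ground-projected head origin with yaw-only orientation is an assumption for illustration.

```python
# Sketch only: an avatar frame anchored at the head position projected onto the
# ground plane, oriented by the head's yaw. This makes pose data invariant to
# where the user stands and faces; the paper's actual frame definition may differ.
import numpy as np

def avatar_frame(head_pos, head_yaw):
    """Build the 4x4 world-from-avatar transform from head tracker data."""
    c, s = np.cos(head_yaw), np.sin(head_yaw)
    T = np.eye(4)
    T[:3, :3] = np.array([[ c, 0.0, s],
                          [0.0, 1.0, 0.0],
                          [-s, 0.0, c]])          # yaw-only rotation (about up axis)
    T[:3, 3] = [head_pos[0], 0.0, head_pos[2]]    # origin: head projected to ground
    return T

def world_to_avatar(points, T):
    """Map Nx3 world-space points into the avatar frame."""
    homo = np.hstack([points, np.ones((len(points), 1))])
    return (homo @ np.linalg.inv(T).T)[:, :3]

def avatar_to_world(points, T):
    """Map Nx3 avatar-frame points (e.g. network predictions) back to world space."""
    homo = np.hstack([points, np.ones((len(points), 1))])
    return (homo @ T.T)[:, :3]
```

In a pipeline like the one described, `world_to_avatar` would normalize the head/hand inputs before the recurrent network, and `avatar_to_world` would restore the predicted full-body joints, so the network never has to learn the user's absolute position or heading.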
- Files in This Item
- Appears in Collections
- College of Software > School of Computer Science and Engineering > 1. Journal Articles