WQuatNet: Wide range quaternion-based head pose estimation
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Algabri, Redhwan | - |
dc.contributor.author | Shin, Hyunsoo | - |
dc.contributor.author | Abdu, Ahmed | - |
dc.contributor.author | Bae, Ji-Hun | - |
dc.contributor.author | Lee, Sungon | - |
dc.date.accessioned | 2025-05-26T02:00:23Z | - |
dc.date.available | 2025-05-26T02:00:23Z | - |
dc.date.issued | 2025-04 | - |
dc.identifier.issn | 1319-1578 | - |
dc.identifier.issn | 2213-1248 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/erica/handle/2021.sw.erica/125340 | - |
dc.description.abstract | Head pose estimation (HPE) is a critical task for numerous applications, ranging from human-computer interaction and healthcare to robotics and surveillance. Most existing methods represent rotation with Euler angles, which suffer from gimbal lock, especially in full-range rotation scenarios, or with rotation matrices, which require nine parameters. This study introduces WQuatNet, a novel deep learning-based model that avoids these challenges by leveraging the quaternion representation, which uses only four parameters. WQuatNet is a landmark-free HPE method designed to predict head poses across the full 360° range of angles from images. Landmark-free methods bypass the need for explicit detection of facial landmarks; instead, they leverage the entire image to estimate head orientation. The model incorporates a RepVGG-D2se backbone for robust feature extraction and introduces two loss functions tailored to quaternion predictions. Experimental results on multiple HPE datasets covering both narrow- and full-range angles demonstrate that WQuatNet outperforms state-of-the-art (SOTA) approaches in accuracy. Performance was evaluated on the CMU, AGORA, BIWI, AFLW2000, and 300W-LP datasets. We also performed ablation studies and error analyses to validate the significance of each component of the model. | - |
dc.format.extent | 14 | - |
dc.language | English | - |
dc.language.iso | ENG | - |
dc.publisher | SPRINGERNATURE | - |
dc.title | WQuatNet: Wide range quaternion-based head pose estimation | - |
dc.type | Article | - |
dc.publisher.location | United Kingdom | - |
dc.identifier.doi | 10.1007/s44443-025-00034-1 | - |
dc.identifier.scopusid | 2-s2.0-105002773769 | - |
dc.identifier.wosid | 001489188100001 | - |
dc.identifier.bibliographicCitation | JOURNAL OF KING SAUD UNIVERSITY-COMPUTER AND INFORMATION SCIENCES, v.37, no.3, pp 1 - 14 | - |
dc.citation.title | JOURNAL OF KING SAUD UNIVERSITY-COMPUTER AND INFORMATION SCIENCES | - |
dc.citation.volume | 37 | - |
dc.citation.number | 3 | - |
dc.citation.startPage | 1 | - |
dc.citation.endPage | 14 | - |
dc.type.docType | Article | - |
dc.description.isOpenAccess | Y | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Information Systems | - |
dc.subject.keywordPlus | DEPTH | - |
dc.subject.keywordAuthor | Quaternion | - |
dc.subject.keywordAuthor | Head pose estimation | - |
dc.subject.keywordAuthor | Deep neural network | - |
dc.subject.keywordAuthor | Full range of rotation | - |
dc.identifier.url | https://link.springer.com/article/10.1007/s44443-025-00034-1 | - |
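The abstract's motivation, four quaternion parameters instead of a rotation matrix's nine, with no gimbal lock, can be illustrated with a minimal sketch. This is not code from the paper; the Z-Y-X Euler convention and function name below are assumptions chosen for illustration:

```python
import math

def euler_to_quaternion(yaw, pitch, roll):
    """Convert intrinsic Z-Y-X Euler angles (radians) to a unit
    quaternion (w, x, y, z): four parameters rather than the nine
    of a 3x3 rotation matrix, and free of gimbal lock."""
    cy, sy = math.cos(yaw / 2), math.sin(yaw / 2)
    cp, sp = math.cos(pitch / 2), math.sin(pitch / 2)
    cr, sr = math.cos(roll / 2), math.sin(roll / 2)
    w = cr * cp * cy + sr * sp * sy
    x = sr * cp * cy - cr * sp * sy
    y = cr * sp * cy + sr * cp * sy
    z = cr * cp * sy - sr * sp * cy
    return (w, x, y, z)

# A quaternion built this way is unit-norm by construction,
# which is why networks like WQuatNet can regress it directly.
q = euler_to_quaternion(math.radians(90), math.radians(45), 0.0)
norm = math.sqrt(sum(c * c for c in q))
```

A model predicting this four-vector (followed by normalization) sidesteps the singularities that make Euler-angle regression unreliable at full-range head rotations.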