Real-Time Human Action Prediction via Pose Kinematics
DC Field | Value | Language |
--- | --- | --- |
dc.contributor.author | Ahmad, Niaz | - |
dc.contributor.author | Ullah, Saif | - |
dc.contributor.author | Khan, Jawad | - |
dc.contributor.author | Lee, Youngmoon | - |
dc.date.accessioned | 2024-12-04T07:00:34Z | - |
dc.date.available | 2024-12-04T07:00:34Z | - |
dc.date.issued | 2025-11 | - |
dc.identifier.issn | 1865-0929 | - |
dc.identifier.issn | 1865-0937 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/erica/handle/2021.sw.erica/121147 | - |
dc.description.abstract | Recognizing human actions in real time poses a fundamental challenge, especially when those actions are coordinated with other humans or objects in a shared space. Such systems must be capable of recognizing and assessing real-world human actions from different angles and viewpoints. Therefore, a significant volume of multi-dimensional human action training data is needed to enable data-driven algorithms to operate effectively in real-world scenarios. This paper introduces the Action Clip dataset, which offers a 360° view of human action with rich features from multiple angles, and describes the design and implementation of Human Action Prediction via Pose Kinematics (HAPtics), a comprehensive pipeline for real-time human pose estimation and action recognition that requires only standard monocular camera sensors. HAPtics uses convolutional layers to transform initially noisy human pose kinematic structures into skeletal features, such as body velocity, joint velocity, joint angles, and limb lengths derived from joint positions, which are then fed into a classification layer for action recognition. Results on our proposed Action Clip dataset, NW-UCLA, and NTU RGB+D 120 demonstrate competitive state-of-the-art performance in pose-based action recognition and real-time performance at 30 frames per second on a live camera. Code and dataset are made available to the public at: https://github.com/RaiseLab/HAPtics. © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2025. | - |
dc.format.extent | 16 | - |
dc.language | English | - |
dc.language.iso | ENG | - |
dc.publisher | Springer Science and Business Media Deutschland GmbH | - |
dc.title | Real-Time Human Action Prediction via Pose Kinematics | - |
dc.type | Article | - |
dc.publisher.location | Germany | - |
dc.identifier.doi | 10.1007/978-981-97-9003-6_5 | - |
dc.identifier.scopusid | 2-s2.0-85210257252 | - |
dc.identifier.bibliographicCitation | Communications in Computer and Information Science, v.2201 CCIS, pp 69 - 84 | - |
dc.citation.title | Communications in Computer and Information Science | - |
dc.citation.volume | 2201 CCIS | - |
dc.citation.startPage | 69 | - |
dc.citation.endPage | 84 | - |
dc.type.docType | Conference paper | - |
dc.description.isOpenAccess | N | - |
dc.description.journalRegisteredClass | scopus | - |
dc.identifier.url | https://link.springer.com/chapter/10.1007/978-981-97-9003-6_5 | - |
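The abstract above describes deriving pose-kinematic features (body velocity, joint velocity, joint angles, limb lengths) from per-frame joint positions before classification. The sketch below illustrates one way such features could be computed with NumPy; the joint indexing, limb pairs, and function name are illustrative assumptions, not the authors' implementation (see the linked GitHub repository for the released code).

```python
import numpy as np

def skeletal_features(joints, fps=30.0):
    """Derive example pose-kinematic features from joint positions.

    A minimal sketch of the feature types listed in the abstract;
    the joint layout and limb/angle definitions are hypothetical.

    joints: array of shape (T, J, D) with T frames, J joints, D=2 or 3.
    """
    dt = 1.0 / fps

    # Joint velocity: frame-to-frame displacement of every joint.
    joint_vel = np.diff(joints, axis=0) / dt                  # (T-1, J, D)

    # Body velocity: displacement of the joint centroid (a stand-in for a root joint).
    body_vel = np.diff(joints.mean(axis=1), axis=0) / dt      # (T-1, D)

    # Limb lengths over a hypothetical joint indexing (0: shoulder, 1: elbow, 2: wrist).
    limbs = [(0, 1), (1, 2)]
    limb_len = np.stack(
        [np.linalg.norm(joints[:, a] - joints[:, b], axis=-1) for a, b in limbs],
        axis=-1,
    )                                                          # (T, len(limbs))

    # Joint angle at the elbow, from the two adjacent limb vectors.
    v1 = joints[:, 0] - joints[:, 1]
    v2 = joints[:, 2] - joints[:, 1]
    cos = (v1 * v2).sum(-1) / (
        np.linalg.norm(v1, axis=-1) * np.linalg.norm(v2, axis=-1) + 1e-8
    )
    elbow_angle = np.arccos(np.clip(cos, -1.0, 1.0))           # (T,)

    return joint_vel, body_vel, limb_len, elbow_angle
```

In a pipeline like the one described, features of this kind would be stacked per frame and passed to the convolutional and classification layers for action recognition.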