Image representation of pose-transition feature for 3D skeleton-based action recognition
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Thien Huynh-The | - |
dc.contributor.author | Hua, Cam-Hao | - |
dc.contributor.author | Trung-Thanh Ngo | - |
dc.contributor.author | Kim, Dong-Seong | - |
dc.date.available | 2020-04-24T09:24:42Z | - |
dc.date.created | 2020-03-31 | - |
dc.date.issued | 2020-03 | - |
dc.identifier.issn | 0020-0255 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/kumoh/handle/2020.sw.kumoh/96 | - |
dc.description.abstract | Recently, skeleton-based human action recognition has attracted growing interest from industrial and research communities for many practical applications, thanks to the popularity of depth sensors. Many conventional approaches, which exploit handcrafted features with traditional classifiers, cannot learn the high-level spatiotemporal features needed to precisely recognize complex human actions. In this paper, we introduce a novel encoding technique, namely Pose-Transition Feature to Image (PoT2I), to transform skeleton information into an image-based representation for deep convolutional neural networks (CNNs). The spatial joint correlations and temporal pose dynamics of an action are exhaustively depicted by an encoded color image. To learn action models, we fine-tune a pre-trained network end-to-end to thoroughly capture multiple high-level features at multi-scale action representation. The proposed method is benchmarked on several challenging 3D action recognition datasets (e.g., UTKinect-Action3D, SBU-Kinect Interaction, and NTU RGB+D) with different parameter configurations for performance analysis. Outstanding experimental results, with the highest accuracy of 90.33% on the most challenging NTU RGB+D dataset, demonstrate that our action recognition method with PoT2I outperforms state-of-the-art approaches. (C) 2019 Elsevier Inc. All rights reserved. | - |
dc.language | English | - |
dc.language.iso | en | - |
dc.publisher | ELSEVIER SCIENCE INC | - |
dc.subject | NETWORKS | - |
dc.title | Image representation of pose-transition feature for 3D skeleton-based action recognition | - |
dc.type | Article | - |
dc.contributor.affiliatedAuthor | Kim, Dong-Seong | - |
dc.identifier.doi | 10.1016/j.ins.2019.10.047 | - |
dc.identifier.scopusid | 2-s2.0-85075426030 | - |
dc.identifier.wosid | 000512221800007 | - |
dc.identifier.bibliographicCitation | INFORMATION SCIENCES, v.513, pp.112 - 126 | - |
dc.relation.isPartOf | INFORMATION SCIENCES | - |
dc.citation.title | INFORMATION SCIENCES | - |
dc.citation.volume | 513 | - |
dc.citation.startPage | 112 | - |
dc.citation.endPage | 126 | - |
dc.type.rims | ART | - |
dc.type.docType | Article | - |
dc.description.journalClass | 1 | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Information Systems | - |
dc.subject.keywordPlus | NETWORKS | - |
dc.subject.keywordAuthor | Pose-transition feature to image (PoT2I) encoding technique | - |
dc.subject.keywordAuthor | Depth camera | - |
dc.subject.keywordAuthor | Human action recognition | - |
dc.subject.keywordAuthor | Deep convolutional neural networks | - |
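The abstract describes encoding a skeleton sequence as a color image so that a CNN can learn spatiotemporal features. The record does not reproduce the paper's exact PoT2I construction (which additionally encodes pose-transition features between frames), so the sketch below only illustrates the general idea common to this family of methods: rows index joints, columns index frames, and normalized (x, y, z) joint coordinates map to (R, G, B) channels. The function name and layout are hypothetical, not taken from the paper.

```python
import numpy as np

def skeleton_to_image(frames):
    """Encode a skeleton sequence (T frames, J joints, 3 coords) as an
    RGB image: one row per joint, one column per frame, (x, y, z) -> (R, G, B).
    Illustrative sketch only; not the paper's exact PoT2I encoding."""
    seq = np.asarray(frames, dtype=np.float64)          # shape (T, J, 3)
    lo = seq.min(axis=(0, 1), keepdims=True)            # per-channel minimum
    hi = seq.max(axis=(0, 1), keepdims=True)            # per-channel maximum
    norm = (seq - lo) / np.maximum(hi - lo, 1e-8)       # scale each channel to [0, 1]
    img = np.transpose(norm, (1, 0, 2))                 # (J, T, 3): rows=joints, cols=frames
    return (img * 255).astype(np.uint8)                 # 8-bit color image

# Toy sequence: 4 frames of a 3-joint skeleton with random coordinates.
rng = np.random.default_rng(0)
seq = rng.random((4, 3, 3))
img = skeleton_to_image(seq)
print(img.shape)  # (3, 4, 3)
```

Such an image can then be fed to a pre-trained CNN (resized to the network's input resolution) and fine-tuned end-to-end, as the abstract describes.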