Syntactic model-based human body 3D reconstruction and event classification via association based features mining and deep learning
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Ghadi, Yazeed | - |
dc.contributor.author | Akhter, Israr | - |
dc.contributor.author | Alarfaj, Mohammed | - |
dc.contributor.author | Jalal, Ahmad | - |
dc.contributor.author | Kim, Kibum | - |
dc.date.accessioned | 2022-10-07T09:19:15Z | - |
dc.date.available | 2022-10-07T09:19:15Z | - |
dc.date.issued | 2021-11 | - |
dc.identifier.issn | 2376-5992 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/erica/handle/2021.sw.erica/110441 | - |
dc.description.abstract | The study of human posture analysis and gait event detection from various types of input is a key contribution to human life logging. With the help of such research and technologies, humans can save costs in terms of time and utility resources. In this paper we present a robust approach to human posture analysis and gait event detection from complex video-based data. Initially, posture information, landmark information, and a human 2D skeleton mesh are extracted; using this information set, we reconstruct a 3D human model from the 2D data. Contextual features, namely degrees of freedom over detected body parts, joint angle information, periodic and non-periodic motion, and human motion direction flow, are then extracted. For feature mining, we apply a rule-based feature mining technique, and for gait event detection and classification, a deep learning-based CNN is applied to the MPII Video Pose, COCO, and PoseTrack datasets. For the MPII Video Pose dataset, we achieved a human landmark detection mean accuracy of 87.09% and a gait event recognition mean accuracy of 90.90%. For the COCO dataset, we achieved a human landmark detection mean accuracy of 87.36% and a gait event recognition mean accuracy of 89.09%. For the PoseTrack dataset, we achieved a human landmark detection mean accuracy of 87.72% and a gait event recognition mean accuracy of 88.18%. The performance of the proposed system shows a significant improvement over existing state-of-the-art frameworks. (An illustrative sketch of the joint-angle feature computation follows this record.) | - |
dc.format.extent | 36 | - |
dc.language | English | - |
dc.language.iso | ENG | - |
dc.publisher | PeerJ Inc. | - |
dc.title | Syntactic model-based human body 3D reconstruction and event classification via association based features mining and deep learning | - |
dc.type | Article | - |
dc.publisher.location | United Kingdom | - |
dc.identifier.doi | 10.7717/peerj-cs.764 | - |
dc.identifier.scopusid | 2-s2.0-85121375790 | - |
dc.identifier.wosid | 000721214500001 | - |
dc.identifier.bibliographicCitation | PeerJ Computer Science, v.7, pp 1 - 36 | - |
dc.citation.title | PeerJ Computer Science | - |
dc.citation.volume | 7 | - |
dc.citation.startPage | 1 | - |
dc.citation.endPage | 36 | - |
dc.type.docType | Article | - |
dc.description.isOpenAccess | Y | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Artificial Intelligence | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Information Systems | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Theory & Methods | - |
dc.subject.keywordPlus | RECOGNITION | - |
dc.subject.keywordAuthor | 2D to 3D reconstruction | - |
dc.subject.keywordAuthor | Convolutional neural network | - |
dc.subject.keywordAuthor | Gait event classification | - |
dc.subject.keywordAuthor | Human posture analysis | - |
dc.subject.keywordAuthor | Landmark detection | - |
dc.subject.keywordAuthor | Synthetic model | - |
dc.subject.keywordAuthor | Silhouette optimization | - |
dc.identifier.url | https://peerj.com/articles/cs-764/ | - |
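The abstract lists joint angle information among the contextual features extracted from detected 2D landmarks. The snippet below is a minimal illustrative sketch, not the authors' implementation: it assumes COCO-style 17-keypoint arrays and hypothetical index constants (HIP, KNEE, ANKLE), and computes a per-frame knee angle as one plausible form of such a joint-angle feature.

```python
import numpy as np

# Hypothetical COCO-style keypoint indices (left hip, knee, ankle);
# chosen for illustration only, not taken from the paper.
HIP, KNEE, ANKLE = 11, 13, 15

def joint_angle(a, b, c):
    """Angle in degrees at joint b formed by points a-b-c,
    e.g. the knee angle from hip-knee-ankle landmarks."""
    ba = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bc = np.asarray(c, dtype=float) - np.asarray(b, dtype=float)
    cos_angle = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc) + 1e-9)
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

def knee_angles(keypoints_per_frame):
    """Per-frame knee-angle sequence from an (N, 17, 2) array of 2D landmarks,
    one possible contextual feature in the sense of the abstract."""
    return np.array([joint_angle(f[HIP], f[KNEE], f[ANKLE])
                     for f in keypoints_per_frame])

# Example: a short synthetic sequence of 17-keypoint frames.
frames = np.random.rand(5, 17, 2) * 100
print(knee_angles(frames))
```

Angle-style features of this kind are invariant to translation and image scale, which is one reason they are commonly used alongside degree-of-freedom and motion-direction features in gait analysis.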