A resource conscious human action recognition framework using 26-layered deep convolutional neural network
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Khan, M.A. | - |
dc.contributor.author | Zhang, Y.-D. | - |
dc.contributor.author | Khan, S.A. | - |
dc.contributor.author | Attique, M. | - |
dc.contributor.author | Rehman, A. | - |
dc.contributor.author | Seo, S. | - |
dc.date.accessioned | 2023-03-08T10:11:10Z | - |
dc.date.available | 2023-03-08T10:11:10Z | - |
dc.date.issued | 2021-11 | - |
dc.identifier.issn | 1380-7501 | - |
dc.identifier.issn | 1432-1882 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/cau/handle/2019.sw.cau/62113 | - |
dc.description.abstract | Vision-based human action recognition (HAR) has been a popular research topic for the past decade, driven by applications such as visual surveillance and robotics. Correct action recognition requires various local and global points, known as features, which change with variations in human movement. However, because several human actions differ from one another only slightly, their features become intermixed, which degrades recognition performance. In this article, we design a new 26-layered Convolutional Neural Network (CNN) architecture for accurate recognition of complex actions. Features are extracted from the global average pooling layer and the fully connected (FC) layer, and fused by a proposed high-entropy-based approach. We further propose a feature selection method named Poisson distribution along with Univariate Measures (PDaUM). Some of the fused CNN features are irrelevant and others are redundant, leading to incorrect predictions among complex human actions. The proposed PDaUM-based approach therefore selects only the strongest features, which are then passed to an Extreme Learning Machine (ELM) and a Softmax classifier for final recognition. Four datasets are used for the experimental analysis: HMDB51 (51 classes), UCF Sports (10 classes), KTH (6 classes), and Weizmann (10 classes). On these datasets, the ELM classifier outperforms the Softmax classifier, achieving accuracies of 81.4%, 99.2%, 98.3%, and 98.7%, respectively. Compared with existing techniques, the proposed architecture performs better in terms of both accuracy and testing time. © 2020, Springer Science+Business Media, LLC, part of Springer Nature. | - |
dc.format.extent | 23 | - |
dc.language | English | - |
dc.language.iso | ENG | - |
dc.publisher | Springer | - |
dc.title | A resource conscious human action recognition framework using 26-layered deep convolutional neural network | - |
dc.type | Article | - |
dc.identifier.doi | 10.1007/s11042-020-09408-1 | - |
dc.identifier.bibliographicCitation | Multimedia Tools and Applications, v.80, no.28-29, pp 35827 - 35849 | - |
dc.description.isOpenAccess | N | - |
dc.identifier.scopusid | 2-s2.0-85088858962 | - |
dc.citation.endPage | 35849 | - |
dc.citation.number | 28-29 | - |
dc.citation.startPage | 35827 | - |
dc.citation.title | Multimedia Tools and Applications | - |
dc.citation.volume | 80 | - |
dc.type.docType | Article | - |
dc.publisher.location | Netherlands | - |
dc.subject.keywordAuthor | Action recognition | - |
dc.subject.keywordAuthor | CNN architecture | - |
dc.subject.keywordAuthor | ELM | - |
dc.subject.keywordAuthor | Features fusion | - |
dc.subject.keywordAuthor | Features selection | - |
dc.subject.keywordPlus | Classification (of information) | - |
dc.subject.keywordPlus | Complex networks | - |
dc.subject.keywordPlus | Convolution | - |
dc.subject.keywordPlus | Convolutional neural networks | - |
dc.subject.keywordPlus | Deep neural networks | - |
dc.subject.keywordPlus | Learning systems | - |
dc.subject.keywordPlus | Network architecture | - |
dc.subject.keywordPlus | Poisson distribution | - |
dc.subject.keywordPlus | Action recognition | - |
dc.subject.keywordPlus | Entropy based approach | - |
dc.subject.keywordPlus | Experimental analysis | - |
dc.subject.keywordPlus | Extreme learning machine | - |
dc.subject.keywordPlus | Feature selection methods | - |
dc.subject.keywordPlus | Human-action recognition | - |
dc.subject.keywordPlus | Proposed architectures | - |
dc.subject.keywordPlus | Visual surveillance | - |
dc.subject.keywordPlus | Social robots | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
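The abstract describes passing the selected features to an Extreme Learning Machine (ELM) for final classification. The paper's exact configuration is not given in this record, so the following is a minimal, hypothetical sketch of a standard ELM classifier: random fixed hidden-layer weights, with output weights solved in closed form via the Moore-Penrose pseudo-inverse. The toy Gaussian clusters merely stand in for fused CNN feature vectors.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, y, n_hidden=64, n_classes=3):
    # Hidden-layer input weights and biases are random and never trained
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)            # hidden-layer activations
    T = np.eye(n_classes)[y]          # one-hot class targets
    beta = np.linalg.pinv(H) @ T      # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = np.tanh(X @ W + b)
    return np.argmax(H @ beta, axis=1)

# Toy data: three well-separated clusters standing in for selected features
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(50, 8)) for c in (-2, 0, 2)])
y = np.repeat([0, 1, 2], 50)

W, b, beta = elm_train(X, y)
acc = (elm_predict(X, W, b, beta) == y).mean()
```

Because only the output weights are fitted, and in a single linear solve, an ELM trains far faster than back-propagated networks, which is consistent with the "resource conscious" framing in the title.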