Detailed Information

Cited 0 times in Web of Science · Cited 0 times in Scopus

A resource conscious human action recognition framework using 26-layered deep convolutional neural network

Full metadata record
DC Field    Value    Language
dc.contributor.author    Khan, M.A.    -
dc.contributor.author    Zhang, Y.-D.    -
dc.contributor.author    Khan, S.A.    -
dc.contributor.author    Attique, M.    -
dc.contributor.author    Rehman, A.    -
dc.contributor.author    Seo, S.    -
dc.date.accessioned    2023-03-08T10:11:10Z    -
dc.date.available    2023-03-08T10:11:10Z    -
dc.date.issued    2021-11    -
dc.identifier.issn    1380-7501    -
dc.identifier.issn    1432-1882    -
dc.identifier.uri    https://scholarworks.bwise.kr/cau/handle/2019.sw.cau/62113    -
dc.description.abstract    Vision-based human action recognition (HAR) has been a hot research topic for the past decade, owing to popular applications such as visual surveillance and robotics. Correct action recognition requires various local and global points, known as features. These features change with variations in human movement; however, because several human actions differ only slightly, their features become mixed, which degrades recognition performance. In this article, we design a new 26-layered Convolutional Neural Network (CNN) architecture for accurate complex action recognition. Features are extracted from the global average pooling layer and the fully connected (FC) layer and fused by a proposed high-entropy-based approach. Further, we propose a feature selection method named Poisson Distribution along with Univariate Measures (PDaUM). A few of the fused CNN features are irrelevant and a few are redundant, which causes incorrect predictions among complex human actions. The proposed PDaUM-based approach therefore selects only the strongest features, which are then passed to an Extreme Learning Machine (ELM) and Softmax for final recognition. Four datasets are used for the experimental analysis: HMDB51 (51 classes), UCF Sports (10 classes), KTH (6 classes), and Weizmann (10 classes). On these datasets, the ELM classifier gives improved performance compared to the Softmax classifier; the achieved accuracies are 81.4%, 99.2%, 98.3%, and 98.7%, respectively. Compared with existing techniques, the proposed architecture gives better performance in terms of accuracy and testing time. © 2020, Springer Science+Business Media, LLC, part of Springer Nature.    -
dc.format.extent    23    -
dc.language    English    -
dc.language.iso    ENG    -
dc.publisher    Springer    -
dc.title    A resource conscious human action recognition framework using 26-layered deep convolutional neural network    -
dc.type    Article    -
dc.identifier.doi    10.1007/s11042-020-09408-1    -
dc.identifier.bibliographicCitation    Multimedia Tools and Applications, v.80, no.28-29, pp 35827 - 35849    -
dc.description.isOpenAccess    N    -
dc.identifier.scopusid    2-s2.0-85088858962    -
dc.citation.endPage    35849    -
dc.citation.number    28-29    -
dc.citation.startPage    35827    -
dc.citation.title    Multimedia Tools and Applications    -
dc.citation.volume    80    -
dc.type.docType    Article    -
dc.publisher.location    Netherlands    -
dc.subject.keywordAuthor    Action recognition    -
dc.subject.keywordAuthor    CNN architecture    -
dc.subject.keywordAuthor    ELM    -
dc.subject.keywordAuthor    Features fusion    -
dc.subject.keywordAuthor    Features selection    -
dc.subject.keywordPlus    Classification (of information)    -
dc.subject.keywordPlus    Complex networks    -
dc.subject.keywordPlus    Convolution    -
dc.subject.keywordPlus    Convolutional neural networks    -
dc.subject.keywordPlus    Deep neural networks    -
dc.subject.keywordPlus    Learning systems    -
dc.subject.keywordPlus    Network architecture    -
dc.subject.keywordPlus    Poisson distribution    -
dc.subject.keywordPlus    Action recognition    -
dc.subject.keywordPlus    Entropy based approach    -
dc.subject.keywordPlus    Experimental analysis    -
dc.subject.keywordPlus    Extreme learning machine    -
dc.subject.keywordPlus    Feature selection methods    -
dc.subject.keywordPlus    Human-action recognition    -
dc.subject.keywordPlus    Proposed architectures    -
dc.subject.keywordPlus    Visual surveillance    -
dc.subject.keywordPlus    Social robots    -
dc.description.journalRegisteredClass    scie    -
dc.description.journalRegisteredClass    scopus    -
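The abstract's final recognition step uses an Extreme Learning Machine: a single-hidden-layer network whose input weights are random and fixed, with only the output weights solved in closed form by least squares. The sketch below is a minimal, hypothetical illustration of that standard ELM technique (it is not the paper's code; the hidden size, activation, and toy data are assumptions for demonstration).

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, y_onehot, n_hidden=64):
    """Standard ELM training: random fixed input weights,
    closed-form least-squares output weights (illustrative sketch)."""
    n_features = X.shape[1]
    W = rng.normal(size=(n_features, n_hidden))  # random input weights, never updated
    b = rng.normal(size=n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                       # hidden-layer activations
    beta = np.linalg.pinv(H) @ y_onehot          # output weights via pseudoinverse
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = np.tanh(X @ W + b)
    return np.argmax(H @ beta, axis=1)

# Toy demo: two separable clusters standing in for fused CNN feature vectors.
X = np.vstack([rng.normal(0, 0.5, (50, 8)), rng.normal(3, 0.5, (50, 8))])
y = np.array([0] * 50 + [1] * 50)
Y = np.eye(2)[y]                                 # one-hot targets

W, b, beta = elm_train(X, Y)
acc = (elm_predict(X, W, b, beta) == y).mean()
print(acc)
```

Because training reduces to a single pseudoinverse, ELM inference and training are very fast, which is consistent with the paper's emphasis on a resource-conscious framework and reduced testing time.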
Files in This Item
There are no files associated with this item.
Appears in Collections
ETC > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Seo, Sang Hyun
College of Art and Technology (School of Art and Technology)
