Detailed Information

Cited 0 times in Web of Science · Cited 0 times in Scopus

A Fusion-Assisted Multi-Stream Deep Learning and ESO-Controlled Newton-Raphson-Based Feature Selection Approach for Human Gait Recognition

Full metadata record
DC Field    Value
dc.contributor.author    Jahangir, Faiza
dc.contributor.author    Khan, Muhammad Attique
dc.contributor.author    Alhaisoni, Majed
dc.contributor.author    Alqahtani, Abdullah
dc.contributor.author    Alsubai, Shtwai
dc.contributor.author    Sha, Mohemmed
dc.contributor.author    Al Hejaili, Abdullah
dc.contributor.author    Cha, Jae-hyuk
dc.date.accessioned    2023-05-03T09:45:49Z
dc.date.available    2023-05-03T09:45:49Z
dc.date.created    2023-04-06
dc.date.issued    2023-03
dc.identifier.issn    1424-8220
dc.identifier.uri    https://scholarworks.bwise.kr/hanyang/handle/2021.sw.hanyang/184887
dc.description.abstract    The performance of human gait recognition (HGR) is degraded by partial occlusion of the human body caused by the limited field of view in video surveillance. Traditional methods require a bounding box to accurately recognize human gait in video sequences, which is a challenging and time-consuming approach. Owing to important applications such as biometrics and video surveillance, HGR performance has improved over the last half-decade. According to the literature, challenging covariate factors that degrade gait recognition performance include walking while wearing a coat or carrying a bag. This paper proposes a new two-stream deep learning framework for human gait recognition. In the first step, a contrast enhancement technique is proposed based on the fusion of local and global filter information; a high-boost operation is then applied to highlight the human region in each video frame. In the second step, data augmentation is performed to increase the size of the preprocessed dataset (CASIA-B). In the third step, two pre-trained deep learning models, MobileNetV2 and ShuffleNet, are fine-tuned and trained on the augmented dataset using deep transfer learning; features are extracted from the global average pooling layer instead of the fully connected layer. In the fourth step, the extracted features of both streams are fused using a serial-based approach, and in the fifth step they are further refined using an improved equilibrium-state-optimization-controlled Newton-Raphson (ESOcNR) selection method. The selected features are finally classified using machine learning algorithms. Experiments were conducted on 8 angles of the CASIA-B dataset, yielding accuracies of 97.3, 98.6, 97.7, 96.5, 92.9, 93.7, 94.7, and 91.2%, respectively. Comparisons with state-of-the-art (SOTA) techniques showed improved accuracy and reduced computational time.
dc.language    English
dc.language.iso    en
dc.publisher    MDPI
dc.title    A Fusion-Assisted Multi-Stream Deep Learning and ESO-Controlled Newton-Raphson-Based Feature Selection Approach for Human Gait Recognition
dc.type    Article
dc.contributor.affiliatedAuthor    Cha, Jae-hyuk
dc.identifier.doi    10.3390/s23052754
dc.identifier.scopusid    2-s2.0-85149799329
dc.identifier.wosid    000948287700001
dc.identifier.bibliographicCitation    SENSORS, v.23, no.5, pp.1 - 26
dc.relation.isPartOf    SENSORS
dc.citation.title    SENSORS
dc.citation.volume    23
dc.citation.number    5
dc.citation.startPage    1
dc.citation.endPage    26
dc.type.rims    ART
dc.type.docType    Article
dc.description.journalClass    1
dc.description.isOpenAccess    Y
dc.description.journalRegisteredClass    scie
dc.description.journalRegisteredClass    scopus
dc.relation.journalResearchArea    Chemistry
dc.relation.journalResearchArea    Engineering
dc.relation.journalResearchArea    Instruments & Instrumentation
dc.relation.journalWebOfScienceCategory    Chemistry, Analytical
dc.relation.journalWebOfScienceCategory    Engineering, Electrical & Electronic
dc.relation.journalWebOfScienceCategory    Instruments & Instrumentation
dc.subject.keywordPlus    Feature Selection
dc.subject.keywordPlus    Learning algorithms
dc.subject.keywordPlus    Learning systems
dc.subject.keywordPlus    Security systems
dc.subject.keywordPlus    Contrast Enhancement
dc.subject.keywordPlus    Deep learning
dc.subject.keywordPlus    Features selection
dc.subject.keywordPlus    Gait recognition
dc.subject.keywordPlus    Human bodies
dc.subject.keywordPlus    Human gait recognition
dc.subject.keywordPlus    Machine-learning
dc.subject.keywordPlus    Multi-stream
dc.subject.keywordPlus    Performance
dc.subject.keywordPlus    Video surveillance
dc.subject.keywordPlus    algorithm
dc.subject.keywordPlus    biometry
dc.subject.keywordPlus    gait
dc.subject.keywordPlus    human
dc.subject.keywordPlus    machine learning
dc.subject.keywordPlus    procedures
dc.subject.keywordAuthor    gait recognition
dc.subject.keywordAuthor    contrast enhancement
dc.subject.keywordAuthor    deep learning
dc.subject.keywordAuthor    feature selection
dc.subject.keywordAuthor    fusion
dc.subject.keywordAuthor    machine learning
dc.identifier.url    https://www.mdpi.com/1424-8220/23/5/2754
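Two concrete steps from the pipeline described in the abstract can be sketched in code: the high-boost operation used to highlight the human region, and the serial-based fusion (concatenation) of the two feature streams. This is an illustrative sketch under stated assumptions, not the authors' implementation: the 3×3 local mean filter, the boost factor `k`, and the feature dimensions are placeholders chosen for the example.

```python
import numpy as np

def box_blur(img: np.ndarray, size: int = 3) -> np.ndarray:
    """Local mean filter with edge padding (assumed smoothing kernel)."""
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(size):
        for dx in range(size):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (size * size)

def high_boost(img: np.ndarray, k: float = 1.5, size: int = 3) -> np.ndarray:
    """High-boost filtering: original + k * (original - blurred)."""
    img = img.astype(float)
    mask = img - box_blur(img, size)  # unsharp mask (high-frequency detail)
    return np.clip(img + k * mask, 0.0, 255.0)

def serial_fusion(f1: np.ndarray, f2: np.ndarray) -> np.ndarray:
    """Serial-based fusion: concatenate per-sample feature vectors."""
    return np.concatenate([f1, f2], axis=1)

# A flat frame is unchanged by high-boost (the detail mask is zero),
# and two per-stream feature matrices fuse along the feature axis.
frame = np.full((8, 8), 100.0)
assert np.allclose(high_boost(frame), frame)
fused = serial_fusion(np.zeros((4, 1280)), np.zeros((4, 1024)))
assert fused.shape == (4, 2304)
```

In a setup like this, the fused matrix would then be passed to a feature-selection stage (ESOcNR in the paper) before classification; the 1280- and 1024-dimensional streams above are assumed stand-ins for the global-average-pooling outputs of the two networks.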
Files in This Item
Appears in Collections
Seoul College of Engineering > Seoul School of Computer Software > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Cha, Jae Hyuk
COLLEGE OF ENGINEERING (SCHOOL OF COMPUTER SCIENCE)
