A Fusion-Assisted Multi-Stream Deep Learning and ESO-Controlled Newton-Raphson-Based Feature Selection Approach for Human Gait Recognition
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Jahangir, Faiza | - |
dc.contributor.author | Khan, Muhammad Attique | - |
dc.contributor.author | Alhaisoni, Majed | - |
dc.contributor.author | Alqahtani, Abdullah | - |
dc.contributor.author | Alsubai, Shtwai | - |
dc.contributor.author | Sha, Mohemmed | - |
dc.contributor.author | Al Hejaili, Abdullah | - |
dc.contributor.author | Cha, Jae-hyuk | - |
dc.date.accessioned | 2023-05-03T09:45:49Z | - |
dc.date.available | 2023-05-03T09:45:49Z | - |
dc.date.created | 2023-04-06 | - |
dc.date.issued | 2023-03 | - |
dc.identifier.issn | 1424-8220 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/hanyang/handle/2021.sw.hanyang/184887 | - |
dc.description.abstract | The performance of human gait recognition (HGR) is affected by partial occlusion of the human body caused by the limited field of view in video surveillance. Traditional methods require an accurate bounding box to recognize human gait in video sequences; however, this is a challenging and time-consuming approach. Owing to important applications such as biometrics and video surveillance, HGR performance has improved over the last half-decade. Based on the literature, challenging covariate factors that degrade gait recognition performance include walking while wearing a coat or carrying a bag. This paper proposes a new two-stream deep learning framework for human gait recognition. In the first step, a contrast enhancement technique based on the fusion of local and global filter information is proposed; a high-boost operation is then applied to highlight the human region in a video frame. In the second step, data augmentation is performed to increase the size of the preprocessed dataset (CASIA-B). In the third step, two pre-trained deep learning models, MobileNetV2 and ShuffleNet, are fine-tuned and trained on the augmented dataset using deep transfer learning; features are extracted from the global average pooling layer instead of the fully connected layer. In the fourth step, the extracted features of both streams are fused using a serial-based approach, and in the fifth step they are further refined using an improved equilibrium-state-optimization-controlled Newton-Raphson (ESOcNR) selection method. The selected features are finally classified using machine learning algorithms. The experiments were conducted on 8 angles of the CASIA-B dataset and obtained accuracies of 97.3%, 98.6%, 97.7%, 96.5%, 92.9%, 93.7%, 94.7%, and 91.2%, respectively. Comparisons with state-of-the-art (SOTA) techniques showed improved accuracy and reduced computational time. | - |
dc.language | English | - |
dc.language.iso | en | - |
dc.publisher | MDPI | - |
dc.title | A Fusion-Assisted Multi-Stream Deep Learning and ESO-Controlled Newton-Raphson-Based Feature Selection Approach for Human Gait Recognition | - |
dc.type | Article | - |
dc.contributor.affiliatedAuthor | Cha, Jae-hyuk | - |
dc.identifier.doi | 10.3390/s23052754 | - |
dc.identifier.scopusid | 2-s2.0-85149799329 | - |
dc.identifier.wosid | 000948287700001 | - |
dc.identifier.bibliographicCitation | SENSORS, v.23, no.5, pp.1 - 26 | - |
dc.relation.isPartOf | SENSORS | - |
dc.citation.title | SENSORS | - |
dc.citation.volume | 23 | - |
dc.citation.number | 5 | - |
dc.citation.startPage | 1 | - |
dc.citation.endPage | 26 | - |
dc.type.rims | ART | - |
dc.type.docType | Article | - |
dc.description.journalClass | 1 | - |
dc.description.isOpenAccess | Y | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Chemistry | - |
dc.relation.journalResearchArea | Engineering | - |
dc.relation.journalResearchArea | Instruments & Instrumentation | - |
dc.relation.journalWebOfScienceCategory | Chemistry, Analytical | - |
dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
dc.relation.journalWebOfScienceCategory | Instruments & Instrumentation | - |
dc.subject.keywordPlus | Feature Selection | - |
dc.subject.keywordPlus | Learning algorithms | - |
dc.subject.keywordPlus | Learning systems | - |
dc.subject.keywordPlus | Security systems | - |
dc.subject.keywordPlus | Contrast Enhancement | - |
dc.subject.keywordPlus | Deep learning | - |
dc.subject.keywordPlus | Features selection | - |
dc.subject.keywordPlus | Gait recognition | - |
dc.subject.keywordPlus | Human bodies | - |
dc.subject.keywordPlus | Human gait recognition | - |
dc.subject.keywordPlus | Machine-learning | - |
dc.subject.keywordPlus | Multi-stream | - |
dc.subject.keywordPlus | Performance | - |
dc.subject.keywordPlus | Video surveillance | - |
dc.subject.keywordPlus | algorithm | - |
dc.subject.keywordPlus | biometry | - |
dc.subject.keywordPlus | gait | - |
dc.subject.keywordPlus | human | - |
dc.subject.keywordPlus | machine learning | - |
dc.subject.keywordPlus | procedures | - |
dc.subject.keywordAuthor | gait recognition | - |
dc.subject.keywordAuthor | contrast enhancement | - |
dc.subject.keywordAuthor | deep learning | - |
dc.subject.keywordAuthor | feature selection | - |
dc.subject.keywordAuthor | fusion | - |
dc.subject.keywordAuthor | machine learning | - |
dc.identifier.url | https://www.mdpi.com/1424-8220/23/5/2754 | - |
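The abstract describes the fusion step as "serial-based", which in the feature-fusion literature typically means concatenating the per-sample feature vectors of the two streams along the feature axis. The sketch below illustrates that reading; the feature widths are illustrative assumptions (1280 is MobileNetV2's global-average-pooling output width, 1024 is typical for ShuffleNetV2 x1.0), and `serial_fuse` is a hypothetical helper name, not code from the paper.

```python
import numpy as np

def serial_fuse(stream_a: np.ndarray, stream_b: np.ndarray) -> np.ndarray:
    """Serial (concatenation-based) fusion of two feature streams.

    Each input is (num_samples, feature_dim); the outputs of both
    backbones' global average pooling layers are joined per sample.
    """
    assert stream_a.shape[0] == stream_b.shape[0], "sample counts must match"
    return np.concatenate([stream_a, stream_b], axis=1)

# Illustrative example: 4 video frames, with assumed GAP feature widths
# for the two fine-tuned backbones mentioned in the abstract.
rng = np.random.default_rng(0)
mobilenet_feats = rng.random((4, 1280))   # assumed MobileNetV2 GAP features
shufflenet_feats = rng.random((4, 1024))  # assumed ShuffleNet GAP features

fused = serial_fuse(mobilenet_feats, shufflenet_feats)
print(fused.shape)  # (4, 2304)
```

The fused (num_samples, 2304) matrix would then be the input to the ESOcNR selection stage described in the abstract, which prunes columns before classification.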
Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.