Stochastic Remote Sensing Event Classification over Adaptive Posture Estimation via Multifused Data and Deep Belief Network
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Gochoo, Munkhjargal | - |
dc.contributor.author | Akhter, Israr | - |
dc.contributor.author | Jalal, Ahmad | - |
dc.contributor.author | Kim, Kibum | - |
dc.date.accessioned | 2021-06-22T04:25:36Z | - |
dc.date.available | 2021-06-22T04:25:36Z | - |
dc.date.issued | 2021-03 | - |
dc.identifier.issn | 2072-4292 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/erica/handle/2021.sw.erica/435 | - |
dc.description.abstract | Advances in video capture devices enable adaptive posture estimation (APE) and event classification over multiple human-based videos for smart systems. Accurate event classification and adaptive posture estimation remain challenging problems despite extensive research. In this article, we propose a novel method to classify stochastic remote sensing events and to perform adaptive posture estimation. We first extract human silhouettes using a Gaussian Mixture Model (GMM) and a saliency map. We then detect human body parts and apply a unified pseudo-2D stick model for adaptive posture estimation. Multifused data comprising energy, 3D Cartesian view, angular geometric, skeleton zigzag and movable body-part features are applied. Using a charged system search, we optimize both the feature vector and a deep belief network. We classify complex events in the Sports Videos in the Wild (SVW), Olympic Sports, UCF Aerial Action and UT-Interaction datasets. The mean accuracy of human body part detection was 83.57% on UT-Interaction, 83.00% on Olympic Sports and 83.78% on SVW. The mean event classification accuracy was 91.67% on UT-Interaction, 92.50% on Olympic Sports and 89.47% on SVW. These results are superior to existing state-of-the-art methods. | - |
dc.format.extent | 29 | - |
dc.language | English | - |
dc.language.iso | ENG | - |
dc.publisher | MDPI | - |
dc.title | Stochastic Remote Sensing Event Classification over Adaptive Posture Estimation via Multifused Data and Deep Belief Network | - |
dc.type | Article | - |
dc.publisher.location | Switzerland | - |
dc.identifier.doi | 10.3390/rs13050912 | - |
dc.identifier.scopusid | 2-s2.0-85102237592 | - |
dc.identifier.wosid | 000628507700001 | - |
dc.identifier.bibliographicCitation | REMOTE SENSING, v.13, no.5, pp 1 - 29 | - |
dc.citation.title | REMOTE SENSING | - |
dc.citation.volume | 13 | - |
dc.citation.number | 5 | - |
dc.citation.startPage | 1 | - |
dc.citation.endPage | 29 | - |
dc.type.docType | Article | - |
dc.description.isOpenAccess | N | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Environmental Sciences & Ecology | - |
dc.relation.journalResearchArea | Geology | - |
dc.relation.journalResearchArea | Remote Sensing | - |
dc.relation.journalResearchArea | Imaging Science & Photographic Technology | - |
dc.relation.journalWebOfScienceCategory | Environmental Sciences | - |
dc.relation.journalWebOfScienceCategory | Geosciences, Multidisciplinary | - |
dc.relation.journalWebOfScienceCategory | Remote Sensing | - |
dc.relation.journalWebOfScienceCategory | Imaging Science & Photographic Technology | - |
dc.subject.keywordPlus | ACTIVITY RECOGNITION SYSTEM | - |
dc.subject.keywordPlus | LOW-RANK | - |
dc.subject.keywordPlus | VIDEOS | - |
dc.subject.keywordPlus | MODEL | - |
dc.subject.keywordPlus | FEATURES | - |
dc.subject.keywordPlus | SENSORS | - |
dc.subject.keywordAuthor | deep belief network | - |
dc.subject.keywordAuthor | event classification | - |
dc.subject.keywordAuthor | human body part detection | - |
dc.subject.keywordAuthor | multifused data | - |
dc.subject.keywordAuthor | pseudo-2D-stick model | - |
dc.identifier.url | https://www.mdpi.com/2072-4292/13/5/912 | - |
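The abstract's first pipeline stage — GMM-based background modeling to extract the human silhouette — can be illustrated with a simplified sketch. The snippet below is not the paper's implementation: it uses a single running Gaussian per pixel (rather than a full mixture or a saliency map) purely to show the idea that pixels deviating from the learned background statistics are flagged as the foreground silhouette. All names (`update_background`, the learning rate `lr`, the threshold factor `k`) are illustrative assumptions.

```python
import numpy as np

def update_background(mean, var, frame, lr=0.05, k=2.5):
    """One step of a simplified per-pixel Gaussian background model.

    A hedged stand-in for the paper's GMM silhouette-extraction step:
    pixels deviating more than k standard deviations from the running
    background mean are marked foreground (the candidate silhouette).
    """
    diff = np.abs(frame - mean)
    foreground = diff > k * np.sqrt(var)
    # Update background statistics only where the pixel looks like background.
    bg = ~foreground
    mean[bg] += lr * (frame[bg] - mean[bg])
    var[bg] += lr * (diff[bg] ** 2 - var[bg])
    return foreground

# Tiny demo: a static noisy background, then a frame with a bright blob
# standing in for a person entering the scene.
rng = np.random.default_rng(0)
h, w = 32, 32
mean = 50.0 + rng.normal(0, 1, (h, w))
var = np.full((h, w), 4.0)

# Warm up the model on background-only frames.
for _ in range(20):
    frame = 50.0 + rng.normal(0, 1, (h, w))
    update_background(mean, var, frame)

# Frame containing a foreground blob.
frame = 50.0 + rng.normal(0, 1, (h, w))
frame[10:20, 12:18] = 200.0
mask = update_background(mean, var, frame)
print(mask[10:20, 12:18].mean())  # fraction of blob pixels flagged foreground
```

In the actual paper this mask would be refined with a saliency map before body-part detection; a production implementation would more likely use a full mixture model such as OpenCV's `BackgroundSubtractorMOG2`.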
Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.
Certain data included herein are derived from the © Web of Science of Clarivate Analytics. All rights reserved.
You may not copy or re-distribute this material in whole or in part without the prior written consent of Clarivate Analytics.