Pose Estimation and Detection for Event Recognition using Sense-Aware Features and Adaboost Classifier
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Akhter, Israr | - |
dc.contributor.author | Jalal, Ahmad | - |
dc.contributor.author | Kim, Kibum | - |
dc.date.accessioned | 2023-08-16T07:35:55Z | - |
dc.date.available | 2023-08-16T07:35:55Z | - |
dc.date.issued | 2021-01 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/erica/handle/2021.sw.erica/113921 | - |
dc.description.abstract | To examine event identification and recognition in sequential images, prior approaches have used several parameters, such as the size, location, or position of human body parts, along with their surrounding effects. In this paper, we estimate several body key points to monitor and track their appearance in complex events. Detecting key body parts requires several feature descriptors, such as optical flow, moving body parts, and 0-180° intensity characteristics, for event identification. The extraction architecture model obtains human body key points through the R-Transform over intensity values. After intensity-value extraction, movable-body-part estimation is applied in a change-detection system, and optical flow features are also extracted. The extracted intensity and movable-body-part features are merged with the optical flow vectors. These merged features are fed into Particle Swarm Optimization as a pre-classifier and AdaBoost as the recognizer engine. Experimental results on the challenging UCF101 and YouTube video datasets show significant event-recognition accuracies of 75.33% and 76.66%, respectively. We achieved higher body-part detection and event-recognition performance than state-of-the-art approaches. © 2021 IEEE. | - |
dc.format.extent | 6 | - |
dc.language | English | - |
dc.language.iso | ENG | - |
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | - |
dc.title | Pose Estimation and Detection for Event Recognition using Sense-Aware Features and Adaboost Classifier | - |
dc.type | Article | - |
dc.publisher.location | United States | - |
dc.identifier.doi | 10.1109/IBCAST51254.2021.9393293 | - |
dc.identifier.bibliographicCitation | 2021 International Bhurban Conference on Applied Sciences and Technologies (IBCAST), pp 500 - 505 | - |
dc.citation.title | 2021 International Bhurban Conference on Applied Sciences and Technologies (IBCAST) | - |
dc.citation.startPage | 500 | - |
dc.citation.endPage | 505 | - |
dc.type.docType | Proceeding | - |
dc.description.isOpenAccess | N | - |
dc.description.journalRegisteredClass | scopus | - |
dc.subject.keywordAuthor | 0-180 degree intensity features | - |
dc.subject.keywordAuthor | 2D stick model | - |
dc.subject.keywordAuthor | Adaboost | - |
dc.subject.keywordAuthor | Human event recognition | - |
dc.identifier.url | https://ieeexplore.ieee.org/document/9393293?arnumber=9393293&SID=EBSCO:edseee | - |
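The abstract describes merged feature vectors (intensity, movable-body-part, and optical-flow features) being passed to an AdaBoost recognizer. A minimal sketch of that final classification stage, using scikit-learn's `AdaBoostClassifier` on synthetic stand-in features (the random features, class labels, and dimensions below are illustrative assumptions, not the paper's actual descriptors or datasets):

```python
# Hedged sketch of the AdaBoost recognizer stage described in the abstract.
# The feature matrix here is a synthetic stand-in for the paper's merged
# intensity + body-part + optical-flow feature vectors.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_features = 400, 32  # assumed sizes, for illustration only

# Synthetic merged feature vectors and two synthetic "event" classes
# separated along the first feature dimension.
X = rng.normal(size=(n_samples, n_features))
y = (X[:, 0] + 0.5 * rng.normal(size=n_samples) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# AdaBoost as the recognizer engine: an ensemble of weak learners
# (decision stumps by default), reweighted toward misclassified samples.
clf = AdaBoostClassifier(n_estimators=50, random_state=0)
clf.fit(X_train, y_train)
acc = clf.score(X_test, y_test)
```

In the paper's pipeline the features would first pass through a Particle Swarm Optimization pre-classifier; here that step is omitted and only the AdaBoost classification is shown.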