Detailed Information


Modeling Two-Person Segmentation and Locomotion for Stereoscopic Action Identification: A Sustainable Video Surveillance System (open access)

Authors
Khalid, Nida; Gochoo, Munkhjargal; Jalal, Ahmad; Kim, Kibum
Issue Date
Jan-2021
Publisher
MDPI
Keywords
geodesic distance; human action recognition; human locomotion; neuro-fuzzy classifier; particle swarm optimization; RGB-D sensors; trajectory features
Citation
SUSTAINABILITY, v.13, no.2, pp.1 - 30
Indexed
SCIE
SSCI
SCOPUS
Journal Title
SUSTAINABILITY
Volume
13
Number
2
Start Page
1
End Page
30
URI
https://scholarworks.bwise.kr/erica/handle/2021.sw.erica/487
DOI
10.3390/su13020970
ISSN
2071-1050
Abstract
Due to the constantly increasing demand for automatic tracking and recognition systems, there is a need for more proficient, intelligent and sustainable human activity tracking. The main purpose of this study is to develop an accurate and sustainable human action tracking system that is capable of error-free identification of human movements irrespective of the environment in which those actions are performed. Therefore, in this paper we propose a stereoscopic Human Action Recognition (HAR) system based on the fusion of RGB (red, green, blue) and depth sensors. These sensors provide additional depth information, which enables three-dimensional (3D) tracking of every movement performed by humans. Human actions are tracked according to four features, namely: (1) geodesic distance; (2) 3D Cartesian-plane features; (3) joint Motion Capture (MOCAP) features; and (4) way-point trajectory generation. To represent these features in an optimized form, Particle Swarm Optimization (PSO) is applied. After optimization, a neuro-fuzzy classifier is used for classification and recognition. Extensive experimentation is performed on three challenging datasets: the Nanyang Technological University (NTU) RGB+D dataset, the University of Lincoln (UoL) 3D social activity dataset, and the Collective Activity Dataset (CAD). Evaluation experiments demonstrated that a fusion of vision sensors along with our unique features is an efficient approach towards developing a robust HAR system, achieving a mean accuracy of 93.5% on the NTU RGB+D dataset, 92.2% on the UoL dataset and 89.6% on the Collective Activity Dataset. The developed system can play a significant role in many computer vision-based applications, such as intelligent homes, offices and hospitals, and surveillance systems.
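The abstract describes a pipeline in which extracted features are weighted by Particle Swarm Optimization before classification. The paper's exact PSO formulation, feature vectors, and fitness criterion are not reproduced here; the following is a generic, stdlib-only sketch of canonical PSO tuning a feature-weight vector, where the `fitness` function, the `ideal` weight profile, and all swarm coefficients are hypothetical placeholders:

```python
import random

def pso(fitness, dim, n_particles=20, iters=50, seed=0):
    """Minimal Particle Swarm Optimization (maximization) over [0, 1]^dim."""
    rng = random.Random(seed)
    pos = [[rng.uniform(0.0, 1.0) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # per-particle best positions
    pbest_fit = [fitness(p) for p in pos]       # per-particle best fitness
    g = max(range(n_particles), key=lambda i: pbest_fit[i])
    gbest, gbest_fit = pbest[g][:], pbest_fit[g]
    w, c1, c2 = 0.7, 1.5, 1.5                   # inertia, cognitive, social weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # Clamp positions to the valid weight range [0, 1].
                pos[i][d] = min(1.0, max(0.0, pos[i][d] + vel[i][d]))
            f = fitness(pos[i])
            if f > pbest_fit[i]:
                pbest[i], pbest_fit[i] = pos[i][:], f
                if f > gbest_fit:
                    gbest, gbest_fit = pos[i][:], f
    return gbest, gbest_fit

# Toy fitness: reward weight vectors close to a hypothetical "ideal" profile
# (in the actual system this would score classifier separability instead).
ideal = [1.0, 0.0, 1.0, 0.5]

def fitness(weights):
    return -sum((a - b) ** 2 for a, b in zip(weights, ideal))

best, best_fit = pso(fitness, dim=len(ideal))
```

In a real HAR pipeline the fitness would evaluate how well the weighted feature vector separates the action classes (e.g. via cross-validated classifier accuracy), and the optimized weights would then feed the downstream classifier.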
Appears in Collections
COLLEGE OF COMPUTING > SCHOOL OF MEDIA, CULTURE, AND DESIGN TECHNOLOGY > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher


Kim, Kibum
COLLEGE OF COMPUTING (SCHOOL OF MEDIA, CULTURE, AND DESIGN TECHNOLOGY)
