Detailed Information


Model-based human motion tracking and behavior recognition using hierarchical finite state automata

Authors
Park, J.; Park, S.; Aggarwal, J.K.
Issue Date
2004
Publisher
Springer Verlag
Citation
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), v.3046 LNCS, no.PART 4, pp.311 - 320
Journal Title
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume
3046 LNCS
Number
PART 4
Start Page
311
End Page
320
URI
https://scholarworks.bwise.kr/hongik/handle/2020.sw.hongik/25882
DOI
10.1007/978-3-540-24768-5_33
ISSN
0302-9743
Abstract
The generation of articulated-body motion for computer animation is an expensive and time-consuming task, and recognition of human actions and interactions is important for video annotation, automated surveillance, and content-based video retrieval. This paper presents a new model-based, human-intervention-free approach to articulated-body motion tracking and recognition of human interaction from static-background monocular video sequences, with two major applications built on the basic tracking: motion capture and human behavior recognition. To determine a human body configuration in a scene, a 3D human body model is postulated and projected onto a 2D projection plane so that it overlaps the foreground image silhouette. The model-body overlapping problem is converted into a parameter optimization problem to avoid kinematic singularities, and unlike other methods, the body tracking requires no user intervention. A cost function estimates the degree of overlap between the foreground input image silhouette and the projected 3D model-body silhouette; the configuration sought is the one with the best overlap with the image foreground and the least overlap with the background. The overlap is computed with computational geometry by converting a set of pixels from the image domain into a polygon in the 2D projection-plane domain. Human interaction motion is recognized using hierarchical finite state automata (FA). The model motion data obtained from tracking are analyzed into states and events for the feet, torso, and hands by a low-level behavior recognition model, which represents human behaviors as sequences of states classifying the configuration of individual body parts in space and time. To overcome the exponential growth in the number of states that usually occurs in a single-level FA, a new hierarchical FA is presented that abstracts states and events from motion data at three levels; the low-level FA analyzes body parts only. © Springer-Verlag Berlin Heidelberg 2004.
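
The hierarchical recognition idea in the abstract, in which low-level classifiers map body-part measurements to discrete states and a higher-level finite state automaton consumes those states as events, can be illustrated with a minimal sketch. The Python code below is illustrative only: the state names, thresholds, transition table, and two-level (rather than three-level) structure are hypothetical stand-ins, not the paper's actual model or its state and event sets.

# Minimal two-level finite state automaton sketch for behavior recognition.
# All state names, thresholds, and transitions are hypothetical examples.
from dataclasses import dataclass

# Low level: classify a single body part's configuration into a discrete state.
def classify_feet(speed):
    # Hypothetical threshold: fast-moving feet -> "walking", else "standing".
    return "walking" if speed > 0.5 else "standing"

def classify_hand(height):
    # Hypothetical threshold: hand above 1.4 m -> "raised", else "lowered".
    return "raised" if height > 1.4 else "lowered"

# High level: an FA whose events are tuples of low-level body-part states.
@dataclass
class FiniteAutomaton:
    state: str
    transitions: dict  # maps (current_state, event) -> next_state

    def step(self, event):
        # Stay in the current state if no transition matches the event.
        self.state = self.transitions.get((self.state, event), self.state)
        return self.state

transitions = {
    ("apart",         ("walking", "lowered")): "approaching",
    ("approaching",   ("standing", "raised")): "shaking_hands",
    ("shaking_hands", ("walking", "lowered")): "departing",
}
fa = FiniteAutomaton(state="apart", transitions=transitions)

# Per-frame features from tracking: (feet speed in m/s, hand height in m).
frames = [(0.8, 1.0), (0.9, 1.1), (0.1, 1.6), (0.7, 1.0)]
for feet_speed, hand_height in frames:
    event = (classify_feet(feet_speed), classify_hand(hand_height))
    print(fa.step(event))  # approaching, approaching, shaking_hands, departing

Because each level sees only abstracted states from the level below, the top-level automaton keeps a small state set instead of enumerating every combination of raw body-part configurations, which is the motivation the abstract gives for the hierarchical FA.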
Appears in Collections
ETC > 1. Journal Articles

Related Researcher
Park, Ji hun
Engineering (Department of Computer Engineering)
