
Video retrieval of human interactions using model-based motion tracking and multi-layer finite state automata

Full metadata record
DC Field | Value
dc.contributor.author | Park, S.
dc.contributor.author | Park, J.
dc.contributor.author | Aggarwal, J.K.
dc.date.accessioned | 2022-03-14T09:43:13Z
dc.date.available | 2022-03-14T09:43:13Z
dc.date.created | 2022-03-14
dc.date.issued | 2003
dc.identifier.issn | 0302-9743
dc.identifier.uri | https://scholarworks.bwise.kr/hongik/handle/2020.sw.hongik/26597
dc.description.abstract | Recognition of human interactions in video is useful for video annotation, automated surveillance, and content-based video retrieval. This paper presents a model-based approach to motion tracking and recognition of human interactions using multi-layer finite state automata (FA). The system targets widely available, static-background monocular surveillance videos. A three-dimensional human body model is built from a sphere and cylinders and projected onto a two-dimensional image plane to fit the foreground image silhouette. We convert the human motion tracking problem into a parameter optimization problem without computing inverse kinematics. A cost functional estimates the degree of overlap between the foreground input image silhouette and the projected three-dimensional body model silhouette. Motion data obtained from the tracker are analyzed in terms of feet, torso, and hands by a behavior recognition system. The recognition model represents human behavior as a sequence of states that register the configuration of individual body parts in space and time. To overcome the exponential growth in the number of states that usually occurs in a single-level FA, we propose a multi-layer FA that abstracts states and events from motion data at multiple levels: low-level FAs analyze individual body parts, and a high-level FA analyzes the human interaction. Motion tracking results from video sequences are presented. Our recognition framework successfully recognizes various human interactions such as approaching, departing, pushing, pointing, and handshaking. © Springer-Verlag Berlin Heidelberg 2003.
dc.language | English
dc.language.iso | en
dc.publisher | Springer Verlag
dc.title | Video retrieval of human interactions using model-based motion tracking and multi-layer finite state automata
dc.type | Article
dc.contributor.affiliatedAuthor | Park, J.
dc.identifier.doi | 10.1007/3-540-45113-7_39
dc.identifier.scopusid | 2-s2.0-35248828183
dc.identifier.bibliographicCitation | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), v.2728, pp.394-403
dc.relation.isPartOf | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
dc.citation.title | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
dc.citation.volume | 2728
dc.citation.startPage | 394
dc.citation.endPage | 403
dc.type.rims | ART
dc.type.docType | Article
dc.description.journalClass | 1
dc.description.journalRegisteredClass | scopus
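
The abstract describes a two-layer recognition scheme: low-level finite state automata track individual body parts, and a high-level FA reads the joint configuration of their states to recognize an interaction, avoiding the state explosion of a single flat automaton. Below is a minimal sketch of that idea in Python. All state and event names ("near", "extended", "move_closer", etc.) are illustrative assumptions, not the authors' actual labels, and the event stream stands in for output of the paper's silhouette-based tracker.

```python
class BodyPartFA:
    """Low-level FA: tracks one body part (e.g. torso, hand) through simple states."""
    def __init__(self, name, transitions, start):
        self.name = name
        self.transitions = transitions  # maps (state, event) -> next state
        self.state = start

    def step(self, event):
        # Unknown (state, event) pairs leave the state unchanged.
        self.state = self.transitions.get((self.state, event), self.state)
        return self.state


class InteractionFA:
    """High-level FA: transitions on the tuple of low-level body-part states."""
    def __init__(self, transitions, start, accepting):
        self.transitions = transitions  # maps (state, low-level tuple) -> next state
        self.state = start
        self.accepting = accepting

    def step(self, low_level_states):
        self.state = self.transitions.get((self.state, low_level_states), self.state)
        return self.state

    def recognized(self):
        return self.state in self.accepting


# Hypothetical "pushing" recognizer: torso approaches, then hand extends.
torso = BodyPartFA("torso", {("far", "move_closer"): "near"}, start="far")
hand = BodyPartFA("hand", {("down", "raise"): "extended"}, start="down")

pushing = InteractionFA(
    transitions={
        ("idle", ("near", "down")): "approached",
        ("approached", ("near", "extended")): "pushing",
    },
    start="idle",
    accepting={"pushing"},
)

# Feed a short, made-up event stream in place of real tracker output.
for torso_ev, hand_ev in [("move_closer", "none"), ("none", "raise")]:
    states = (torso.step(torso_ev), hand.step(hand_ev))
    pushing.step(states)

print(pushing.recognized())  # True: the "pushing" interaction was recognized
```

The key design point mirrors the paper's claim: each low-level FA only needs states for its own body part, and the high-level FA composes them, so the total number of states grows additively across layers rather than multiplicatively in one flat automaton.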
Files in This Item
There are no files associated with this item.
Appears in Collections: ETC > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Park, Ji hun
Engineering (Department of Computer Engineering)
