Object tracking and elimination using Level-of-Detail Canny edge maps
- Authors
- Park, J.
- Issue Date
- 2006
- Publisher
- Springer Verlag
- Citation
- Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), v.4069 LNCS, pp.281 - 290
- Journal Title
- Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
- Volume
- 4069 LNCS
- Start Page
- 281
- End Page
- 290
- URI
- https://scholarworks.bwise.kr/hongik/handle/2020.sw.hongik/25019
- DOI
- 10.1007/11789239_29
- ISSN
- 0302-9743
- Abstract
- We propose a method for tracking a nonparameterized subject contour in a single video stream captured by a moving camera. We then eliminate the tracked contour object by replacing it with the background scene obtained from another frame in which that region is not occluded by the tracked object. Our method consists of two parts: first we track the object using LOD (Level-of-Detail) Canny edge maps; then we generate a background for each image frame and replace the tracked object in a scene with a background image from another frame. To track a contour object, LOD Canny edge maps are generated by varying the scale parameters for a given image. A simple (strong) Canny edge map has the smallest number of edge pixels, while the most detailed Canny edge map, WcannyN, has the largest. To reduce side effects caused by irrelevant edges, we begin tracking with the simple (strong) Canny edges generated from large image intensity gradients of the input image, called Scanny edges. Starting from the Scanny edges, we add edge pixels ranging from the simple Canny edge maps down to the most detailed (weaker) Canny edge maps, called Wcanny maps, along the LOD hierarchy. LOD Canny edge pixels become nodes in a routing graph, and the LOD values of adjacent edge pixels determine the routing costs between nodes. We find the best route along Canny edge pixels, favoring stronger ones. To remove the tracked object, we generate an approximated background for the first frame; background images for subsequent frames are derived from the first-frame background or from previous frame images. This approach relies on computing camera motion, i.e., the camera movement between two image frames. Our method works well for moderate camera movement with small object shape changes. © Springer-Verlag Berlin Heidelberg 2006.
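The LOD construction and routing-cost idea in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: plain gradient-magnitude thresholding stands in for full Canny edge detection (no non-maximum suppression or hysteresis), and the `step_cost` model (sum of the two pixels' LOD values, so routes through stronger edges are cheaper) is an assumption. The function names `lod_edge_maps`, `lod_values`, and `step_cost` are hypothetical.

```python
import numpy as np

def lod_edge_maps(image, thresholds):
    """Build one binary edge map per scale parameter.

    `thresholds` is assumed to be in descending order, so the first map
    is the simplest/strongest (Scanny-like, fewest edge pixels) and the
    last is the most detailed/weakest (Wcanny-like, most edge pixels).
    Gradient-magnitude thresholding is a simplified stand-in for Canny.
    """
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    return [mag > t for t in thresholds]

def lod_values(maps):
    """Assign each edge pixel its LOD level.

    Strongest edges get LOD 1, weaker edges get larger values, and
    non-edge pixels stay 0. Weaker maps are written first so that
    stronger maps overwrite them where the maps overlap.
    """
    lod = np.zeros(maps[0].shape, dtype=int)
    for level in range(len(maps), 0, -1):
        lod[maps[level - 1]] = level
    return lod

def step_cost(lod, p, q):
    """Routing cost of moving between adjacent edge pixels p and q.

    Assumed cost model: the sum of the two LOD values, so a shortest-path
    search over edge pixels naturally favors stronger (lower-LOD) edges.
    """
    return int(lod[p]) + int(lod[q])
```

With two thresholds, a pixel that only appears in the detailed map gets LOD 2, while a pixel present in the strong map gets LOD 1, matching the hierarchy the abstract describes from Scanny down to WcannyN.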
- Files in This Item
- There are no files associated with this item.
- Appears in Collections
- College of Engineering > Computer Engineering > Journal Articles
Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.