Object boundary edge selection using level-of-detail Canny edges
- Authors
- Park, J.; Park, S.
- Issue Date
- 2004
- Publisher
- Springer Verlag
- Citation
- Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), v.3046 LNCS, no.PART 4, pp.369 - 378
- Journal Title
- Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
- Volume
- 3046 LNCS
- Number
- PART 4
- Start Page
- 369
- End Page
- 378
- URI
- https://scholarworks.bwise.kr/hongik/handle/2020.sw.hongik/25884
- DOI
- 10.1007/978-3-540-24768-5_39
- ISSN
- 0302-9743
- Abstract
- Recently, Nguyen proposed a method [1] for tracking a non-parameterized object (subject) contour in a single video stream with a moving camera and changing background. Nguyen's approach combines the outputs of two steps: creating a predicted contour and removing background edges. Because Nguyen's background edge removal leaves many irrelevant edges, it is prone to inaccurate contour tracking in a complex scene, and its combining of the predicted contour computed from the previous frame accumulates tracking error. We propose a new method for tracking a non-parameterized subject contour in a single video stream with a moving camera and changing background. Our method is based on level-of-detail (LOD) Canny edge maps and graph-based routing operations on the LOD maps. We compute a predicted contour as Nguyen does, but to reduce the side effects of irrelevant edges, we begin our basic tracking with simple (strong) Canny edges, called Scanny edges, generated from large image intensity gradients of the input image. Starting from the Scanny edges, we collect additional edge pixels, ranging from the simple Canny edge maps down to the most detailed (weakest) Canny edge maps, called WCanny maps. If the Scanny edges are disconnected, routing between the disconnected parts is planned over the level-of-detail Canny edges, favoring stronger Canny edge pixels. Our accurate tracking rests on reducing the influence of irrelevant edges by selecting only the strongest edge pixels, thereby relying on the current frame's edge pixels as much as possible, in contrast to Nguyen's approach of always combining the previous contour. Our experimental results show that this tracking approach is robust enough to handle a complex-textured scene. © Springer-Verlag Berlin Heidelberg 2004.
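- The core idea described in the abstract, Canny edge maps layered by strength and a routing step that bridges gaps in the strong (Scanny) edges while favoring stronger pixels, can be sketched in a few lines. The sketch below is illustrative only and makes several assumptions not stated in the abstract: a toy gradient-magnitude grid stands in for a real Canny gradient image, the threshold values and cost weights are invented, and Dijkstra's algorithm is used as a generic stand-in for the paper's graph-based routing operation.

```python
# Illustrative sketch of LOD edge selection and gap routing.
# The grid, thresholds, and cost weights are hypothetical assumptions,
# not values from the paper.
import heapq

# Toy gradient-magnitude map standing in for a Canny gradient image.
GRAD = [
    [9, 9, 2, 1, 9, 9],
    [1, 8, 3, 3, 8, 1],
    [1, 1, 4, 4, 1, 1],
]

# LOD thresholds: level 0 = Scanny (strongest edges only);
# higher levels admit progressively weaker (WCanny) edges.
THRESHOLDS = [9, 7, 4, 2]

def lod_level(mag):
    """Strongest LOD level whose threshold the magnitude meets, else None."""
    for lvl, t in enumerate(THRESHOLDS):
        if mag >= t:
            return lvl
    return None

def route(start, goal):
    """Dijkstra over edge pixels; the step cost penalizes weaker
    (higher-LOD) pixels, so the route between disconnected Scanny
    parts favors the strongest available edges."""
    rows, cols = len(GRAD), len(GRAD[0])
    dist = {start: 0}
    prev = {}
    heap = [(0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            path = [(r, c)]
            while path[-1] != start:
                path.append(prev[path[-1]])
            return path[::-1]
        if d > dist.get((r, c), float("inf")):
            continue
        for dr, dc in ((0, 1), (0, -1), (1, 0), (-1, 0)):
            nr, nc = r + dr, c + dc
            if not (0 <= nr < rows and 0 <= nc < cols):
                continue
            lvl = lod_level(GRAD[nr][nc])
            if lvl is None:          # not an edge pixel at any LOD
                continue
            nd = d + 1 + lvl * 10    # step cost grows as edges weaken
            if nd < dist.get((nr, nc), float("inf")):
                dist[(nr, nc)] = nd
                prev[(nr, nc)] = (r, c)
                heapq.heappush(heap, (nd, (nr, nc)))
    return None

# Route between two Scanny pixels separated by a gap of weaker edges:
# the planner detours through the strongest intermediate pixels available.
path = route((0, 0), (0, 4))
```

In this toy grid the two strong-edge segments on the top row are separated by a non-edge pixel, so the router bridges the gap through the strongest weaker edges below rather than through the most detailed ones, mirroring the abstract's preference for stronger Canny pixels.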