Robust Feature Tracking in DVS Event Stream using Bezier Mapping
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Seok, Hochang | - |
dc.contributor.author | Lim, Jong woo | - |
dc.date.accessioned | 2022-07-08T09:22:07Z | - |
dc.date.available | 2022-07-08T09:22:07Z | - |
dc.date.created | 2021-05-14 | - |
dc.date.issued | 2020-03 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/hanyang/handle/2021.sw.hanyang/145977 | - |
dc.description.abstract | Unlike conventional cameras, event cameras capture the intensity changes at each pixel with very little delay. Such changes are recorded continuously as an event stream with their positions, timestamps, and polarities; thus there is no notion of a 'frame' as in conventional cameras. As many applications, including 3D pose estimation, use 2D trajectories of feature points, it is necessary to detect and track the feature points robustly and accurately in a continuous event stream. In conventional feature tracking algorithms for event streams, the events in fixed time intervals are converted into event images by stacking the events at their pixel locations, and the features are tracked in the event images. Such simple stacking of events yields blurry event images due to the camera motion, which can significantly degrade the tracking quality. We propose to align the events in the time intervals along Bézier curves to minimize the misalignment. Since the camera motion is unknown, the Bézier curve is estimated to maximize the variance of the warped event pixels. Instead of the initial patches for tracking, we use temporally integrated template patches, as they capture rich texture information from accurately aligned events. Extensive experimental evaluations in 2D feature tracking as well as 3D pose estimation show that our method significantly outperforms the conventional approaches. | - |
dc.language | English | - |
dc.language.iso | en | - |
dc.publisher | IEEE | - |
dc.title | Robust Feature Tracking in DVS Event Stream using Bezier Mapping | - |
dc.type | Article | - |
dc.contributor.affiliatedAuthor | Lim, Jong woo | - |
dc.identifier.doi | 10.1109/WACV45572.2020.9093607 | - |
dc.identifier.bibliographicCitation | 2020 IEEE Winter Conference on Applications of Computer Vision (WACV), pp.1658 - 1667 | - |
dc.relation.isPartOf | 2020 IEEE Winter Conference on Applications of Computer Vision (WACV) | - |
dc.citation.title | 2020 IEEE Winter Conference on Applications of Computer Vision (WACV) | - |
dc.citation.startPage | 1658 | - |
dc.citation.endPage | 1667 | - |
dc.type.rims | ART | - |
dc.type.docType | Proceeding | - |
dc.description.journalClass | 3 | - |
dc.description.isOpenAccess | N | - |
dc.description.journalRegisteredClass | other | - |
dc.identifier.url | https://ieeexplore.ieee.org/document/9093607/metrics#metrics | - |
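The abstract describes warping the events of a time interval along a Bézier curve and choosing the curve that maximizes the variance (contrast) of the resulting event image. The sketch below is only an illustration of that idea, not the authors' implementation: the event format `(x, y, t, polarity)`, the quadratic Bézier parameterization, and the function names are all assumptions made for clarity.

```python
import numpy as np

def bezier_point(p0, p1, p2, s):
    """Quadratic Bezier displacement evaluated at s in [0, 1]."""
    return (1 - s) ** 2 * p0 + 2 * (1 - s) * s * p1 + s ** 2 * p2

def warp_events(events, p0, p1, p2, t0, t1, shape):
    """Warp each event back to time t0 along the Bezier trajectory and
    stack the aligned events into an event image (polarity ignored here)."""
    img = np.zeros(shape)
    for x, y, t, pol in events:
        s = (t - t0) / (t1 - t0)              # normalized time in the interval
        disp = bezier_point(p0, p1, p2, s)    # displacement of the camera/patch at time t
        xw = int(round(x - disp[0]))          # undo the motion-induced shift
        yw = int(round(y - disp[1]))
        if 0 <= yw < shape[0] and 0 <= xw < shape[1]:
            img[yw, xw] += 1
    return img

def alignment_score(img):
    """Contrast objective: variance of the event image; higher = sharper."""
    return img.var()
```

A correctly estimated curve collapses the events of one feature onto few pixels, so `alignment_score` is higher for the aligned image than for the naive (zero-motion) stacking; maximizing this score over the Bézier control points recovers the unknown motion.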