VISUALCENT: Visual Human Analysis using Dynamic Centroid Representation
DC Field | Value | Language |
---|---|---|
dc.contributor.author | 이영문 | - |
dc.date.accessioned | 2025-05-01T07:30:26Z | - |
dc.date.available | 2025-05-01T07:30:26Z | - |
dc.date.issued | 2025-05 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/erica/handle/2021.sw.erica/125173 | - |
dc.description.abstract | We introduce VISUALCENT, a unified human pose and instance segmentation framework that addresses the generalizability and scalability limitations of multi-person visual human analysis. VISUALCENT leverages a centroid-based bottom-up keypoint detection paradigm and uses a Keypoint Heatmap incorporating Disk Representation and KeyCentroid to identify the optimal keypoint coordinates. For the unified segmentation task, an explicit keypoint is defined as a dynamic centroid, called MaskCentroid, that swiftly clusters pixels to a specific human instance during rapid changes in human body movement or in significantly occluded environments. Experimental results on the COCO and OCHuman datasets demonstrate VISUALCENT's accuracy and real-time performance advantages, outperforming existing methods in mAP score and frames per second. The implementation is available on the project page. (A minimal illustrative sketch of the centroid-based pixel assignment idea follows this record.) | - |
dc.format.extent | 5 | - |
dc.language | English | - |
dc.language.iso | ENG | - |
dc.publisher | IEEE | - |
dc.title | VISUALCENT: Visual Human Analysis using Dynamic Centroid Representation | - |
dc.type | Article | - |
dc.identifier.doi | 10.48550/arXiv.2504.19032 | - |
dc.identifier.bibliographicCitation | IEEE International Conference on Automatic Face and Gesture Recognition, pp. 1-5 | - |
dc.citation.title | IEEE International Conference on Automatic Face and Gesture Recognition | - |
dc.citation.startPage | 1 | - |
dc.citation.endPage | 5 | - |
dc.type.docType | Proceeding | - |
dc.description.isOpenAccess | N | - |
dc.description.journalRegisteredClass | foreign | - |
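
The abstract describes MaskCentroid as a dynamic centroid that clusters pixels to a specific human instance, but this record gives no implementation details. The NumPy sketch below illustrates only the generic nearest-centroid pixel-assignment idea that such clustering suggests; the function name `assign_pixels_to_instances`, the array shapes, and the use of static (rather than dynamically updated) centroids are assumptions made for illustration, not the paper's actual method.

```python
import numpy as np

def assign_pixels_to_instances(foreground_mask, centroids):
    """Assign each foreground pixel to its nearest instance centroid.

    foreground_mask: (H, W) bool array marking person pixels.
    centroids: (N, 2) array of (y, x) instance centroids, standing in
               for the paper's MaskCentroid keypoints.
    Returns an (H, W) int array: -1 for background, otherwise the
    index of the nearest centroid.
    """
    labels = np.full(foreground_mask.shape, -1, dtype=np.int64)
    ys, xs = np.nonzero(foreground_mask)
    if len(ys) == 0 or len(centroids) == 0:
        return labels
    pixels = np.stack([ys, xs], axis=1).astype(np.float64)      # (P, 2)
    c = np.asarray(centroids, dtype=np.float64)                 # (N, 2)
    # Squared Euclidean distance from every pixel to every centroid.
    d2 = ((pixels[:, None, :] - c[None, :, :]) ** 2).sum(axis=-1)
    labels[ys, xs] = d2.argmin(axis=1)
    return labels

# Toy usage: two person regions, one hypothetical centroid each.
mask = np.zeros((6, 10), dtype=bool)
mask[1:5, 1:4] = True   # person A region
mask[1:5, 6:9] = True   # person B region
centroids = np.array([[3.0, 2.0], [3.0, 7.0]])  # (y, x) per instance
labels = assign_pixels_to_instances(mask, centroids)
```

The vectorized distance matrix is O(P x N) in pixels and instances, which is cheap for the handful of people per image typical of COCO and OCHuman scenes. Note that the "dynamic" aspect the abstract emphasizes, updating the centroid as the body moves or becomes occluded, is deliberately omitted here.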