VisualCent: Visual Human Analysis using Dynamic Centroid Representation
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Ahmad, Niaz | - |
dc.contributor.author | Lee, Youngmoon | - |
dc.contributor.author | Wang, Guanghui | - |
dc.date.accessioned | 2025-09-16T07:30:20Z | - |
dc.date.available | 2025-09-16T07:30:20Z | - |
dc.date.issued | 2025-05 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/erica/handle/2021.sw.erica/126456 | - |
dc.description.abstract | We introduce VisualCent, a unified human pose and instance segmentation framework that addresses the generalizability and scalability limitations of multi-person visual human analysis. VisualCent leverages a centroid-based bottom-up keypoint detection paradigm and uses a Keypoint Heatmap incorporating Disk Representation and KeyCentroid to identify the optimal keypoint coordinates. For the unified segmentation task, an explicit keypoint is defined as a dynamic centroid called MaskCentroid to swiftly cluster pixels to a specific human instance during rapid changes in human body movement or in significantly occluded environments. Experimental results on the COCO and OCHuman datasets demonstrate VisualCent's accuracy and real-time performance advantages, outperforming existing methods in mAP score and execution frame rate. The implementation is available on the project page: https://sites.google.com/view/niazahmad/projects/visualcent © 2025 Elsevier B.V., All rights reserved. | - |
dc.language | English | - |
dc.language.iso | ENG | - |
dc.publisher | Institute of Electrical and Electronics Engineers Inc. | - |
dc.title | VisualCent: Visual Human Analysis using Dynamic Centroid Representation | - |
dc.type | Article | - |
dc.publisher.location | United States | - |
dc.identifier.doi | 10.1109/FG61629.2025.11099400 | - |
dc.identifier.scopusid | 2-s2.0-105014498539 | - |
dc.identifier.bibliographicCitation | 2025 IEEE 19th International Conference on Automatic Face and Gesture Recognition (FG) | - |
dc.citation.title | 2025 IEEE 19th International Conference on Automatic Face and Gesture Recognition (FG) | - |
dc.type.docType | Conference paper | - |
dc.description.isOpenAccess | N | - |
dc.description.journalRegisteredClass | scopus | - |
dc.subject.keywordAuthor | Disk Representation | - |
dc.subject.keywordAuthor | Frame-rate | - |
dc.subject.keywordAuthor | Heatmaps | - |
dc.subject.keywordAuthor | Human Analysis | - |
dc.subject.keywordAuthor | Human Body Movement | - |
dc.subject.keywordAuthor | Human Pose | - |
dc.subject.keywordAuthor | Keypoint Detection | - |
dc.subject.keywordAuthor | Keypoints | - |
dc.subject.keywordAuthor | Real Time Performance | - |
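
The abstract describes two centroid-driven steps: decoding keypoints from heatmaps refined over a disk-shaped neighborhood (Disk Representation / KeyCentroid), and grouping foreground pixels to a per-instance dynamic centroid (MaskCentroid). The paper's actual architecture is not reproduced in this record, so the sketch below is only an illustrative NumPy approximation of those two ideas; the function names, the disk radius, and the nearest-centroid pixel assignment are assumptions for illustration, not the published implementation.

```python
import numpy as np

def decode_keypoints(heatmaps, disk_radius=3):
    """For each keypoint channel, take the heatmap peak and refine it as the
    confidence-weighted centroid of a small disk around that peak
    (a rough analogue of the KeyCentroid / Disk Representation idea)."""
    K, H, W = heatmaps.shape
    ys, xs = np.mgrid[0:H, 0:W]
    keypoints = []
    for k in range(K):
        py, px = np.unravel_index(np.argmax(heatmaps[k]), (H, W))
        disk = (ys - py) ** 2 + (xs - px) ** 2 <= disk_radius ** 2
        w = heatmaps[k] * disk
        total = w.sum() + 1e-8
        cx, cy = (w * xs).sum() / total, (w * ys).sum() / total
        keypoints.append((cx, cy, float(heatmaps[k, py, px])))
    return keypoints  # list of (x, y, score)

def assign_pixels_to_instances(fg_mask, centroids):
    """Label every foreground pixel with the index of its nearest instance
    centroid -- a simplified stand-in for MaskCentroid-based grouping."""
    H, W = fg_mask.shape
    ys, xs = np.nonzero(fg_mask)
    pts = np.stack([xs, ys], axis=1).astype(np.float32)        # (N, 2) pixel coords
    cents = np.asarray(centroids, dtype=np.float32)            # (M, 2) instance centroids
    d2 = ((pts[:, None, :] - cents[None, :, :]) ** 2).sum(-1)  # (N, M) squared distances
    labels = np.full((H, W), -1, dtype=np.int32)               # -1 marks background
    labels[ys, xs] = d2.argmin(axis=1)
    return labels

# Toy usage: a random 17-channel heatmap and two hypothetical person centroids.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    heatmaps = rng.random((17, 64, 48)).astype(np.float32)
    print(decode_keypoints(heatmaps)[:3])
    fg = np.zeros((64, 48), dtype=bool)
    fg[10:40, 5:40] = True
    print(np.unique(assign_pixels_to_instances(fg, [(12.0, 20.0), (35.0, 30.0)])))
```

The nearest-centroid assignment is a deliberately minimal stand-in: the abstract indicates that MaskCentroid updates dynamically with body movement and occlusion, which this static sketch does not model.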