Robust kernel-based feature representation for 3D point cloud analysis via circular convolutional network
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Jung, S. | - |
dc.contributor.author | Shin, Y.-G. | - |
dc.contributor.author | Chung, M. | - |
dc.date.accessioned | 2023-05-25T08:40:04Z | - |
dc.date.available | 2023-05-25T08:40:04Z | - |
dc.date.issued | 2023-06 | - |
dc.identifier.issn | 1077-3142 | - |
dc.identifier.issn | 1090-235X | - |
dc.identifier.uri | https://scholarworks.bwise.kr/ssu/handle/2018.sw.ssu/43911 | - |
dc.description.abstract | Feature descriptors of point clouds are used in several applications, such as registration and part segmentation of 3D point clouds. Learning representations of local geometric features is unquestionably the most important task for accurate point cloud analyses. However, it is challenging to develop rotation- or scale-invariant descriptors. Most previous studies have either ignored rotations or empirically tuned scale parameters, which hinders the applicability of these methods to real-world datasets. In this paper, we present a new local feature description method that is robust to rotation and scale variations. Moreover, we improve the representations with a global aggregation method. First, we place kernels around each point, aligned with its normal direction. To avoid the sign ambiguity of the normal vector, we use a symmetric kernel point distribution in the tangential plane. From each kernel point, we project the points from the spatial space to a feature space based on angles and distances, which is robust to multiple scales and rotations. Subsequently, we perform convolutions that consider both local kernel point structures and long-range global context, obtained by a global aggregation method. We evaluated our proposed descriptors on benchmark datasets (i.e., ModelNet40 and ShapeNetPart) for registration, classification, and part segmentation of 3D point clouds. Our method outperformed state-of-the-art methods, reducing rotation and translation errors by 70% in the registration task. It also showed comparable performance in the classification and part-segmentation tasks without any external data augmentation. © 2023 Elsevier Inc. | - |
dc.language | English | - |
dc.language.iso | ENG | - |
dc.publisher | Academic Press Inc. | - |
dc.title | Robust kernel-based feature representation for 3D point cloud analysis via circular convolutional network | - |
dc.type | Article | - |
dc.identifier.doi | 10.1016/j.cviu.2023.103678 | - |
dc.identifier.bibliographicCitation | Computer Vision and Image Understanding, v.231 | - |
dc.identifier.wosid | 000972664500001 | - |
dc.identifier.scopusid | 2-s2.0-85150289670 | - |
dc.citation.title | Computer Vision and Image Understanding | - |
dc.citation.volume | 231 | - |
dc.publisher.location | United States | - |
dc.type.docType | Article | - |
dc.description.isOpenAccess | Y | - |
dc.subject.keywordAuthor | 3D point cloud analysis | - |
dc.subject.keywordAuthor | Angle-based kernel convolutions | - |
dc.subject.keywordAuthor | Global context aggregation | - |
dc.subject.keywordAuthor | Rotation-robust point descriptor | - |
dc.subject.keywordAuthor | Scale adaptation | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalResearchArea | Engineering | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Artificial Intelligence | - |
dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
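The abstract's central idea is to describe local geometry with quantities that do not change under rigid rotation (distances, and angles relative to an estimated normal), while sidestepping the normal-sign ambiguity via a symmetric construction. The sketch below illustrates that general idea only; it is not the authors' descriptor. The function name, the PCA-based normal estimate, the absolute-value trick for the sign ambiguity, and the radius normalization for scale robustness are all illustrative assumptions.

```python
import numpy as np

def rotation_invariant_features(neighbors, center):
    """Hypothetical sketch of rotation- and scale-robust local features.

    Encodes each neighbor by (normalized distance, |cosine| of the angle
    to a PCA-estimated normal) -- both unchanged under rigid rotation.
    """
    rel = neighbors - center                      # local coordinates
    # Estimate the surface normal as the smallest-variance PCA axis.
    cov = rel.T @ rel
    _, eigvecs = np.linalg.eigh(cov)              # eigenvalues ascending
    normal = eigvecs[:, 0]                        # smallest-eigenvalue axis
    dists = np.linalg.norm(rel, axis=1)
    # |dot| sidesteps the normal-sign ambiguity noted in the abstract.
    cosang = np.abs(rel @ normal) / np.maximum(dists, 1e-12)
    # Crude scale robustness: normalize by the neighborhood radius.
    dists = dists / dists.max()
    return np.stack([dists, cosang], axis=1)      # (N, 2) invariant features
```

Because every quantity is a distance or an angle magnitude, applying the same rigid rotation to the neighborhood and its center leaves the output unchanged, which is the invariance property the abstract targets.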