Segmentation-Based Depth Map Adjustment for Improved Grasping Pose Detection (물체 파지점 검출 향상을 위한 분할 기반 깊이 지도 조정)
DC Field | Value | Language |
---|---|---|
dc.contributor.author | 신현수 | - |
dc.contributor.author | 무하마드 라힐 아파잘 | - |
dc.contributor.author | 이성온 | - |
dc.date.accessioned | 2024-04-03T08:30:36Z | - |
dc.date.available | 2024-04-03T08:30:36Z | - |
dc.date.issued | 2024-02 | - |
dc.identifier.issn | 1975-6291 | - |
dc.identifier.issn | 2287-3961 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/erica/handle/2021.sw.erica/118408 | - |
dc.description.abstract | Robotic grasping in unstructured environments poses a significant challenge, demanding precise estimation of gripping positions for diverse and unknown objects. The Generative Grasping Convolutional Neural Network (GG-CNN) can estimate the position and orientation at which a robot gripper can grasp an unknown object from a three-dimensional depth map. Since GG-CNN takes only a depth map as input, the precision of the depth map is the most critical factor affecting the result. To address the challenge of depth map precision, we integrate the Segment Anything Model (SAM), renowned for its robust zero-shot performance across various segmentation tasks. We adjust the depth-map components corresponding to the segmented regions, using a depth map aligned through extrinsic calibration. The proposed method was validated on the Cornell and SurgicalKit datasets. Quantitative comparison with existing methods showed a 49.8% improvement on the dataset including surgical instruments. The results highlight the practical importance of our approach, especially in scenarios involving thin and metallic objects. | - |
dc.format.extent | 7 | - |
dc.language | Korean | - |
dc.language.iso | KOR | - |
dc.publisher | 한국로봇학회 | - |
dc.title | 물체 파지점 검출 향상을 위한 분할 기반 깊이 지도 조정 | - |
dc.title.alternative | Segmentation-Based Depth Map Adjustment for Improved Grasping Pose Detection | - |
dc.type | Article | - |
dc.publisher.location | Republic of Korea | - |
dc.identifier.doi | 10.7746/jkros.2024.19.1.016 | - |
dc.identifier.bibliographicCitation | 로봇학회 논문지, v.19, no.1, pp. 16-22 | - |
dc.citation.title | 로봇학회 논문지 | - |
dc.citation.volume | 19 | - |
dc.citation.number | 1 | - |
dc.citation.startPage | 16 | - |
dc.citation.endPage | 22 | - |
dc.identifier.kciid | ART003055496 | - |
dc.description.isOpenAccess | N | - |
dc.description.journalRegisteredClass | kci | - |
dc.subject.keywordAuthor | Segmentation | - |
dc.subject.keywordAuthor | Deep Learning | - |
dc.subject.keywordAuthor | Robotic Grasping | - |
dc.identifier.url | https://jkros.org/_common/do.php?a=full&b=33&bidx=3565&aidx=39599 | - |
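The abstract describes adjusting depth values inside regions segmented by SAM, which matters most where depth sensors fail on thin or metallic objects such as surgical instruments. A minimal sketch of one such adjustment is shown below, assuming a hypothetical `adjust_depth_with_mask` helper and a simple median-fill strategy for invalid (zero) depth pixels; the paper's actual adjustment procedure may differ:

```python
import numpy as np

def adjust_depth_with_mask(depth: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Fill unreliable depth readings inside a segmented object region.

    depth: HxW depth map in meters; 0 marks missing/invalid readings.
    mask:  HxW binary segmentation mask for one object (e.g. from SAM).

    Illustrative strategy (an assumption, not the paper's method): replace
    invalid pixels inside the mask with the median of the valid depth
    values observed on the same object.
    """
    adjusted = depth.copy()
    region = mask.astype(bool)
    valid = region & (depth > 0)       # reliable readings on the object
    invalid = region & (depth <= 0)    # sensor dropouts on the object
    if valid.any() and invalid.any():
        adjusted[invalid] = np.median(depth[valid])
    return adjusted
```

The adjusted depth map would then be passed to GG-CNN in place of the raw sensor output; per-object masks keep the fill value local, so a dropout on a thin metal tool is filled from that tool's own depth rather than from the background.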