Detailed Information


Segmentation-Based Depth Map Adjustment for Improved Grasping Pose Detection

Other Titles
Segmentation-Based Depth Map Adjustment for Improved Grasping Pose Detection
Authors
Hyunsu Shin; Muhammad Raheel Afzal; Sung-on Lee
Issue Date
Feb-2024
Publisher
Korea Robotics Society
Keywords
Segmentation; Deep Learning; Robotic Grasping
Citation
The Journal of Korea Robotics Society, v.19, no.1, pp. 16-22
Pages
7
Indexed
KCI
Journal Title
The Journal of Korea Robotics Society
Volume
19
Number
1
Start Page
16
End Page
22
URI
https://scholarworks.bwise.kr/erica/handle/2021.sw.erica/118408
DOI
10.7746/jkros.2024.19.1.016
ISSN
1975-6291
2287-3961
Abstract
Robotic grasping in unstructured environments poses a significant challenge, demanding precise estimation of grasping poses for diverse, unknown objects. The Generative Grasping Convolutional Neural Network (GG-CNN) estimates the position and orientation at which a robot gripper can grasp an unknown object from a depth map. Since GG-CNN takes only a depth map as input, the precision of the depth map is the most critical factor affecting the result. To address this, we integrate the Segment Anything Model (SAM), renowned for its robust zero-shot performance across diverse segmentation tasks, and adjust the depth-map values corresponding to the segmented regions, which are aligned to the depth frame through extrinsic calibration. The proposed method was validated on the Cornell dataset and a SurgicalKit dataset; quantitative comparison against existing methods showed a 49.8% improvement on the dataset containing surgical instruments. These results highlight the practical value of our approach, especially in scenarios involving thin and metallic objects.
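The depth-adjustment step described in the abstract can be sketched as follows. This is a minimal illustration, assuming SAM supplies a binary object mask already aligned to the depth frame via extrinsic calibration; the paper's exact adjustment rule is not reproduced here, so filling sensor dropouts inside the segmented region with the median of the valid masked depths is used as an illustrative stand-in.

```python
import numpy as np

def adjust_depth_with_mask(depth: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Repair unreliable depth readings inside a segmented object region.

    Thin or metallic objects often cause dropouts (zero values) in
    consumer depth sensors. Given a binary segmentation mask (e.g. from
    SAM) aligned to the depth frame, fill invalid pixels in the masked
    region with the median of the valid masked depths. The median rule
    is an illustrative stand-in, not the paper's exact method.
    """
    adjusted = depth.astype(np.float64).copy()
    region = mask.astype(bool)
    valid = region & (adjusted > 0)       # depth == 0 marks a sensor dropout
    if valid.any():
        fill_value = np.median(adjusted[valid])
        holes = region & (adjusted <= 0)  # dropouts inside the object
        adjusted[holes] = fill_value
    return adjusted
```

The repaired depth map can then be fed to GG-CNN in place of the raw sensor frame, which is where the abstract attributes the accuracy gain on thin, metallic instruments.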
Appears in
Collections
COLLEGE OF ENGINEERING SCIENCES > DEPARTMENT OF ROBOT ENGINEERING > 1. Journal Articles
Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher


Lee, Sung on
ERICA College of Engineering Sciences (Department of Robot Engineering)
