Integration of deep learning-based object recognition and robot manipulator for grasping objects
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Shin, Hyunsoo | - |
dc.contributor.author | Hwang, Hyunho | - |
dc.contributor.author | Yoon, Hyunseok | - |
dc.contributor.author | Lee, Sungon | - |
dc.date.accessioned | 2021-06-22T11:02:07Z | - |
dc.date.available | 2021-06-22T11:02:07Z | - |
dc.date.issued | 2019-06 | - |
dc.identifier.issn | 2325-033X | - |
dc.identifier.uri | https://scholarworks.bwise.kr/erica/handle/2021.sw.erica/4603 | - |
dc.description.abstract | Many industrial robots have been applied to relatively simple, repetitive tasks. However, with the rapid development of deep learning in the Fourth Industrial Revolution, the role of robots is expected to expand to tasks that humans can do. For example, in service robotics, shelf stocking and replenishment of many types of products such as food, and automatic cleaning of rooms at home, are being studied. In this paper, as a basis for such future technology, we integrate a deep-learning-based object recognition system with a grasping system consisting of a serial manipulator and a gripper. With the integrated system we conduct bin-picking, i.e., picking up objects and moving them to a bin. In this task, we adopt Mask R-CNN [3], which is widely used for object segmentation, to determine the kind and shape of each object. After segmentation on the image, we obtain object poses using the center of gravity and a simple algorithm for orientation. We assume that the objects lie on a plane rather than in full 3D space, so we determine only the in-plane rotation of each object. Also, to allow the camera to be positioned flexibly, a marker is attached to the robot, and the transformation between the camera and the robot is registered using this marker. Bin-picking experiments on four household objects took an average of 30 seconds per object; although this time could be shortened, the speed of the manipulator was limited for safety. Through these experiments, we verified the effectiveness of this integration of deep-learning-based object recognition and a grasping system. In future work, we aim to complete 6D pose estimation for more sophisticated tasks. © 2019 IEEE. | - |
dc.format.extent | 5 | - |
dc.language | English | - |
dc.language.iso | ENG | - |
dc.publisher | Institute of Electrical and Electronics Engineers Inc. | - |
dc.title | Integration of deep learning-based object recognition and robot manipulator for grasping objects | - |
dc.type | Article | - |
dc.publisher.location | United States | - |
dc.identifier.doi | 10.1109/URAI.2019.8768650 | - |
dc.identifier.scopusid | 2-s2.0-85070536861 | - |
dc.identifier.bibliographicCitation | 2019 16th International Conference on Ubiquitous Robots, UR 2019, pp 174 - 178 | - |
dc.citation.title | 2019 16th International Conference on Ubiquitous Robots, UR 2019 | - |
dc.citation.startPage | 174 | - |
dc.citation.endPage | 178 | - |
dc.type.docType | Conference Paper | - |
dc.description.isOpenAccess | N | - |
dc.description.journalRegisteredClass | scopus | - |
dc.subject.keywordPlus | Cameras | - |
dc.subject.keywordPlus | Image segmentation | - |
dc.subject.keywordPlus | Industrial robots | - |
dc.subject.keywordPlus | Manipulators | - |
dc.subject.keywordPlus | Object recognition | - |
dc.subject.keywordPlus | Robot applications | - |
dc.subject.keywordPlus | Center of gravity | - |
dc.subject.keywordPlus | Direction of rotation | - |
dc.subject.keywordPlus | Industrial revolutions | - |
dc.subject.keywordPlus | Integrated systems | - |
dc.subject.keywordPlus | Learning approach | - |
dc.subject.keywordPlus | Object recognition systems | - |
dc.subject.keywordPlus | Object segmentation | - |
dc.subject.keywordPlus | Serial manipulators | - |
dc.subject.keywordPlus | Deep learning | - |
dc.identifier.url | https://ieeexplore.ieee.org/document/8768650 | - |
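The abstract describes obtaining planar object poses from the segmentation output via the center of gravity and a simple orientation rule. A minimal sketch of one common way to do this (an assumption, not the paper's exact algorithm) is to take the centroid of the mask pixels as the position and the principal axis of their covariance as the in-plane rotation:

```python
import numpy as np

def planar_pose_from_mask(mask):
    """Estimate a 2D pose (centroid + in-plane rotation) from a binary
    segmentation mask. The centroid is the center of gravity of the mask
    pixels; the orientation is the principal axis of their covariance.
    This is an illustrative sketch, not the paper's exact method."""
    ys, xs = np.nonzero(mask)                 # pixel coordinates inside the mask
    cx, cy = xs.mean(), ys.mean()             # center of gravity
    pts = np.stack([xs - cx, ys - cy])        # centered coordinates, shape (2, N)
    cov = pts @ pts.T / pts.shape[1]          # 2x2 covariance of mask pixels
    eigvals, eigvecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
    major = eigvecs[:, np.argmax(eigvals)]    # principal (longest) axis
    theta = np.arctan2(major[1], major[0])    # in-plane rotation, radians
    return cx, cy, theta
```

The pixel pose would then be mapped into the robot frame using the camera-to-robot transformation registered via the marker attached to the robot.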