Acceleration of Actor-Critic Deep Reinforcement Learning for Visual Grasping by State Representation Learning Based on a Preprocessed Input Image
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Kim, Taewon | - |
dc.contributor.author | Park, Yeseong | - |
dc.contributor.author | Park, Youngbin | - |
dc.contributor.author | Lee, Sang Hyoung | - |
dc.contributor.author | Suh, Il Hong | - |
dc.date.accessioned | 2022-07-06T10:53:21Z | - |
dc.date.available | 2022-07-06T10:53:21Z | - |
dc.date.created | 2022-04-06 | - |
dc.date.issued | 2021-12 | - |
dc.identifier.issn | 2153-0858 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/hanyang/handle/2021.sw.hanyang/140047 | - |
dc.description.abstract | For robotic grasping tasks with diverse target objects, some deep learning-based methods have achieved state-of-the-art results using direct visual input. In contrast, actor-critic deep reinforcement learning (RL) methods typically perform very poorly when applied to grasping diverse objects, especially when learning from raw images and sparse rewards. To make these RL techniques feasible for vision-based grasping tasks, we used state representation learning (SRL), in which we encode essential information for subsequent use in RL. However, typical representation learning procedures are unsuitable for extracting the information pertinent to learning grasping skills, owing to the high complexity of the visual inputs in which a robot attempts to grasp a target object. We found that the proposed preprocessed input image is the key to effectively capturing a compact representation. This enables deep RL to learn robotic grasping skills from highly varied and diverse visual inputs. Further, we demonstrate the effectiveness of the proposed approach at varying levels of preprocessing in a realistic simulated environment. We also describe how the resulting model can be transferred to a real-world robot, demonstrating a 68% success rate on real-world grasp attempts. | - |
dc.language | English | - |
dc.language.iso | en | - |
dc.publisher | IEEE | - |
dc.title | Acceleration of Actor-Critic Deep Reinforcement Learning for Visual Grasping by State Representation Learning Based on a Preprocessed Input Image | - |
dc.type | Article | - |
dc.contributor.affiliatedAuthor | Kim, Taewon | - |
dc.identifier.doi | 10.1109/IROS51168.2021.9635931 | - |
dc.identifier.scopusid | 2-s2.0-85124368128 | - |
dc.identifier.wosid | 000755125500021 | - |
dc.identifier.bibliographicCitation | 2021 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), pp.198 - 205 | - |
dc.relation.isPartOf | 2021 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS) | - |
dc.citation.title | 2021 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS) | - |
dc.citation.startPage | 198 | - |
dc.citation.endPage | 205 | - |
dc.type.rims | ART | - |
dc.type.docType | Proceedings Paper | - |
dc.description.journalClass | 1 | - |
dc.description.isOpenAccess | N | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Automation & Control Systems | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalResearchArea | Engineering | - |
dc.relation.journalResearchArea | Robotics | - |
dc.relation.journalWebOfScienceCategory | Automation & Control Systems | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Artificial Intelligence | - |
dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
dc.relation.journalWebOfScienceCategory | Robotics | - |
dc.subject.keywordPlus | End effectors | - |
dc.subject.keywordPlus | Intelligent robots | - |
dc.subject.keywordPlus | Reinforcement learning | - |
dc.subject.keywordPlus | Robotics | - |
dc.subject.keywordPlus | Actor critic | - |
dc.subject.keywordPlus | Diverse objects | - |
dc.subject.keywordPlus | Input image | - |
dc.subject.keywordPlus | Learning-based methods | - |
dc.subject.keywordPlus | Real-world | - |
dc.subject.keywordPlus | Reinforcement learning method | - |
dc.subject.keywordPlus | Robotic grasping | - |
dc.subject.keywordPlus | State of the art | - |
dc.subject.keywordPlus | State representation | - |
dc.subject.keywordPlus | Target object | - |
dc.subject.keywordPlus | Deep learning | - |
dc.identifier.url | https://ieeexplore.ieee.org/document/9635931 | - |
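The abstract describes a pipeline in which a preprocessed input image is encoded into a compact state representation, which an actor-critic agent then consumes in place of raw pixels. A minimal sketch of that idea is below; all names, dimensions, and the linear encoder/heads are illustrative assumptions, not the paper's actual architecture or preprocessing.

```python
import numpy as np

rng = np.random.default_rng(0)

def preprocess(image):
    """Toy stand-in for image preprocessing: normalize pixel
    values and coarsely downsample by striding."""
    img = image.astype(np.float32) / 255.0
    return img[::4, ::4]

def encode_state(image, W_enc):
    """Toy SRL encoder: one linear projection of the flattened
    preprocessed image to a compact state vector."""
    return np.tanh(preprocess(image).ravel() @ W_enc)

# Illustrative dimensions: 64x64 grayscale input, 32-D state,
# 4-D continuous action (e.g., end-effector displacement + grip).
H = W = 64
state_dim, action_dim = 32, 4
flat_dim = (H // 4) * (W // 4)

W_enc = rng.normal(scale=0.1, size=(flat_dim, state_dim))
W_actor = rng.normal(scale=0.1, size=(state_dim, action_dim))
W_critic = rng.normal(scale=0.1, size=(state_dim, 1))

image = rng.integers(0, 256, size=(H, W))
state = encode_state(image, W_enc)   # compact representation
action = np.tanh(state @ W_actor)    # actor head
value = float(state @ W_critic)      # critic head

print(state.shape, action.shape)
```

The point of the sketch is the interface: the RL agent's actor and critic operate on the low-dimensional `state`, not the image, which is what makes sparse-reward actor-critic learning tractable per the abstract.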
Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.