Learning cooperative dynamic manipulation skills from human demonstration videos
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Iodice, Francesco | - |
dc.contributor.author | Wu, Yuqiang | - |
dc.contributor.author | Kim, Wansoo | - |
dc.contributor.author | Zhao, Fei | - |
dc.contributor.author | De Momi, Elena | - |
dc.contributor.author | Ajoudani, Arash | - |
dc.date.accessioned | 2022-07-06T02:43:54Z | - |
dc.date.available | 2022-07-06T02:43:54Z | - |
dc.date.created | 2022-06-08 | - |
dc.date.issued | 2022-08 | - |
dc.identifier.issn | 0957-4158 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/erica/handle/2021.sw.erica/107599 | - |
dc.description.abstract | This article proposes a method for learning and robotic replication of dynamic collaborative tasks from offline videos. The objective is to extend the concept of learning from demonstration (LfD) to dynamic scenarios, benefiting from widely available or easily producible offline videos. To achieve this goal, we decode important dynamic information, such as the Configuration Dependent Stiffness (CDS), which reveals the contribution of arm pose to the arm endpoint stiffness, from a three-dimensional human skeleton model. Next, by encoding the CDS via a Gaussian Mixture Model (GMM) and decoding it via Gaussian Mixture Regression (GMR), the robot's Cartesian impedance profile is estimated and replicated. We demonstrate the proposed method in a collaborative sawing task with a leader-follower structure, considering environmental constraints and dynamic uncertainties. The experimental setup includes two Panda robots, which replicate the leader-follower roles and the impedance profiles extracted from a two-person sawing video. | - |
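The GMM/GMR step mentioned in the abstract can be illustrated with a minimal sketch: fit a GMM jointly over an input (e.g. time) and the stiffness signal, then regress the conditional mean of stiffness given the input. The 1-D synthetic stiffness signal, variable names, and component count below are assumptions for illustration, not the paper's implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic stand-in for a CDS-derived stiffness profile (assumption):
# a smooth time-varying stiffness with measurement noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
stiffness = 300.0 + 150.0 * np.sin(2 * np.pi * t) + rng.normal(0, 10, t.size)
data = np.column_stack([t, stiffness])  # joint (input, output) samples

# Encode the joint distribution with a GMM.
gmm = GaussianMixture(n_components=4, covariance_type="full", random_state=0)
gmm.fit(data)

def gmr(gmm, t_query):
    """Decode via GMR: conditional mean E[stiffness | t] from the joint GMM."""
    t_query = np.atleast_1d(np.asarray(t_query, dtype=float))
    means, covs, w = gmm.means_, gmm.covariances_, gmm.weights_
    K = w.size
    # Responsibility of each component for the input dimension alone.
    probs = np.empty((t_query.size, K))
    for k in range(K):
        mu_t, var_t = means[k, 0], covs[k, 0, 0]
        probs[:, k] = (w[k] / np.sqrt(2 * np.pi * var_t)
                       * np.exp(-0.5 * (t_query - mu_t) ** 2 / var_t))
    h = probs / probs.sum(axis=1, keepdims=True)
    # Component-wise conditional means, blended by responsibility.
    cond = np.empty((t_query.size, K))
    for k in range(K):
        gain = covs[k, 1, 0] / covs[k, 0, 0]
        cond[:, k] = means[k, 1] + gain * (t_query - means[k, 0])
    return (h * cond).sum(axis=1)

profile = gmr(gmm, t)  # smooth stiffness estimate over the demonstration
```

In the paper's setting the regressed profile would parameterize the robot's Cartesian impedance controller; here it is simply a smoothed estimate of the noisy input signal.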
dc.language | English | - |
dc.language.iso | en | - |
dc.publisher | PERGAMON-ELSEVIER SCIENCE LTD | - |
dc.title | Learning cooperative dynamic manipulation skills from human demonstration videos | - |
dc.type | Article | - |
dc.contributor.affiliatedAuthor | Kim, Wansoo | - |
dc.identifier.doi | 10.1016/j.mechatronics.2022.102807 | - |
dc.identifier.wosid | 000797719400001 | - |
dc.identifier.bibliographicCitation | MECHATRONICS, v.85, pp.1 - 10 | - |
dc.relation.isPartOf | MECHATRONICS | - |
dc.citation.title | MECHATRONICS | - |
dc.citation.volume | 85 | - |
dc.citation.startPage | 1 | - |
dc.citation.endPage | 10 | - |
dc.type.rims | ART | - |
dc.type.docType | Article | - |
dc.description.journalClass | 1 | - |
dc.description.isOpenAccess | N | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Automation & Control Systems | - |
dc.relation.journalResearchArea | Engineering | - |
dc.relation.journalResearchArea | Robotics | - |
dc.relation.journalWebOfScienceCategory | Automation & Control Systems | - |
dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
dc.relation.journalWebOfScienceCategory | Engineering, Mechanical | - |
dc.relation.journalWebOfScienceCategory | Robotics | - |
dc.subject.keywordPlus | STIFFNESS | - |
dc.subject.keywordPlus | TELEOPERATION | - |
dc.subject.keywordPlus | FORCE | - |
dc.subject.keywordPlus | TASKS | - |
dc.subject.keywordAuthor | Transfer learning | - |
dc.subject.keywordAuthor | Multi-agent systems | - |
dc.subject.keywordAuthor | 3D pose estimation | - |
dc.subject.keywordAuthor | Visual imitation | - |
dc.subject.keywordAuthor | Human action | - |
dc.identifier.url | https://www.sciencedirect.com/science/article/pii/S0957415822000502 | - |