Multitask Learning for Multiple Recognition Tasks: A Framework for Lower-limb Exoskeleton Robot Applications
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Kim, Joonhyun | - |
dc.contributor.author | Ha, Seongmin | - |
dc.contributor.author | Shin, Dongbin | - |
dc.contributor.author | Ham, Seoyeon | - |
dc.contributor.author | Jang, Jaepil | - |
dc.contributor.author | Kim, Wansoo | - |
dc.date.accessioned | 2024-01-10T01:30:40Z | - |
dc.date.available | 2024-01-10T01:30:40Z | - |
dc.date.issued | 2024-01 | - |
dc.identifier.issn | 1944-9445 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/erica/handle/2021.sw.erica/116395 | - |
dc.description.abstract | To control a lower-limb exoskeleton robot effectively, it is essential to accurately recognize the user's status and the environmental conditions. Previous studies have typically addressed these recognition challenges with an independent model for each task, which makes model development inefficient. In this study, we propose a multitask learning approach that addresses multiple recognition challenges simultaneously. This approach improves data efficiency by enabling knowledge sharing between the recognition models. We demonstrate its effectiveness using gait phase recognition (GPR) and terrain classification (TC), two of the most common recognition tasks for lower-limb exoskeleton robots. We first trained a high-performing GPR model that achieved a root mean square error (RMSE) of 2.345 +/- 0.08, and then reused its knowledge-sharing backbone feature network to learn a TC model from an extremely limited dataset. Training the TC model on a limited dataset allows us to validate the data efficiency of the proposed multitask learning approach. We compared the accuracy of the proposed TC model against other TC baseline models: the proposed model achieved 99.5 +/- 0.044% accuracy with the limited dataset, outperforming the baselines and demonstrating its data efficiency. Future research will focus on extending the multitask learning framework to additional recognition tasks. (A minimal illustrative sketch of the shared-backbone, two-head design appears after this record.) | - |
dc.format.extent | 7 | - |
dc.language | English | - |
dc.language.iso | ENG | - |
dc.publisher | IEEE | - |
dc.title | Multitask Learning for Multiple Recognition Tasks: A Framework for Lower-limb Exoskeleton Robot Applications | - |
dc.type | Article | - |
dc.publisher.location | United States | - |
dc.identifier.doi | 10.1109/RO-MAN57019.2023.10309384 | - |
dc.identifier.scopusid | 2-s2.0-85186978764 | - |
dc.identifier.wosid | 001108678600192 | - |
dc.identifier.bibliographicCitation | 2023 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), pp 1530 - 1536 | - |
dc.citation.title | 2023 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN) | - |
dc.citation.startPage | 1530 | - |
dc.citation.endPage | 1536 | - |
dc.type.docType | Proceedings Paper | - |
dc.description.isOpenAccess | N | - |
dc.description.journalRegisteredClass | scie | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalResearchArea | Engineering | - |
dc.relation.journalResearchArea | Robotics | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Artificial Intelligence | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Cybernetics | - |
dc.relation.journalWebOfScienceCategory | Ergonomics | - |
dc.relation.journalWebOfScienceCategory | Robotics | - |
dc.identifier.url | https://ieeexplore.ieee.org/document/10309384 | - |
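The abstract above describes a two-stage, shared-backbone design: a backbone feature network is first trained with the GPR regression head, then reused to fit a TC classification head on a much smaller dataset. The sketch below is a minimal, hypothetical PyTorch illustration of that idea; the input size (64), layer widths, number of terrain classes (5), and the choice to freeze the backbone in stage 2 are assumptions for illustration and are not specified in this record.

```python
# Minimal sketch of a shared-backbone multitask setup (assumed architecture,
# not the paper's actual implementation).
import torch
import torch.nn as nn

class SharedBackbone(nn.Module):
    """Feature extractor shared between the GPR and TC heads."""
    def __init__(self, in_dim=64, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, feat_dim), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class GaitPhaseHead(nn.Module):
    """Regression head: predicts a continuous gait-phase value."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.out = nn.Linear(feat_dim, 1)

    def forward(self, feats):
        return self.out(feats)

class TerrainHead(nn.Module):
    """Classification head: predicts a terrain class from shared features."""
    def __init__(self, feat_dim=128, n_classes=5):
        super().__init__()
        self.out = nn.Linear(feat_dim, n_classes)

    def forward(self, feats):
        return self.out(feats)

# Stage 1: train backbone + GPR head on the full gait dataset (RMSE-style loss).
backbone, gpr_head = SharedBackbone(), GaitPhaseHead()
gpr_loss = nn.MSELoss()

# Stage 2: reuse the backbone (frozen here, as one possible choice) and fit
# only the TC head on a much smaller terrain dataset.
for p in backbone.parameters():
    p.requires_grad = False
tc_head = TerrainHead()
tc_loss = nn.CrossEntropyLoss()

# Forward pass with dummy data to show how both heads share one feature network.
x = torch.randn(8, 64)                        # batch of 8 sensor windows
phase = gpr_head(backbone(x))                 # continuous gait-phase estimates
terrain_logits = tc_head(backbone(x))         # terrain class logits

phase_target = torch.rand(8, 1)               # dummy gait-phase targets
terrain_target = torch.randint(0, 5, (8,))    # dummy terrain labels
print(gpr_loss(phase, phase_target).item(),
      tc_loss(terrain_logits, terrain_target).item())
```

Because the TC head only needs to map already-learned gait features to terrain classes, it can in principle be fit from far fewer labeled terrain samples than a model trained from scratch, which is the data-efficiency argument made in the abstract.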