Detailed Information

Cited 0 times in Web of Science · Cited 0 times in Scopus

Deep reinforcement learning for cooperative robots based on adaptive sentiment feedback

Full metadata record
dc.contributor.author: Jeon, Haein
dc.contributor.author: Kim, Dae-Won
dc.contributor.author: Kang, Bo-Yeong
dc.date.accessioned: 2024-01-24T05:01:28Z
dc.date.available: 2024-01-24T05:01:28Z
dc.date.issued: 2024-06
dc.identifier.issn: 0957-4174
dc.identifier.issn: 1873-6793
dc.identifier.uri: https://scholarworks.bwise.kr/cau/handle/2019.sw.cau/71331
dc.description.abstract: Human–robot cooperative tasks have gained importance with the emergence of robotics and artificial intelligence technology. In interactive reinforcement learning, robots learn target tasks by receiving feedback from an experienced human trainer. However, most interactive reinforcement learning studies require a separate process to integrate the trainer's feedback into the training dataset, making it difficult for robots to learn new tasks from humans in real time. Furthermore, the types of feedback sentences that trainers can use have been limited in previous research. To address these limitations, this paper proposes a robot teaching strategy that uses deep reinforcement learning via human–robot interaction to learn table-balancing tasks interactively. The proposed system employs a Deep Q-Network with real-time sentiment feedback delivered through the trainer's speech to learn cooperative tasks. We designed a novel reward function that incorporates sentiment feedback from human speech in real time during the learning process. The paper presents an improved reward-shaping technique based on subdivided feedback levels and shrinking feedback. This function serves as a guide for the robot to engage in natural interactions with humans and enables it to learn the tasks effectively. Experimental results demonstrate that the proposed interactive deep reinforcement learning model achieved a success rate of up to 99.06%, outperforming the model without sentiment feedback. © 2023
dc.language: English
dc.language.iso: ENG
dc.publisher: Elsevier Ltd
dc.title: Deep reinforcement learning for cooperative robots based on adaptive sentiment feedback
dc.type: Article
dc.identifier.doi: 10.1016/j.eswa.2023.121198
dc.identifier.bibliographicCitation: Expert Systems with Applications, v.243
dc.description.isOpenAccess: N
dc.identifier.wosid: 001139775500001
dc.identifier.scopusid: 2-s2.0-85179581967
dc.citation.title: Expert Systems with Applications
dc.citation.volume: 243
dc.type.docType: Article
dc.publisher.location: United Kingdom
dc.subject.keywordAuthor: Deep reinforcement learning
dc.subject.keywordAuthor: Human-in-the-loop
dc.subject.keywordAuthor: Human–robot interaction
dc.subject.keywordAuthor: Interactive reinforcement learning
dc.subject.keywordAuthor: Reward shaping
dc.relation.journalResearchArea: Computer Science
dc.relation.journalResearchArea: Engineering
dc.relation.journalResearchArea: Operations Research & Management Science
dc.relation.journalWebOfScienceCategory: Computer Science, Artificial Intelligence
dc.relation.journalWebOfScienceCategory: Engineering, Electrical & Electronic
dc.relation.journalWebOfScienceCategory: Operations Research & Management Science
dc.description.journalRegisteredClass: scopus
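The abstract describes a reward function that folds graded, real-time sentiment feedback from the trainer's speech into the reinforcement signal, with "shrinking feedback" reducing the human feedback's influence as training progresses. A minimal sketch of that general idea, assuming a five-level sentiment scale and exponential per-episode decay (the level mapping, names, and decay schedule are illustrative assumptions, not the paper's actual formulation):

```python
# Illustrative sketch: blend a task reward with subdivided sentiment
# feedback levels, shrinking the feedback's weight across episodes so
# the agent gradually relies on the environment reward alone.

# Hypothetical subdivided feedback levels mapped from trainer speech.
FEEDBACK_LEVELS = {
    "very_negative": -1.0,
    "negative": -0.5,
    "neutral": 0.0,
    "positive": 0.5,
    "very_positive": 1.0,
}

def shaped_reward(env_reward: float, sentiment: str, episode: int,
                  decay: float = 0.99) -> float:
    """Return the task reward plus decayed sentiment feedback."""
    feedback = FEEDBACK_LEVELS[sentiment]
    weight = decay ** episode  # "shrinking feedback": influence fades over time
    return env_reward + weight * feedback

# Early in training, positive speech feedback strongly boosts the reward...
early = shaped_reward(1.0, "very_positive", episode=0)    # 1.0 + 1.0 = 2.0
# ...while after many episodes the same feedback barely changes it.
late = shaped_reward(1.0, "very_positive", episode=500)
print(early, round(late, 4))
```

In a DQN training loop, `shaped_reward` would replace the raw environment reward in each stored transition, so feedback steers early exploration without biasing the converged policy.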
Files in This Item
There are no files associated with this item.
Appears in Collections
College of Software > School of Computer Science and Engineering > 1. Journal Articles

Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Kim, Dae-Won
College of Software (School of Software)
