Table-Balancing Cooperative Robot Based on Deep Reinforcement Learning (open access)
- Authors
- Kim, Yewon; Kim, Dae-Won; Kang, Bo-Yeong
- Issue Date
- May-2023
- Publisher
- MDPI
- Keywords
- cooperative robot; deep Q-network; human–robot interaction; reinforcement learning
- Citation
- Sensors, v.23, no.11
- Journal Title
- Sensors
- Volume
- 23
- Number
- 11
- URI
- https://scholarworks.bwise.kr/cau/handle/2019.sw.cau/67273
- DOI
- 10.3390/s23115235
- ISSN
- 1424-8220; 1424-3210
- Abstract
- Reinforcement learning is one of the artificial intelligence methods that enable robots to judge and operate situations on their own by learning to perform tasks. Previous reinforcement learning research has mainly focused on tasks performed by individual robots; however, everyday tasks, such as balancing tables, often require cooperation between two individuals to avoid injury when moving. In this research, we propose a deep reinforcement learning-based technique for robots to perform a table-balancing task in cooperation with a human. The cooperative robot proposed in this paper recognizes human behavior to balance the table. This recognition is achieved by utilizing the robot’s camera to take an image of the state of the table, then the table-balance action is performed afterward. Deep Q-network (DQN) is a deep reinforcement learning technology applied to cooperative robots. As a result of learning table balancing, on average, the cooperative robot showed a 90% optimal policy convergence rate in 20 runs of training with optimal hyperparameters applied to DQN-based techniques. In the H/W experiment, the trained DQN-based robot achieved an operation precision of 90%, thus verifying its excellent performance. © 2023 by the authors.
- Appears in Collections
- College of Software > School of Computer Science and Engineering > 1. Journal Articles
Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.