Sparse Actor-Critic: Sparse Tsallis Entropy Regularized Reinforcement Learning in a Continuous Action Space
- Authors
- Choy, Jaegoo; Lee, Kyungjae; Oh, Songhwai
- Issue Date
- Jun-2020
- Publisher
- IEEE
- Citation
- 2020 17th International Conference on Ubiquitous Robots (UR), pp. 68-73
- Pages
- 6
- Journal Title
- 2020 17th International Conference on Ubiquitous Robots (UR)
- Start Page
- 68
- End Page
- 73
- URI
- https://scholarworks.bwise.kr/cau/handle/2019.sw.cau/59360
- DOI
- 10.1109/UR49135.2020.9144780
- ISSN
- 2325-033X
- Abstract
- In deep reinforcement learning (RL), achieving high performance in complex continuous control tasks requires exploiting what has been learned while continuing to explore the environment. In this paper, we introduce a novel off-policy actor-critic reinforcement learning algorithm with a sparse Tsallis entropy regularizer. The regularized objective maximizes the expected return while maximizing the sparse Tsallis entropy of the policy. Maximizing the sparse Tsallis entropy encourages the actor to explore large state and action spaces efficiently, which helps it find the optimal action at each state. We derive the policy iteration update rules and modify them for the off-policy setting. In experiments on continuous RL problems, the proposed method outperforms existing on-policy and off-policy RL algorithms in both convergence speed and final performance.
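- The abstract does not spell out the regularized objective, but the sparse Tsallis entropy is a known quantity: the Tsallis entropy with entropic index q = 2, i.e. S(pi(.|s)) = 0.5 * E_{a~pi}[1 - pi(a|s)]. The sketch below illustrates how such a regularizer could enter a SAC-style actor loss; the function names, the temperature `alpha`, and the PyTorch framing are illustrative assumptions, not the paper's implementation.

```python
import torch

def sparse_tsallis_entropy(log_probs: torch.Tensor) -> torch.Tensor:
    """Sparse Tsallis entropy (Tsallis entropy with q = 2), estimated from
    the log-densities of actions sampled from the current policy:
        S(pi(.|s)) = 0.5 * E_{a~pi}[1 - pi(a|s)]
    For continuous densities pi(a|s) may exceed 1, so individual terms can
    be negative; only the expectation matters."""
    return 0.5 * (1.0 - log_probs.exp())

def actor_loss(q_values: torch.Tensor, log_probs: torch.Tensor,
               alpha: float = 0.2) -> torch.Tensor:
    """SAC-style actor loss with the Shannon-entropy bonus replaced by the
    sparse Tsallis entropy: maximize Q + alpha * S by minimizing its
    negation (a sketch, not the paper's exact update rule)."""
    return (-q_values - alpha * sparse_tsallis_entropy(log_probs)).mean()
```

- Compared with the Shannon-entropy term -log pi(a|s) used in soft actor-critic, the q = 2 term 0.5 * (1 - pi(a|s)) grows only linearly in the density, which in the discrete-action literature is what produces sparse, sparsemax-like policies that assign zero probability to clearly suboptimal actions.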
- Appears in Collections
- College of Software > Department of Artificial Intelligence > 1. Journal Articles