Detailed Information


Exploration Method for Reducing Uncertainty using Q-entropy in Deep Reinforcement Learning

Authors
황성운
Issue Date
17-Jan-2018
Publisher
ICGHIT
Citation
ICGHIT Proceedings, v.1, no.1, pp.269 - 271
Journal Title
ICGHIT Proceedings
Volume
1
Number
1
Start Page
269
End Page
271
URI
https://scholarworks.bwise.kr/hongik/handle/2020.sw.hongik/4085
Abstract
In this paper, we propose a novel exploration method for Q-learning-based deep reinforcement learning. The agent decides whether to explore or exploit according to the uncertainty of the current state. To measure this uncertainty, we use the entropy of the action-values at each state: the agent explores with random actions when the entropy of the action-values is high, and acts greedily when the entropy is low. We also adopt a state visit counter to handle ambiguous states, i.e., states in which several optimal actions exist.
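The entropy-driven exploration rule described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the softmax over Q-values, the entropy threshold, the visit-count threshold, and all function names are choices made here for clarity.

```python
import numpy as np

def softmax(q):
    # Numerically stable softmax over the action-values of one state.
    z = q - np.max(q)
    e = np.exp(z)
    return e / e.sum()

def action_entropy(q):
    # Shannon entropy of the softmax distribution over action-values.
    # High entropy = the Q-values are nearly equal (uncertain state).
    p = softmax(q)
    return float(-(p * np.log(p + 1e-12)).sum())

def select_action(q, visit_count, entropy_threshold=0.5,
                  count_threshold=10, rng=None):
    """Explore randomly when the entropy of the action-values is high,
    act greedily when it is low. The visit counter handles ambiguous
    states (several near-optimal actions keep entropy high forever):
    once a state has been visited often enough, act greedily anyway.
    Thresholds here are illustrative assumptions, not from the paper."""
    rng = rng or np.random.default_rng()
    if visit_count >= count_threshold:
        return int(np.argmax(q))          # ambiguous but well-visited state
    if action_entropy(q) > entropy_threshold:
        return int(rng.integers(len(q)))  # uncertain: explore at random
    return int(np.argmax(q))              # confident: exploit greedily
```

For example, Q-values `[10.0, 0.0, 0.0]` give near-zero entropy and a greedy choice, while uniform Q-values `[1.0, 1.0, 1.0]` give entropy near log 3 and trigger random exploration until the visit counter saturates.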
Appears in
Collections
College of Science and Technology > Department of Computer and Information Communications Engineering > 1. Journal Articles

