An Enhanced Proximal Policy Optimization-Based Reinforcement Learning Method with Random Forest for Hyperparameter Optimization (open access)
- Authors
- Ma, Zhixin; Cui, Shengmin; Joe, Inwhee
- Issue Date
- Jul-2022
- Publisher
- MDPI
- Keywords
- hyperparameter optimization (HPO); proximal policy optimization; random forest; reinforcement learning
- Citation
- APPLIED SCIENCES-BASEL, v.12, no.14, pp.1 - 20
- Indexed
- SCIE; SCOPUS
- Journal Title
- APPLIED SCIENCES-BASEL
- Volume
- 12
- Number
- 14
- Start Page
- 1
- End Page
- 20
- URI
- https://scholarworks.bwise.kr/hanyang/handle/2021.sw.hanyang/186187
- DOI
- 10.3390/app12147006
- ISSN
- 2076-3417
- Abstract
- For most machine learning and deep learning models, the choice of hyperparameters has a significant impact on model performance. Deep learning and data analysis experts therefore have to spend considerable time on hyperparameter tuning when building a model for a given task. Although many algorithms have been proposed for hyperparameter optimization (HPO), these methods require the results of actual training trials at each step to guide the search. To reduce the number of trials, model-based reinforcement learning adopts a multilayer perceptron (MLP) to capture the relationship between hyperparameter settings and model performance. However, the MLP must be designed carefully because it is prone to overfitting. We therefore propose a random forest-enhanced proximal policy optimization (RFEPPO) reinforcement learning algorithm to solve the HPO problem. In addition, reinforcement learning applied to HPO encounters the sparse reward problem, which leads to slow convergence. To address this, we employ an intrinsic reward that uses the prediction error as the reward signal. Experiments carried out on nine tabular datasets and two image classification datasets demonstrate the effectiveness of our model.
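The abstract's two core ideas — a random-forest surrogate that predicts model performance from hyperparameter settings, and an intrinsic reward defined as the surrogate's prediction error — can be illustrated with a minimal sketch. This is not the authors' code; the toy dataset, hyperparameter ranges, and function names are illustrative assumptions.

```python
# Hedged sketch of the two ideas from the abstract: a random-forest surrogate
# for hyperparameter -> performance prediction, and prediction error as an
# intrinsic reward. All data and names here are illustrative, not the paper's.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Toy history of past trials: (learning rate, depth) -> validation accuracy.
X_hist = rng.uniform([1e-4, 1], [1e-1, 10], size=(50, 2))
y_hist = 0.9 - np.abs(np.log10(X_hist[:, 0]) + 2) * 0.05 + X_hist[:, 1] * 0.005

# The surrogate replaces costly real trials when scoring candidate settings.
surrogate = RandomForestRegressor(n_estimators=100, random_state=0)
surrogate.fit(X_hist, y_hist)

def intrinsic_reward(hp, true_score):
    """Surrogate prediction error as an intrinsic reward signal: large error
    means a poorly modelled region, which the policy is pushed to explore."""
    pred = surrogate.predict(np.asarray(hp).reshape(1, -1))[0]
    return abs(true_score - pred)

# Score a candidate hyperparameter setting without running a real trial.
candidate = [1e-2, 5]
predicted_acc = float(surrogate.predict([candidate])[0])
```

In the full RFEPPO setting, a PPO policy would propose `candidate` settings, and this intrinsic reward would be added to the extrinsic (performance-based) reward to densify the otherwise sparse signal.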
- Appears in
Collections - College of Engineering (Seoul) > School of Computer Software (Seoul) > 1. Journal Articles