Optimal algorithms for stochastic multi-armed bandits with heavy tailed rewards
- Authors
- Lee, K.; Yang, H.; Lim, S.; Oh, S.
- Issue Date
- Dec-2020
- Publisher
- Neural information processing systems foundation
- Citation
- Advances in Neural Information Processing Systems, v.2020-December
- Journal Title
- Advances in Neural Information Processing Systems
- Volume
- 2020-December
- URI
- https://scholarworks.bwise.kr/cau/handle/2019.sw.cau/59357
- ISSN
- 1049-5258
- Abstract
- In this paper, we consider stochastic multi-armed bandits (MABs) with heavy-tailed rewards, whose p-th moment is bounded by a constant ν_p for 1 < p ≤ 2. First, we propose a novel robust estimator which does not require ν_p as prior information, while other existing robust estimators demand prior knowledge about ν_p. We show that the error probability of the proposed estimator decays exponentially fast. Using this estimator, we propose a perturbation-based exploration strategy and develop a generalized regret analysis scheme that provides upper and lower regret bounds by revealing the relationship between the regret and the cumulative distribution function of the perturbation. From the proposed analysis scheme, we obtain gap-dependent and gap-independent upper and lower regret bounds for various perturbations. We also find the optimal hyperparameters for each perturbation, which achieve the minimax optimal regret bound with respect to total rounds. In simulations, the proposed estimator shows favorable performance compared to existing robust estimators for various p values and, for MAB problems, the proposed perturbation strategy outperforms existing exploration methods. © 2020 Neural information processing systems foundation.
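The perturbation-based exploration described in the abstract can be sketched as follows. This is a minimal illustration only, assuming a Gaussian perturbation whose scale shrinks with the pull count and light-tailed Gaussian rewards; the paper's actual robust estimator and perturbation family differ in detail, and the function name and all parameters here are hypothetical.

```python
import math
import random

def perturbed_exploration(means, horizon, seed=0):
    """Simple perturbation-based bandit strategy on Gaussian arms.

    Illustrative sketch: each round, every arm's empirical mean is
    perturbed by Gaussian noise scaled like sqrt(1/n_i), and the arm
    with the largest perturbed score is pulled. Not the paper's method.
    """
    rng = random.Random(seed)
    k = len(means)
    counts = [0] * k      # number of pulls per arm
    sums = [0.0] * k      # cumulative reward per arm
    regret = 0.0
    best = max(means)
    for t in range(horizon):
        if t < k:
            arm = t  # pull each arm once to initialize
        else:
            # empirical mean + perturbation that shrinks with pull count
            scores = [
                sums[i] / counts[i]
                + rng.gauss(0.0, math.sqrt(1.0 / counts[i]))
                for i in range(k)
            ]
            arm = max(range(k), key=scores.__getitem__)
        reward = rng.gauss(means[arm], 1.0)  # light-tailed stand-in rewards
        counts[arm] += 1
        sums[arm] += reward
        regret += best - means[arm]
    return regret, counts

reg, pulls = perturbed_exploration([0.2, 0.5, 0.9], horizon=2000)
```

With a gap of 0.4 between the best and second-best arm, the perturbed-score rule concentrates its pulls on the best arm as the perturbation scale decays, which is the qualitative behavior the regret analysis in the paper formalizes.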
- Files in This Item
- There are no files associated with this item.
- Appears in Collections
- College of Software > Department of Artificial Intelligence > 1. Journal Articles