Interpreting pretext tasks for active learning: a reinforcement learning approachopen access
- Authors
- Kim, Dongjoo; Lee, Minsik
- Issue Date
- Oct-2024
- Publisher
- Nature Research
- Citation
- Scientific Reports, v.14, no.1, pp 1 - 18
- Pages
- 18
- Indexed
- SCIE
SCOPUS
- Journal Title
- Scientific Reports
- Volume
- 14
- Number
- 1
- Start Page
- 1
- End Page
- 18
- URI
- https://scholarworks.bwise.kr/erica/handle/2021.sw.erica/121279
- DOI
- 10.1038/s41598-024-76864-2
- ISSN
- 2045-2322
- Abstract
- As the amount of labeled data increases, the performance of deep neural networks tends to improve. However, annotating a large volume of data can be expensive. Active learning addresses this challenge by selectively annotating unlabeled data. There have been recent attempts to incorporate self-supervised learning into active learning, but utilizing the results of self-supervised learning remains problematic: it is unclear how these results should be interpreted in the context of active learning. To address this issue, we propose a multi-armed bandit approach to handle the information provided by self-supervised learning in active learning. Furthermore, we devise a data sampling process so that reinforcement learning can be performed effectively. We evaluate the proposed method on various image classification benchmarks, including CIFAR-10, CIFAR-100, Caltech-101, SVHN, and ImageNet, where it significantly outperforms previous approaches. © The Author(s) 2024.
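- To illustrate the kind of mechanism the abstract describes, the sketch below implements a minimal UCB1 multi-armed bandit that chooses among candidate query criteria in an active-learning loop. This is not the authors' code; the arm names ("pretext_loss", "uncertainty", "diversity") and the reward signal are hypothetical stand-ins for criteria derived from pretext tasks and for the benefit observed after labeling.

```python
import math
import random

class UCB1:
    """Minimal UCB1 bandit over a fixed set of named arms."""

    def __init__(self, arms):
        self.arms = list(arms)
        self.counts = {a: 0 for a in self.arms}    # times each arm was pulled
        self.values = {a: 0.0 for a in self.arms}  # running mean reward per arm

    def select(self):
        # Pull each arm once before applying the UCB rule.
        for a in self.arms:
            if self.counts[a] == 0:
                return a
        total = sum(self.counts.values())
        # Mean reward plus an exploration bonus that shrinks with pull count.
        return max(
            self.arms,
            key=lambda a: self.values[a]
            + math.sqrt(2 * math.log(total) / self.counts[a]),
        )

    def update(self, arm, reward):
        # Incremental update of the running mean reward for this arm.
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

random.seed(0)
# Hypothetical arms: sampling criteria an active learner might weigh.
bandit = UCB1(["pretext_loss", "uncertainty", "diversity"])
# Illustrative (made-up) success probabilities standing in for, e.g.,
# the observed improvement after annotating a batch chosen by each criterion.
true_reward = {"pretext_loss": 0.6, "uncertainty": 0.8, "diversity": 0.4}
for _ in range(500):
    arm = bandit.select()
    bandit.update(arm, 1.0 if random.random() < true_reward[arm] else 0.0)
```

After enough rounds, the bandit concentrates its pulls on whichever criterion has yielded the highest average reward, which is the role the paper assigns to reinforcement learning when interpreting pretext-task outputs for sample selection.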
- Appears in
Collections - COLLEGE OF ENGINEERING SCIENCES > SCHOOL OF ELECTRICAL ENGINEERING > 1. Journal Articles
