Feature Selection Method Using Multi-Agent Reinforcement Learning Based on Guide Agents
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Kim, Minwoo | - |
dc.contributor.author | Bae, Jinhee | - |
dc.contributor.author | Wang, Bohyun | - |
dc.contributor.author | Ko, Hansol | - |
dc.contributor.author | Lim, Joon S. | - |
dc.date.accessioned | 2023-01-26T00:40:13Z | - |
dc.date.available | 2023-01-26T00:40:13Z | - |
dc.date.created | 2023-01-26 | - |
dc.date.issued | 2023-01 | - |
dc.identifier.issn | 1424-8220 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/gachon/handle/2020.sw.gachon/86771 | - |
dc.description.abstract | In this study, we propose a method that automatically finds features in a dataset that are effective for classification or prediction, using multi-agent reinforcement learning with guide agents. Each feature of the dataset is assigned a main agent and a guide agent, and these agents decide whether the feature is selected. Main agents select the optimal features, and guide agents provide the criteria for judging the main agents' actions. After the main and guide rewards for the selected features are obtained, each main agent whose action differs from that of its guide agent updates its Q-value using the learning reward delivered to the main agents. This behavior comparison lets a main agent judge whether its own action is correct without relying on other algorithms. After this process is repeated for each episode, the features are finally selected. Because the proposed method uses multiple agents, it reduces the number of actions each agent must consider and finds optimal features effectively and quickly. Comparative experiments on multiple datasets show that the proposed method selects features that are effective for classification and increases classification accuracy. | - |
dc.language | English | - |
dc.language.iso | en | - |
dc.publisher | MDPI | - |
dc.relation.isPartOf | SENSORS | - |
dc.title | Feature Selection Method Using Multi-Agent Reinforcement Learning Based on Guide Agents | - |
dc.type | Article | - |
dc.type.rims | ART | - |
dc.description.journalClass | 1 | - |
dc.identifier.wosid | 000909744500001 | - |
dc.identifier.doi | 10.3390/s23010098 | - |
dc.identifier.bibliographicCitation | SENSORS, v.23, no.1 | - |
dc.description.isOpenAccess | Y | - |
dc.identifier.scopusid | 2-s2.0-85145976590 | - |
dc.citation.title | SENSORS | - |
dc.citation.volume | 23 | - |
dc.citation.number | 1 | - |
dc.contributor.affiliatedAuthor | Kim, Minwoo | - |
dc.contributor.affiliatedAuthor | Wang, Bohyun | - |
dc.contributor.affiliatedAuthor | Ko, Hansol | - |
dc.contributor.affiliatedAuthor | Lim, Joon S. | - |
dc.type.docType | Article | - |
dc.subject.keywordAuthor | feature selection | - |
dc.subject.keywordAuthor | guide agents | - |
dc.subject.keywordAuthor | main agents | - |
dc.subject.keywordAuthor | multi-agent | - |
dc.subject.keywordAuthor | reinforcement learning (RL) | - |
dc.subject.keywordAuthor | rewards | - |
dc.subject.keywordPlus | ALGORITHM | - |
dc.relation.journalResearchArea | Chemistry | - |
dc.relation.journalResearchArea | Engineering | - |
dc.relation.journalResearchArea | Instruments & Instrumentation | - |
dc.relation.journalWebOfScienceCategory | Chemistry, Analytical | - |
dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
dc.relation.journalWebOfScienceCategory | Instruments & Instrumentation | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
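The abstract describes per-feature agent pairs: each main agent holds a two-action choice (select or drop its feature), a guide agent supplies a reference action, and only main agents that disagree with their guide update their Q-values from the episode reward. The sketch below is an illustrative reconstruction of that loop, not the authors' implementation; the epsilon-greedy policy, the reward shaping (episode score relative to the best score so far), and the rule that guide agents track the best episode's actions are all assumptions added for illustration.

```python
import random

def select_features(n_features, evaluate, episodes=50, alpha=0.1, eps=0.2):
    """Illustrative sketch of the main/guide agent feature selection loop.

    evaluate(mask) -> score in [0, 1], e.g. cross-validated accuracy of a
    classifier trained on the features where mask[i] == 1.
    """
    # Each main agent keeps Q-values for its two actions: 0 = drop, 1 = select.
    q = [[0.0, 0.0] for _ in range(n_features)]
    # Guide agents' reference actions (assumed: start with all features selected).
    guide = [1] * n_features
    best_mask = [1] * n_features
    best_score = evaluate(best_mask)

    for _ in range(episodes):
        # Each main agent acts epsilon-greedily on its own Q-values.
        mask = [random.randrange(2) if random.random() < eps
                else int(q[i][1] >= q[i][0])
                for i in range(n_features)]
        score = evaluate(mask)
        reward = score - best_score  # assumed reward: improvement over best so far

        # Only main agents whose action differs from their guide agent learn,
        # as described in the abstract's behavior-comparison step.
        for i in range(n_features):
            if mask[i] != guide[i]:
                q[i][mask[i]] += alpha * (reward - q[i][mask[i]])

        # Assumed guide update: guides adopt the actions of the best episode.
        if score > best_score:
            best_mask, best_score = mask[:], score
            guide = mask[:]

    return best_mask, best_score
```

Because each agent owns a single feature, its action space has only two actions regardless of dataset size, which is the source of the efficiency claim in the abstract.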