Rate Adaptation with Q-Learning in CSMA/CA Wireless Networks
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Cho, Soohyun | - |
dc.date.available | 2021-03-17T06:49:25Z | - |
dc.date.created | 2021-02-26 | - |
dc.date.issued | 2020-10 | - |
dc.identifier.issn | 1976-913X | - |
dc.identifier.uri | https://scholarworks.bwise.kr/hongik/handle/2020.sw.hongik/11516 | - |
dc.description.abstract | In this study, we propose a reinforcement learning agent to control the data transmission rates of nodes in carrier-sense multiple access with collision avoidance (CSMA/CA)-based wireless networks. We design a reinforcement learning (RL) agent based on Q-learning. The agent learns the environment using the timeout events of packets, which are locally available at data-sending nodes. The agent selects actions to control the data transmission rates of nodes, adjusting the modulation and coding scheme (MCS) levels of the data packets to effectively utilize the available bandwidth under dynamically changing channel conditions. We use the ns3-gym framework to simulate the RL agent and investigate the effects of the Q-learning parameters on its performance. The simulation results indicate that the proposed RL agent adequately adjusts the MCS levels according to changes in the network and achieves a high throughput comparable to those of existing data transmission rate adaptation schemes such as Minstrel. | - |
dc.publisher | KOREA INFORMATION PROCESSING SOC | - |
dc.title | Rate Adaptation with Q-Learning in CSMA/CA Wireless Networks | - |
dc.type | Article | - |
dc.contributor.affiliatedAuthor | Cho, Soohyun | - |
dc.identifier.doi | 10.3745/JIPS.03.0148 | - |
dc.identifier.scopusid | 2-s2.0-85099177604 | - |
dc.identifier.wosid | 000587732100005 | - |
dc.identifier.bibliographicCitation | JOURNAL OF INFORMATION PROCESSING SYSTEMS, v.16, no.5, pp.1048 - 1063 | - |
dc.relation.isPartOf | JOURNAL OF INFORMATION PROCESSING SYSTEMS | - |
dc.citation.title | JOURNAL OF INFORMATION PROCESSING SYSTEMS | - |
dc.citation.volume | 16 | - |
dc.citation.number | 5 | - |
dc.citation.startPage | 1048 | - |
dc.citation.endPage | 1063 | - |
dc.type.rims | ART | - |
dc.type.docType | Article | - |
dc.identifier.kciid | ART002642751 | - |
dc.description.journalClass | 1 | - |
dc.description.journalRegisteredClass | scopus | - |
dc.description.journalRegisteredClass | kci | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Information Systems | - |
dc.subject.keywordAuthor | CSMA/CA | - |
dc.subject.keywordAuthor | ns-3 | - |
dc.subject.keywordAuthor | ns3-gym | - |
dc.subject.keywordAuthor | Q-Learning | - |
dc.subject.keywordAuthor | Reinforcement Learning | - |
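The abstract describes a tabular Q-learning agent that adjusts MCS levels using only locally observable packet-timeout feedback. The following is a minimal illustrative sketch of that general idea, not the paper's implementation: the state space, the three up/keep/down actions, and the `toy_reward` function (success throughput vs. timeout penalty) are all hypothetical simplifications chosen here for demonstration.

```python
import random

class QLearningRateAgent:
    """Minimal tabular Q-learning agent for MCS selection (illustrative).

    State = current MCS level; actions are 0=decrease, 1=keep, 2=increase.
    Rewards are assumed to come from packet outcomes (throughput on
    success, a penalty on timeout), as in the abstract's description.
    """

    def __init__(self, n_mcs=8, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
        self.n_mcs = n_mcs
        self.alpha = alpha      # learning rate
        self.gamma = gamma      # discount factor
        self.epsilon = epsilon  # exploration probability
        self.rng = random.Random(seed)
        # Q-table: one row per MCS level, one column per action.
        self.q = [[0.0, 0.0, 0.0] for _ in range(n_mcs)]

    def select_action(self, state):
        # Epsilon-greedy action selection.
        if self.rng.random() < self.epsilon:
            return self.rng.randrange(3)
        row = self.q[state]
        return max(range(3), key=lambda a: row[a])

    def step_mcs(self, state, action):
        # Apply the action (-1, 0, +1), clamped to the valid MCS range.
        return min(self.n_mcs - 1, max(0, state + action - 1))

    def update(self, state, action, reward, next_state):
        # Standard Q-learning temporal-difference update.
        td_target = reward + self.gamma * max(self.q[next_state])
        self.q[state][action] += self.alpha * (td_target - self.q[state][action])


def toy_reward(mcs, channel_cap=4):
    # Hypothetical channel model: higher MCS yields more throughput up to
    # what the channel sustains; above that, packets time out.
    return float(mcs + 1) if mcs <= channel_cap else -5.0


# Training loop on the toy channel.
agent = QLearningRateAgent()
state = 0
for _ in range(5000):
    action = agent.select_action(state)
    next_state = agent.step_mcs(state, action)
    agent.update(state, action, toy_reward(next_state), next_state)
    state = next_state
```

With this toy reward, the greedy policy learned by the agent tends to climb toward the highest sustainable MCS level and hold there, which mirrors the behavior the abstract reports (raising the rate under good conditions, backing off when timeouts occur).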
Certain data included herein are derived from the © Web of Science of Clarivate Analytics. All rights reserved.