Dynamic Multichannel Access via Multi-agent Reinforcement Learning: Throughput and Fairness Guarantees
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Sohaib, Muhammad | - |
dc.contributor.author | Jeong, Jongjin | - |
dc.contributor.author | Jeon, Sang-Woon | - |
dc.date.accessioned | 2023-04-03T10:03:27Z | - |
dc.date.available | 2023-04-03T10:03:27Z | - |
dc.date.issued | 2022-06 | - |
dc.identifier.issn | 1536-1276 | - |
dc.identifier.issn | 1558-2248 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/erica/handle/2021.sw.erica/111657 | - |
dc.description.abstract | A multichannel random access system is considered in which each user accesses a single channel among multiple orthogonal channels to communicate with an access point (AP). Users arrive at the system at random, remain active for a certain number of time slots, and then disappear from the system. Under such a dynamic network environment, we propose a distributed multichannel access protocol based on multi-agent reinforcement learning (RL) to improve both throughput and fairness between users. Unlike previous approaches that adjust channel access probabilities at each time slot, the proposed RL algorithm deterministically selects a set of channel access policies for several consecutive time slots. To effectively reduce the complexity of the proposed RL algorithm, we adopt a branching dueling Q-network architecture and propose a training methodology for producing proper Q-values under time-varying user sets. Numerical results demonstrate that the proposed scheme significantly improves both throughput and fairness. | - |
dc.format.extent | 15 | - |
dc.language | English | - |
dc.language.iso | ENG | - |
dc.publisher | IEEE | - |
dc.title | Dynamic Multichannel Access via Multi-agent Reinforcement Learning: Throughput and Fairness Guarantees | - |
dc.type | Article | - |
dc.publisher.location | United States | - |
dc.identifier.doi | 10.1109/TWC.2021.3126112 | - |
dc.identifier.scopusid | 2-s2.0-85120064345 | - |
dc.identifier.wosid | 000809406400034 | - |
dc.identifier.bibliographicCitation | IEEE Transactions on Wireless Communications, v.21, no.6, pp. 3994-4008 | - |
dc.citation.title | IEEE Transactions on Wireless Communications | - |
dc.citation.volume | 21 | - |
dc.citation.number | 6 | - |
dc.citation.startPage | 3994 | - |
dc.citation.endPage | 4008 | - |
dc.type.docType | Article | - |
dc.description.isOpenAccess | Y | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Engineering | - |
dc.relation.journalResearchArea | Telecommunications | - |
dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
dc.relation.journalWebOfScienceCategory | Telecommunications | - |
dc.subject.keywordPlus | ALLOCATION | - |
dc.subject.keywordPlus | PROTOCOLS | - |
dc.subject.keywordAuthor | deep learning | - |
dc.subject.keywordAuthor | fairness | - |
dc.subject.keywordAuthor | random access | - |
dc.subject.keywordAuthor | Reinforcement learning | - |
dc.subject.keywordAuthor | resource allocation | - |
dc.identifier.url | https://ieeexplore.ieee.org/document/9500945 | - |
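The abstract above mentions a branching dueling Q-network (BDQ) used to keep the action space tractable when a policy is selected for several consecutive time slots. The sketch below is a minimal, illustrative PyTorch implementation of such a branching dueling head, not the authors' exact architecture: the state dimension, layer sizes, and the interpretation of each branch as one upcoming time slot choosing among a few channels are all assumptions made here for illustration.

```python
# Minimal illustrative sketch of a branching dueling Q-network (BDQ) head.
# Assumptions (not from the paper): one branch per upcoming time slot, each
# branch picking one of `actions_per_branch` channels; layer sizes are arbitrary.
import torch
import torch.nn as nn


class BranchingDuelingQNet(nn.Module):
    def __init__(self, state_dim: int, num_branches: int, actions_per_branch: int):
        super().__init__()
        # Shared representation of the local observation.
        self.trunk = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
        )
        # Single state-value stream shared by all branches.
        self.value = nn.Linear(128, 1)
        # One advantage stream per action dimension (branch).
        self.advantages = nn.ModuleList(
            [nn.Linear(128, actions_per_branch) for _ in range(num_branches)]
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        h = self.trunk(state)
        v = self.value(h)                                 # (batch, 1)
        q_branches = []
        for adv_head in self.advantages:
            a = adv_head(h)                               # (batch, actions_per_branch)
            # Dueling aggregation per branch: Q_d = V + (A_d - mean(A_d)).
            q_branches.append(v + a - a.mean(dim=-1, keepdim=True))
        return torch.stack(q_branches, dim=1)             # (batch, branches, actions)


# Greedy action selection: one channel index per branch (e.g., per slot).
net = BranchingDuelingQNet(state_dim=32, num_branches=4, actions_per_branch=5)
obs = torch.randn(1, 32)
greedy_actions = net(obs).argmax(dim=-1)                  # shape (1, 4)
```

The design point the branching structure captures is that the number of output units grows linearly with the number of branches and channels, rather than exponentially as it would if every joint multi-slot action had its own Q-value output.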