Deep Reinforcement Learning Paradigm for Performance Optimization of Channel Observation-Based MAC Protocols in Dense WLANs
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Ali, Rashid | - |
dc.contributor.author | Shahin, Nurullah | - |
dc.contributor.author | Bin Zikria, Yousaf | - |
dc.contributor.author | Kim, Byung-Seo | - |
dc.contributor.author | Kim, Sung Won | - |
dc.date.available | 2020-07-10T04:14:11Z | - |
dc.date.created | 2020-07-06 | - |
dc.date.issued | 2019 | - |
dc.identifier.issn | 2169-3536 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/hongik/handle/2020.sw.hongik/2750 | - |
dc.description.abstract | The potential applications of deep learning (DL) to the media access control (MAC) layer of wireless local area networks (WLANs) have already been progressively acknowledged due to their novel features for future communications. These new features challenge conventional communications theories with more sophisticated artificial intelligence-based theories. Deep reinforcement learning (DRL) is one DL technique that is motivated by behaviorist sensibility and control philosophy, where a learner achieves an objective by interacting with its environment. Next-generation dense WLANs, such as the IEEE 802.11ax high-efficiency WLAN (HEW), are expected to confront ultra-dense, diverse user environments and radically new applications. To satisfy the diverse requirements of such dense WLANs, it is anticipated that prospective WLANs will freely access the best channel resources with the assistance of self-scrutinized wireless channel condition inference. Channel collision handling is one of the major obstacles for future WLANs due to increasing user density. Therefore, in this paper, we propose DRL as an intelligent paradigm for MAC layer resource allocation in dense WLANs. One of the DRL models, Q-learning (QL), is used to optimize the performance of channel observation-based MAC protocols in dense WLANs. An intelligent QL-based resource allocation (iQRA) mechanism is proposed for MAC layer channel access in dense WLANs. The performance of the proposed iQRA mechanism is evaluated through extensive simulations in diverse WLAN environments, with throughput, channel access delay, and fairness as performance metrics. Simulation results indicate that the proposed intelligent paradigm learns diverse WLAN environments and optimizes performance compared to conventional non-intelligent MAC protocols. | - |
dc.language | English | - |
dc.language.iso | en | - |
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | - |
dc.subject | NEURAL-NETWORKS | - |
dc.title | Deep Reinforcement Learning Paradigm for Performance Optimization of Channel Observation-Based MAC Protocols in Dense WLANs | - |
dc.type | Article | - |
dc.contributor.affiliatedAuthor | Kim, Byung-Seo | - |
dc.identifier.doi | 10.1109/ACCESS.2018.2886216 | - |
dc.identifier.scopusid | 2-s2.0-85058878741 | - |
dc.identifier.wosid | 000456183800001 | - |
dc.identifier.bibliographicCitation | IEEE ACCESS, v.7, pp.3500 - 3511 | - |
dc.relation.isPartOf | IEEE ACCESS | - |
dc.citation.title | IEEE ACCESS | - |
dc.citation.volume | 7 | - |
dc.citation.startPage | 3500 | - |
dc.citation.endPage | 3511 | - |
dc.type.rims | ART | - |
dc.type.docType | Article | - |
dc.description.journalClass | 1 | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalResearchArea | Engineering | - |
dc.relation.journalResearchArea | Telecommunications | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Information Systems | - |
dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
dc.relation.journalWebOfScienceCategory | Telecommunications | - |
dc.subject.keywordPlus | NEURAL-NETWORKS | - |
dc.subject.keywordAuthor | IEEE 802.11ax | - |
dc.subject.keywordAuthor | dense WLANs | - |
dc.subject.keywordAuthor | HEW | - |
dc.subject.keywordAuthor | reinforcement learning | - |
dc.subject.keywordAuthor | Q-learning | - |
dc.subject.keywordAuthor | MAC protocols | - |
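The abstract describes applying Q-learning to channel observation-based MAC channel access. As a minimal sketch of the underlying technique, the following is a textbook tabular Q-learning loop applied to a toy contention-window (CW) selection problem. The state names, reward function, environment dynamics, and all parameters here are illustrative assumptions; this is not the paper's iQRA design.

```python
import random

# Illustrative sketch: textbook tabular Q-learning on a toy CW-selection task.
# States, actions, reward, and dynamics are assumptions for illustration only.
states = ["low", "medium", "high"]      # assumed observed channel-collision level
actions = [16, 32, 64, 128]             # assumed candidate contention-window sizes
alpha, gamma, epsilon = 0.5, 0.9, 0.1   # learning rate, discount, exploration rate

Q = {(s, a): 0.0 for s in states for a in actions}

def toy_reward(state, cw):
    """Hypothetical reward: the right CW for each collision level scores best."""
    target = {"low": 16, "medium": 64, "high": 128}[state]
    return 1.0 if cw == target else -abs(cw - target) / 128.0

def choose_action(state):
    # epsilon-greedy: mostly exploit the current best action, sometimes explore
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

random.seed(0)
state = "medium"
for _ in range(5000):
    a = choose_action(state)
    r = toy_reward(state, a)
    next_state = random.choice(states)  # toy environment dynamics
    best_next = max(Q[(next_state, b)] for b in actions)
    # Standard Q-learning update: Q <- Q + alpha * (r + gamma * max Q' - Q)
    Q[(state, a)] += alpha * (r + gamma * best_next - Q[(state, a)])
    state = next_state

for s in states:
    print(s, max(actions, key=lambda a: Q[(s, a)]))
```

After enough iterations the greedy policy recovers the reward-maximizing CW per collision level, illustrating how a learner can adapt MAC parameters from channel observations without a fixed backoff rule.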
Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.