Multi-agent deep reinforcement learning for cross-layer scheduling in mobile ad-hoc networks
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Zheng, Xinxing | - |
dc.contributor.author | Zhao, Yu | - |
dc.contributor.author | Lee, Joohyun | - |
dc.contributor.author | Chen, Wei | - |
dc.date.accessioned | 2023-11-14T01:36:47Z | - |
dc.date.available | 2023-11-14T01:36:47Z | - |
dc.date.issued | 2023-08 | - |
dc.identifier.issn | 1673-5447 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/erica/handle/2021.sw.erica/115515 | - |
dc.description.abstract | Due to the fading characteristics of wireless channels and the burstiness of data traffic, designing effective congestion-control algorithms for ad-hoc networks remains an open challenge. In this paper, we focus on congestion control that minimizes network transmission delay through flexible power control. To solve the congestion problem effectively, we propose a distributed cross-layer scheduling algorithm empowered by graph-based multi-agent deep reinforcement learning. Our algorithm adaptively adjusts the transmit power in real time based only on local information (i.e., channel state information and queue length) and local communication (i.e., information exchanged with neighbors). Moreover, the training complexity of the algorithm is low thanks to regional cooperation based on a graph attention network. In the evaluation, we show that our algorithm reduces the transmission delay of data flows under severe signal interference and drastically changing channel states, and we demonstrate its adaptability and stability across different topologies. The method is general and can be extended to various types of topologies. | - |
dc.format.extent | 11 | - |
dc.language | English | - |
dc.language.iso | ENG | - |
dc.publisher | China Institute of Communications | - |
dc.title | Multi-agent deep reinforcement learning for cross-layer scheduling in mobile ad-hoc networks | - |
dc.type | Article | - |
dc.publisher.location | China | - |
dc.identifier.doi | 10.23919/JCC.fa.2022-0496.202308 | - |
dc.identifier.scopusid | 2-s2.0-85171585464 | - |
dc.identifier.wosid | 001060514100008 | - |
dc.identifier.bibliographicCitation | China Communications, v.20, no.8, pp. 78-88 | - |
dc.citation.title | China Communications | - |
dc.citation.volume | 20 | - |
dc.citation.number | 8 | - |
dc.citation.startPage | 78 | - |
dc.citation.endPage | 88 | - |
dc.type.docType | Article | - |
dc.description.isOpenAccess | N | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Telecommunications | - |
dc.relation.journalWebOfScienceCategory | Telecommunications | - |
dc.subject.keywordPlus | CONGESTION CONTROL | - |
dc.subject.keywordPlus | DESIGN | - |
dc.subject.keywordAuthor | Ad-hoc network | - |
dc.subject.keywordAuthor | cross-layer scheduling | - |
dc.subject.keywordAuthor | multi-agent deep reinforcement learning | - |
dc.subject.keywordAuthor | interference elimination | - |
dc.subject.keywordAuthor | power control | - |
dc.subject.keywordAuthor | queue scheduling | - |
dc.subject.keywordAuthor | actor-critic methods | - |
dc.subject.keywordAuthor | Markov decision process | - |
dc.identifier.url | https://ieeexplore.ieee.org/document/10238405 | - |
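As a rough illustration of the mechanism the abstract describes, where each node picks its transmit power from its own local state (queue length, channel state) plus attention-weighted information exchanged with neighbors, here is a minimal sketch. This is not the paper's algorithm: the function names, the dot-product attention, and the placeholder linear policy are all illustrative assumptions standing in for the trained graph-attention actor.

```python
import numpy as np

rng = np.random.default_rng(0)

def attention_weights(own, neighbors):
    # Score each neighbor's state against our own state (dot-product
    # attention), then softmax so the weights sum to 1.
    scores = neighbors @ own
    exp = np.exp(scores - scores.max())
    return exp / exp.sum()

def select_power(own_state, neighbor_states, p_max=1.0):
    # Aggregate neighbor states with the attention weights, concatenate
    # with the node's own state, and squash through a toy linear "policy"
    # into a transmit power in (0, p_max). A trained actor network would
    # replace the placeholder weights `theta`.
    w = attention_weights(own_state, neighbor_states)
    agg = w @ neighbor_states
    features = np.concatenate([own_state, agg])
    theta = np.ones_like(features) / features.size  # placeholder policy
    logit = features @ theta
    return p_max / (1.0 + np.exp(-logit))           # sigmoid -> (0, p_max)

# Each local state here is [normalized queue length, channel gain]
# (the "local information" the abstract names); three neighbors.
own = np.array([0.8, 0.3])
nbrs = rng.random((3, 2))
p = select_power(own, nbrs)
```

The point of the sketch is the information flow: only local observations and neighbor exchanges enter the decision, so each node can run the policy in a fully distributed way.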