A Deep Reinforcement Learning-Based QoS Routing Protocol Exploiting Cross-Layer Design in Cognitive Radio Mobile Ad Hoc Networks
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Tran, T. | - |
dc.contributor.author | Nguyen, T. | - |
dc.contributor.author | Shim, K. | - |
dc.contributor.author | da Costa, D.B. | - |
dc.contributor.author | An, B. | - |
dc.date.accessioned | 2022-08-30T07:40:26Z | - |
dc.date.available | 2022-08-30T07:40:26Z | - |
dc.date.issued | 2022-12-01 | - |
dc.identifier.issn | 0018-9545 | - |
dc.identifier.issn | 1939-9359 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/hongik/handle/2020.sw.hongik/30285 | - |
dc.description.abstract | In this paper, we propose a novel deep reinforcement learning-based quality-of-service (QoS) routing protocol, named DRQR, which exploits cross-layer design to establish efficient QoS (EQS) routes in cognitive radio mobile ad hoc networks. An EQS route is a route with minimum end-to-end (E2E) queuing delay subject to QoS constraints such as link stability, residual energy, number of hops, and avoidance of the licensed channels of primary users. In particular, we formulate an NP-complete optimization problem whose feasible solution is an EQS route. To tackle this problem, we design a new deep reinforcement learning model that enables the DRQR protocol to establish EQS routes in real time through offline training, rather than the online training used in most studies in the literature. Moreover, the DRQR protocol guarantees high system performance. A mathematical analysis of the E2E queuing delay under the random waypoint mobility model is also provided to verify the simulation results. Numerical results show that the DRQR protocol outperforms state-of-the-art routing protocols in terms of routing delay, queuing delay, control overhead, packet delivery ratio (PDR), and energy consumption. | - |
dc.format.extent | 16 | - |
dc.language | English | - |
dc.language.iso | ENG | - |
dc.publisher | Institute of Electrical and Electronics Engineers Inc. | - |
dc.title | A Deep Reinforcement Learning-Based QoS Routing Protocol Exploiting Cross-Layer Design in Cognitive Radio Mobile Ad Hoc Networks | - |
dc.type | Article | - |
dc.publisher.location | United States | - |
dc.identifier.doi | 10.1109/TVT.2022.3196046 | - |
dc.identifier.scopusid | 2-s2.0-85135739468 | - |
dc.identifier.wosid | 000908826000055 | - |
dc.identifier.bibliographicCitation | IEEE Transactions on Vehicular Technology, v.71, no.12, pp 1 - 16 | - |
dc.citation.title | IEEE Transactions on Vehicular Technology | - |
dc.citation.volume | 71 | - |
dc.citation.number | 12 | - |
dc.citation.startPage | 1 | - |
dc.citation.endPage | 16 | - |
dc.type.docType | Article | - |
dc.description.isOpenAccess | N | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Engineering | - |
dc.relation.journalResearchArea | Telecommunications | - |
dc.relation.journalResearchArea | Transportation | - |
dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
dc.relation.journalWebOfScienceCategory | Telecommunications | - |
dc.relation.journalWebOfScienceCategory | Transportation Science & Technology | - |
dc.subject.keywordAuthor | Ad hoc networks | - |
dc.subject.keywordAuthor | cognitive mobile ad hoc networks | - |
dc.subject.keywordAuthor | cross-layer design | - |
dc.subject.keywordAuthor | Deep reinforcement learning | - |
dc.subject.keywordAuthor | Delays | - |
dc.subject.keywordAuthor | Mobile computing | - |
dc.subject.keywordAuthor | Q-learning | - |
dc.subject.keywordAuthor | QoS routing | - |
dc.subject.keywordAuthor | Quality of service | - |
dc.subject.keywordAuthor | Routing | - |
dc.subject.keywordAuthor | Routing protocols | - |
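The abstract describes learning minimum-delay routes via reinforcement learning. The following is a minimal hypothetical sketch, not the authors' DRQR implementation: it uses plain tabular Q-learning (one of the listed keywords) on a toy ad hoc topology, with a reward that penalizes per-hop queuing delay so the greedy policy converges to the minimum end-to-end-delay route. The topology, delay values, and all names (`neighbors`, `delay`, `SRC`, `DST`) are invented for illustration; the paper's actual model is a deep network trained offline under additional QoS constraints.

```python
import random

random.seed(0)

# Toy topology: node -> list of next-hop neighbors, with assumed
# per-hop queuing delays in milliseconds (all values hypothetical).
neighbors = {0: [1, 2], 1: [2, 3], 2: [3], 3: []}
delay = {(0, 1): 5.0, (0, 2): 2.0, (1, 2): 1.0, (1, 3): 4.0, (2, 3): 3.0}
SRC, DST = 0, 3

# Q-table over (node, next-hop) pairs; epsilon-greedy exploration.
Q = {(n, m): 0.0 for n, ms in neighbors.items() for m in ms}
alpha, gamma, eps = 0.5, 0.9, 0.2

for _ in range(2000):
    node = SRC
    while node != DST:
        acts = neighbors[node]
        if random.random() < eps:
            nxt = random.choice(acts)
        else:
            nxt = max(acts, key=lambda m: Q[(node, m)])
        reward = -delay[(node, nxt)]  # smaller queuing delay -> larger reward
        future = 0.0 if nxt == DST else max(Q[(nxt, m)] for m in neighbors[nxt])
        Q[(node, nxt)] += alpha * (reward + gamma * future - Q[(node, nxt)])
        node = nxt

# Extract the greedy route from the learned Q-table.
route, node = [SRC], SRC
while node != DST:
    node = max(neighbors[node], key=lambda m: Q[(node, m)])
    route.append(node)
print(route)  # the minimum-delay path 0 -> 2 -> 3 (5 ms total)
```

On this toy graph the route 0→2→3 (2 ms + 3 ms) beats both alternatives through node 1, so the learned greedy policy selects it; the paper's offline-training idea corresponds to running such updates ahead of time so route selection at run time is a cheap table (or network) lookup.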