An optimal resource assignment and mode selection for vehicular communication using proximal on-policy scheme
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Budhiraja, Ishan | - |
dc.contributor.author | Alphy, Anna | - |
dc.contributor.author | Pandey, Pawan | - |
dc.contributor.author | Garg, Sahil | - |
dc.contributor.author | Choi, Bong Jun | - |
dc.contributor.author | Hassan, Mohammad Mehedi | - |
dc.date.accessioned | 2024-08-01T06:30:47Z | - |
dc.date.available | 2024-08-01T06:30:47Z | - |
dc.date.issued | 2024-11 | - |
dc.identifier.issn | 1110-0168 | - |
dc.identifier.issn | 2090-2670 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/ssu/handle/2018.sw.ssu/49913 | - |
dc.description.abstract | Vehicle-to-everything (V2X) communication is essential in 5G and upcoming networks, as it enables seamless interaction between vehicles and infrastructure and ensures the reliable transmission of critical, time-sensitive data. Challenges such as unstable links in highly mobile vehicular networks, limited channel state information, high transmission overhead, and significant communication costs hinder vehicle-to-vehicle (V2V) communication. To tackle these issues, a unified approach based on distributed deep reinforcement learning is proposed to enhance overall network performance while meeting quality of service (QoS), latency, and rate requirements. Because the problem is NP-hard and non-convex, a machine learning framework based on the Markov decision process (MDP) is adopted, which enables the formulation of a reward function and the selection of optimal actions. Furthermore, a spectrum allocation framework employing multi-agent deep reinforcement learning (MADRL) is introduced. The deep deterministic policy gradient (DDPG) within this framework allows historical data to be exchanged globally during the primary learning phase, removing the need for signaling interaction and manual intervention in optimizing system efficiency. The data transmission policy follows an augmented on-policy scheme, the proximal on-policy scheme (POPS), which reduces computational complexity during learning; the update is bounded using a clipped surrogate technique in the learning phase. Simulation results show that the proposed method outperforms existing decentralized schemes, achieving a higher average data transmission rate and ensuring QoS satisfaction. | - |
dc.format.extent | 12 | - |
dc.language | English | - |
dc.language.iso | ENG | - |
dc.publisher | ELSEVIER | - |
dc.title | An optimal resource assignment and mode selection for vehicular communication using proximal on-policy scheme | - |
dc.type | Article | - |
dc.identifier.doi | 10.1016/j.aej.2024.07.010 | - |
dc.identifier.bibliographicCitation | ALEXANDRIA ENGINEERING JOURNAL, v.107, pp 268 - 279 | - |
dc.identifier.wosid | 001274337500001 | - |
dc.identifier.scopusid | 2-s2.0-85198999436 | - |
dc.citation.endPage | 279 | - |
dc.citation.startPage | 268 | - |
dc.citation.title | ALEXANDRIA ENGINEERING JOURNAL | - |
dc.citation.volume | 107 | - |
dc.identifier.url | https://www.sciencedirect.com/science/article/pii/S1110016824007312?via%3Dihub | - |
dc.publisher.location | Netherlands | - |
dc.type.docType | Article | - |
dc.description.isOpenAccess | Y | - |
dc.subject.keywordAuthor | DRL | - |
dc.subject.keywordAuthor | DDPG | - |
dc.subject.keywordAuthor | MDP | - |
dc.subject.keywordAuthor | POPS | - |
dc.subject.keywordAuthor | V2X | - |
dc.subject.keywordPlus | POWER ALLOCATION | - |
dc.subject.keywordPlus | REINFORCEMENT | - |
dc.relation.journalResearchArea | Engineering | - |
dc.relation.journalWebOfScienceCategory | Engineering, Multidisciplinary | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
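The abstract's "clipped surrogate technique" refers to the objective used in proximal on-policy methods to bound each policy update. A minimal sketch follows, assuming standard PPO-style clipping; the function name, epsilon value, and example inputs are illustrative, not the paper's exact POPS implementation:

```python
import numpy as np

def clipped_surrogate(ratio, advantage, eps=0.2):
    """Clipped surrogate objective (to be maximized).

    ratio:     pi_new(a|s) / pi_old(a|s) probability ratios
    advantage: estimated advantages A(s, a)
    eps:       clipping range (0.2 is a common default; illustrative here)
    """
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    # The element-wise minimum bounds how far one update can move the
    # policy, which stabilizes learning at low computational cost.
    return float(np.mean(np.minimum(unclipped, clipped)))

# A ratio of 1.5 with positive advantage is clipped back to 1 + eps = 1.2
print(clipped_surrogate(np.array([1.5]), np.array([1.0])))  # 1.2
```

Clipping replaces the KL-divergence constraint of trust-region methods with a simple elementwise operation, which is the source of the reduced learning-phase complexity the abstract claims.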