Detailed Information

Cited 0 times in Web of Science · Cited 0 times in Scopus

Deep-reinforcement-learning-based range-adaptive distributed power control for cellular-V2X (open access)

Authors
Yang, Wooyeol; Jo, Han-Shin
Issue Date
Aug-2023
Publisher
The Korean Institute of Communications and Information Sciences (KICS)
Keywords
C-V2X; Distributed congestion control; Deep reinforcement learning; Packet delivery ratio; Power control
Citation
ICT Express, v.9, no.4, pp. 648-655
Indexed
SCIE
SCOPUS
KCI
Journal Title
ICT Express
Volume
9
Number
4
Start Page
648
End Page
655
URI
https://scholarworks.bwise.kr/hanyang/handle/2021.sw.hanyang/192178
DOI
10.1016/j.icte.2022.07.008
ISSN
2405-9595
Abstract
A distributed congestion control scheme must adapt to varying target communication ranges as cellular V2X (C-V2X) evolves to support flexible coverage for diverse service scenarios. This study proposes range-adaptive distributed power control (Ra-DPC) based on deep reinforcement learning (DRL) with the Monte Carlo policy gradient algorithm. A key finding is that the agents learn Ra-DPC more effectively when the cumulative interference power of the subchannels, rather than the channel busy ratio, is adopted as the state of the DRL model. The proposed Ra-DPC algorithm outperforms existing technologies in energy efficiency and packet delivery ratio. © 2022 The Authors. Published by Elsevier B.V. on behalf of The Korean Institute of Communications and Information Sciences. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
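
As a rough illustration of the approach described in the abstract, the Python sketch below implements a REINFORCE-style (Monte Carlo policy gradient) transmit-power controller whose state is the per-subchannel cumulative interference power. This is not the authors' implementation: the toy environment, reward shaping, subchannel count, power levels, and hyperparameters are all illustrative assumptions.

# Hedged sketch (not the paper's code): Monte Carlo policy-gradient power control
# with per-subchannel cumulative interference power as the DRL state.
import numpy as np

N_SUBCHANNELS = 4                   # assumed number of sensed subchannels
POWER_LEVELS_DBM = [10, 17, 23]     # assumed discrete transmit-power actions (dBm)
LR, GAMMA = 1e-2, 0.99              # assumed learning rate and discount factor

rng = np.random.default_rng(0)
# Linear softmax policy: logits = W @ state + b
W = rng.normal(scale=0.1, size=(len(POWER_LEVELS_DBM), N_SUBCHANNELS))
b = np.zeros(len(POWER_LEVELS_DBM))

def policy(state):
    """Softmax over power levels given per-subchannel interference power (dBm)."""
    logits = W @ state + b
    logits -= logits.max()
    p = np.exp(logits)
    return p / p.sum()

def sample_episode(T=50):
    """Roll out one episode in a toy interference environment (assumption)."""
    states, actions, rewards = [], [], []
    for _ in range(T):
        s = rng.uniform(-110.0, -60.0, size=N_SUBCHANNELS)   # interference in dBm
        p = policy(s)
        a = rng.choice(len(POWER_LEVELS_DBM), p=p)
        # Toy reward: trade off packet delivery (helped by power) against energy use.
        reward = 1.0 / (1.0 + np.exp(-(POWER_LEVELS_DBM[a] + s.mean() + 80) / 5)) \
                 - 0.01 * POWER_LEVELS_DBM[a]
        states.append(s); actions.append(a); rewards.append(reward)
    return states, actions, rewards

def reinforce_update(states, actions, rewards):
    """Monte Carlo policy-gradient update over one complete episode."""
    global W, b
    G, returns = 0.0, []
    for r in reversed(rewards):      # discounted return-to-go
        G = r + GAMMA * G
        returns.append(G)
    returns.reverse()
    for s, a, G in zip(states, actions, returns):
        p = policy(s)
        grad_logits = -p
        grad_logits[a] += 1.0        # d log pi(a|s) / d logits for softmax policy
        W += LR * G * np.outer(grad_logits, s)
        b += LR * G * grad_logits

for episode in range(200):
    reinforce_update(*sample_episode())

The sketch only shows the learning loop; in the paper's setting the state, reward, and action space would be tied to the C-V2X resource-allocation and target-range requirements rather than this toy environment.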
Appears in Collections
College of Engineering (Seoul) > Department of Automotive Engineering (Seoul) > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Jo, Han-Shin
College of Engineering (Department of Automotive Engineering)
