DRL-Based Backbone SDN Control Methods in UAV-Assisted Networks for Computational Resource Efficiency
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Song, Inseok | - |
dc.contributor.author | Tam, Prohim | - |
dc.contributor.author | Kang, Seungwoo | - |
dc.contributor.author | Ros, Seyha | - |
dc.contributor.author | Kim, Seokhoon | - |
dc.date.accessioned | 2023-12-14T06:01:46Z | - |
dc.date.available | 2023-12-14T06:01:46Z | - |
dc.date.issued | 2023-07 | - |
dc.identifier.issn | 2079-9292 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/sch/handle/2021.sw.sch/25341 | - |
dc.description.abstract | The limited coverage of mobile edge computing (MEC) necessitates cooperation with unmanned aerial vehicles (UAVs) to leverage advanced features for future computation-intensive and mission-critical applications. Moreover, the task-offloading workflow in software-defined networking (SDN)-enabled 5G is a significant challenge in UAV-MEC networks. This paper proposes deep reinforcement learning (DRL)-based SDN control methods for improving computing resource efficiency. The DRL-based SDN controller, termed DRL-SDNC, allocates computational resources, bandwidth, and storage based on task requirements, upper-bound tolerable delays, and network conditions, using the UAV system architecture for task exchange between MEC servers. DRL-SDNC configures rule installation based on state observations and agent evaluation indicators, such as network congestion, user equipment computational capabilities, and energy efficiency. This paper also proposes a deep network training architecture for DRL-SDNC, enabling interactive and autonomous policy enforcement. The agent learns from the UAV-MEC environment by gathering experience and updates its parameters using optimization methods. DRL-SDNC collaboratively adjusts hyperparameters and the network architecture to enhance learning efficiency. Compared with baseline schemes, simulation results demonstrate the effectiveness of the proposed approach in optimizing resource efficiency and achieving satisfactory quality of service through efficient utilization of computing and communication resources in UAV-assisted networking environments. | - |
dc.language | English | - |
dc.language.iso | ENG | - |
dc.publisher | MDPI AG | - |
dc.title | DRL-Based Backbone SDN Control Methods in UAV-Assisted Networks for Computational Resource Efficiency | - |
dc.type | Article | - |
dc.publisher.location | Switzerland | - |
dc.identifier.doi | 10.3390/electronics12132984 | - |
dc.identifier.scopusid | 2-s2.0-85164808423 | - |
dc.identifier.wosid | 001031189600001 | - |
dc.identifier.bibliographicCitation | Electronics (Basel), v.12, no.13 | - |
dc.citation.title | Electronics (Basel) | - |
dc.citation.volume | 12 | - |
dc.citation.number | 13 | - |
dc.type.docType | Article | - |
dc.description.isOpenAccess | Y | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalResearchArea | Engineering | - |
dc.relation.journalResearchArea | Physics | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Information Systems | - |
dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
dc.relation.journalWebOfScienceCategory | Physics, Applied | - |
dc.subject.keywordPlus | MOBILE | - |
dc.subject.keywordPlus | ALLOCATION | - |
dc.subject.keywordPlus | 5G | - |
dc.subject.keywordAuthor | computational resource efficiency | - |
dc.subject.keywordAuthor | deep reinforcement learning | - |
dc.subject.keywordAuthor | mobile edge computing | - |
dc.subject.keywordAuthor | software-defined networking | - |
dc.subject.keywordAuthor | unmanned aerial vehicles | - |
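The abstract describes an agent that learns, from experience in the UAV-MEC environment, where to place tasks so as to meet delay bounds efficiently. The following is a minimal illustrative sketch of that general idea only, assuming a toy setting: a tabular epsilon-greedy learner choosing among hypothetical MEC/UAV nodes with a negative-latency reward. The node capacities, state, and parameters are all invented for illustration; the paper's DRL-SDNC uses a deep network and a richer observation space (congestion, energy efficiency, tolerable delays).

```python
import random

random.seed(0)

N_NODES = 3                      # candidate MEC/UAV offload targets (hypothetical)
CAPACITY = [4.0, 2.0, 1.0]       # compute capacity per node (hypothetical units)

def latency(node, load):
    """Task completion latency: queueing load divided by node capacity."""
    return (1.0 + load[node]) / CAPACITY[node]

def train(episodes=2000, eps=0.1, alpha=0.2):
    """Epsilon-greedy value learning over offload actions (single-state toy)."""
    q = [0.0] * N_NODES
    for _ in range(episodes):
        load = [random.randint(0, 2) for _ in range(N_NODES)]  # random background load
        if random.random() < eps:                 # explore
            a = random.randrange(N_NODES)
        else:                                     # exploit current estimate
            a = max(range(N_NODES), key=lambda i: q[i])
        r = -latency(a, load)                     # reward: negative latency
        q[a] += alpha * (r - q[a])                # incremental value update
    return q

q = train()
best = max(range(N_NODES), key=lambda i: q[i])
print("learned offload preference: node", best)
```

Under these toy assumptions the agent settles on the highest-capacity node, since its expected latency is lowest; the paper's contribution lies in scaling this experience-driven update loop to deep networks and real SDN rule installation.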
Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.