Com-DDPG: Task Offloading Based on Multiagent Reinforcement Learning for Information-Communication-Enhanced Mobile Edge Computing in the Internet of Vehicles
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Gao, Honghao | - |
dc.contributor.author | Wang, Xuejie | - |
dc.contributor.author | Wei, Wei | - |
dc.contributor.author | Al-Dulaimi, Anwer | - |
dc.contributor.author | Xu, Yueshen | - |
dc.date.accessioned | 2024-05-18T11:30:20Z | - |
dc.date.available | 2024-05-18T11:30:20Z | - |
dc.date.issued | 2024-01 | - |
dc.identifier.issn | 0018-9545 | - |
dc.identifier.issn | 1939-9359 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/gachon/handle/2020.sw.gachon/91219 | - |
dc.description.abstract | The emergence of the Internet of Vehicles (IoV) introduces challenges regarding computation-intensive and time-sensitive related services for data processing and communication. Limited resource availability increases the processing latency and may cause application interruption due to the mobility of vehicles. To address the real-time requirements of users and tasks, mobile edge computing (MEC), in which data are processed at the network edge, has been proposed to collaborate with the cloud to provide better performance. However, the offloading strategies proposed previously have some shortcomings in addressing issues such as task dependency and resource competition. In this article, we propose a novel offloading strategy for MEC, Com-DDPG, in which multiagent reinforcement learning is used to enhance the offloading performance. Within the IoV transmission radius, multiple agents work together to learn the changes in the environment, such as the number of mobile devices and the queue of tasks, and take appropriate action in the form of a strategy for offloading to an edge server. First, we discuss models of task dependency, task priority, and resource consumption from the perspective of server clusters and multiple dependencies among tasks. In the proposed method, the communication behavior among multiple agents is formulated; then, the policy determined through reinforcement learning is executed as an offloading strategy to obtain the corresponding results. Second, to enhance the communication of information among multiple agents, a long short-term memory (LSTM) network is employed as an internal state predictor to provide a more complete environmental state, and a bidirectional recurrent neural network (BRNN) is used to learn and enhance the features obtained from the agents' communication. Finally, experiments carried out based on the Alibaba Cluster Dataset are presented. The results show that our method is superior to baseline methods in terms of energy consumption, load status and latency. | - |
dc.format.extent | 14 | - |
dc.language | English | - |
dc.language.iso | ENG | - |
dc.publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC | - |
dc.title | Com-DDPG: Task Offloading Based on Multiagent Reinforcement Learning for Information-Communication-Enhanced Mobile Edge Computing in the Internet of Vehicles | - |
dc.type | Article | - |
dc.identifier.wosid | 001166813500040 | - |
dc.identifier.doi | 10.1109/TVT.2023.3309321 | - |
dc.identifier.bibliographicCitation | IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, v.73, no.1, pp 348 - 361 | - |
dc.description.isOpenAccess | N | - |
dc.identifier.scopusid | 2-s2.0-85169702243 | - |
dc.citation.endPage | 361 | - |
dc.citation.startPage | 348 | - |
dc.citation.title | IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY | - |
dc.citation.volume | 73 | - |
dc.citation.number | 1 | - |
dc.type.docType | Article | - |
dc.publisher.location | United States | - |
dc.subject.keywordAuthor | Task analysis | - |
dc.subject.keywordAuthor | Servers | - |
dc.subject.keywordAuthor | Reinforcement learning | - |
dc.subject.keywordAuthor | Mobile handsets | - |
dc.subject.keywordAuthor | Cloud computing | - |
dc.subject.keywordAuthor | Performance evaluation | - |
dc.subject.keywordAuthor | Energy consumption | - |
dc.subject.keywordAuthor | Mobile edge computing | - |
dc.subject.keywordAuthor | multiagent reinforcement learning | - |
dc.subject.keywordAuthor | offloading strategy | - |
dc.subject.keywordAuthor | wireless communication | - |
dc.subject.keywordAuthor | internet of vehicles | - |
dc.subject.keywordPlus | RESOURCE-ALLOCATION | - |
dc.subject.keywordPlus | DECISION-MAKING | - |
dc.subject.keywordPlus | MANAGEMENT | - |
dc.relation.journalResearchArea | Engineering | - |
dc.relation.journalResearchArea | Telecommunications | - |
dc.relation.journalResearchArea | Transportation | - |
dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
dc.relation.journalWebOfScienceCategory | Telecommunications | - |
dc.relation.journalWebOfScienceCategory | Transportation Science & Technology | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
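The abstract above describes Com-DDPG, a multiagent DDPG variant for offloading decisions. As a minimal illustration of the two updates at the heart of any DDPG-style method (the Bellman target for the critic and the Polyak soft update of target networks), the sketch below uses NumPy with illustrative names and shapes; it is a generic DDPG sketch, not the authors' implementation, and omits the paper's LSTM state predictor and BRNN communication modules.

```python
import numpy as np

def td_target(rewards, next_q, gamma=0.99, done=None):
    """Bellman target y = r + gamma * Q'(s', mu'(s')) for non-terminal steps.

    `next_q` is the target critic's value of the target actor's action
    at the next state; `done` masks out terminal transitions.
    """
    done = np.zeros_like(rewards) if done is None else done
    return rewards + gamma * (1.0 - done) * next_q

def soft_update(target_params, online_params, tau=0.005):
    """Polyak averaging: theta_target <- tau * theta + (1 - tau) * theta_target."""
    return {k: tau * online_params[k] + (1.0 - tau) * target_params[k]
            for k in target_params}

# Example: one small batch of transitions from a replay buffer
# (second transition is terminal, so its target is just the reward).
rewards = np.array([1.0, 0.5])
next_q = np.array([2.0, 4.0])
y = td_target(rewards, next_q, done=np.array([0.0, 1.0]))
# y[0] = 1.0 + 0.99 * 2.0 = 2.98; y[1] = 0.5
```

In a multiagent setting such as the one the abstract sketches, each agent would maintain its own actor/critic pair and apply these same updates, with the communicated features folded into its state.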