Detailed Information


Com-DDPG: Task Offloading Based on Multiagent Reinforcement Learning for Information-Communication-Enhanced Mobile Edge Computing in the Internet of Vehicles

Authors
Gao, Honghao; Wang, Xuejie; Wei, Wei; Al-Dulaimi, Anwer; Xu, Yueshen
Issue Date
Jan-2024
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Keywords
Task analysis; Servers; Reinforcement learning; Mobile handsets; Cloud computing; Performance evaluation; Energy consumption; Mobile edge computing; multiagent reinforcement learning; offloading strategy; wireless communication; internet of vehicles
Citation
IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, v.73, no.1, pp. 348-361
Pages
14
Journal Title
IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY
Volume
73
Number
1
Start Page
348
End Page
361
URI
https://scholarworks.bwise.kr/gachon/handle/2020.sw.gachon/91219
DOI
10.1109/TVT.2023.3309321
ISSN
0018-9545
1939-9359
Abstract
The emergence of the Internet of Vehicles (IoV) introduces challenges regarding computation-intensive and time-sensitive services for data processing and communication. Limited resource availability increases processing latency and may cause application interruption due to the mobility of vehicles. To address the real-time requirements of users and tasks, mobile edge computing (MEC), in which data are processed at the network edge, has been proposed to collaborate with the cloud to provide better performance. However, previously proposed offloading strategies fall short in addressing issues such as task dependency and resource competition. In this article, we propose a novel offloading strategy for MEC, Com-DDPG, in which multiagent reinforcement learning is used to enhance offloading performance. Within the IoV transmission radius, multiple agents work together to learn changes in the environment, such as the number of mobile devices and the task queue, and take appropriate actions in the form of a strategy for offloading to an edge server. First, we discuss models of task dependency, task priority, and resource consumption from the perspective of server clusters and multiple dependencies among tasks. In the proposed method, the communication behavior among multiple agents is formulated; then, the policy determined through reinforcement learning is executed as an offloading strategy to obtain the corresponding results. Second, to enhance the communication of information among multiple agents, a long short-term memory (LSTM) network is employed as an internal state predictor to provide a more complete environmental state, and a bidirectional recurrent neural network (BRNN) is used to learn and enhance the features obtained from the agents' communication. Finally, experiments carried out based on the Alibaba Cluster Dataset are presented. The results show that our method is superior to baseline methods in terms of energy consumption, load status, and latency.
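The abstract frames offloading as a trade-off between local execution and edge execution under latency and energy costs. The sketch below illustrates that decision structure only: it uses a one-shot greedy cost comparison with illustrative parameters (a common dynamic CPU energy model and fixed transmit power), not the paper's Com-DDPG policy, task-dependency model, or learned communication among agents.

```python
# Minimal sketch of the local-vs-edge offloading trade-off.
# All parameter values and the greedy decision rule are illustrative
# assumptions, not the Com-DDPG formulation from the paper.
from dataclasses import dataclass

@dataclass
class Task:
    cycles: float     # CPU cycles required to execute the task
    data_bits: float  # input size to transmit if the task is offloaded

def local_cost(task, f_local, kappa=1e-27, w_time=0.5, w_energy=0.5):
    """Weighted latency + energy of executing on the vehicle itself."""
    latency = task.cycles / f_local
    energy = kappa * task.cycles * f_local ** 2  # dynamic CPU energy model
    return w_time * latency + w_energy * energy

def offload_cost(task, rate_bps, f_edge, p_tx=0.5, w_time=0.5, w_energy=0.5):
    """Weighted latency + transmit energy of offloading to an edge server."""
    tx_latency = task.data_bits / rate_bps
    exec_latency = task.cycles / f_edge
    energy = p_tx * tx_latency  # vehicle only pays for transmission
    return w_time * (tx_latency + exec_latency) + w_energy * energy

def decide(task, f_local, rate_bps, f_edge):
    """Greedy one-shot decision: returns (action, cost), 0 = local, 1 = offload."""
    c_local = local_cost(task, f_local)
    c_off = offload_cost(task, rate_bps, f_edge)
    return (1, c_off) if c_off < c_local else (0, c_local)

# Example: a 2-Gcycle task, 1-GHz vehicle CPU, 20-Mbps link, 10-GHz edge CPU.
task = Task(cycles=2e9, data_bits=4e6)
action, cost = decide(task, f_local=1e9, rate_bps=20e6, f_edge=10e9)
print(action, cost)  # -> 1 0.25 (offloading wins: 0.25 vs. 2.0 locally)
```

In the paper this greedy rule is replaced by a learned multiagent policy that also accounts for task dependencies, queueing, and resource competition among vehicles.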
Files in This Item
There are no files associated with this item.
Appears in Collections
ETC > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.
