Detailed Information

Deep Reinforcement Learning-Based Task Offloading and Resource Allocation for Industrial IoT in MEC Federation System (Open Access)

Authors
Do, Huong Mai; Tran, Tuan Phong; Yoo, Myungsik
Issue Date
Aug-2023
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Keywords
MEC federation; IIoT; task offloading; resource allocation; Markov decision process; deep reinforcement learning
Citation
IEEE ACCESS, v.11, pp.83150 - 83170
Journal Title
IEEE ACCESS
Volume
11
Start Page
83150
End Page
83170
URI
http://scholarworks.bwise.kr/ssu/handle/2018.sw.ssu/44255
DOI
10.1109/ACCESS.2023.3302518
ISSN
2169-3536
Abstract
The rapid growth of the Internet of Things (IoT) has resulted in the development of intelligent industrial systems known as the Industrial IoT (IIoT). These systems integrate smart devices, sensors, cameras, and 5G technologies to enable automated data gathering and analysis, boost production efficiency, and overcome scalability issues. However, IoT devices have limited computing power, memory, and battery capacity. To address these challenges, mobile edge computing (MEC) has been introduced into IIoT systems to reduce the computational burden on the devices. While the dedicated MEC paradigm limits optimal resource utilization and load balancing, the MEC federation can potentially overcome these drawbacks. However, previous studies have relied on idealized assumptions when developing optimal models, raising concerns about their practical applicability. In this study, we investigated the joint offloading decision and resource allocation problem for MEC federation in the IIoT. Specifically, an optimization model was constructed that accounts for the real-world factors influencing system performance. To minimize the total energy-delay cost, the original problem was transformed into a Markov decision process. Considering the dynamics and continuity of task generation, we addressed the Markov decision process using a deep reinforcement learning method. We propose a resource allocation scheme based on the deep deterministic policy gradient algorithm with prioritized experience replay (DDPG-PER), which can handle high-dimensional continuous action and state spaces. The simulation results indicate that the proposed approach effectively minimizes the energy-delay cost of tasks.
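For illustration only, the sketch below shows a minimal prioritized experience replay buffer of the kind used in DDPG-PER training. It is not the authors' implementation: the class name, capacity, and the alpha/beta values are illustrative assumptions. In the paper's setting, the stored transitions would come from the offloading and resource-allocation Markov decision process, with the reward reflecting the negative energy-delay cost.

```python
# Minimal sketch of prioritized experience replay (PER) for a DDPG-style agent.
# Hypothetical names and hyperparameters; not taken from the cited paper.
import numpy as np

class PrioritizedReplayBuffer:
    """Stores (state, action, reward, next_state) transitions and samples them
    with probability proportional to their TD-error-based priority."""

    def __init__(self, capacity=10000, alpha=0.6):
        self.capacity = capacity                      # max stored transitions
        self.alpha = alpha                            # priority exponent
        self.buffer = []                              # transition storage
        self.priorities = np.zeros(capacity, dtype=np.float32)
        self.pos = 0                                  # next write index (circular)

    def add(self, transition):
        # New transitions get the current maximum priority so they are
        # sampled at least once before their TD error is known.
        max_prio = self.priorities.max() if self.buffer else 1.0
        if len(self.buffer) < self.capacity:
            self.buffer.append(transition)
        else:
            self.buffer[self.pos] = transition
        self.priorities[self.pos] = max_prio
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size=64, beta=0.4):
        prios = self.priorities[:len(self.buffer)] ** self.alpha
        probs = prios / prios.sum()
        idx = np.random.choice(len(self.buffer), batch_size, p=probs)
        # Importance-sampling weights correct the bias from non-uniform
        # sampling; beta is typically annealed toward 1 during training.
        weights = (len(self.buffer) * probs[idx]) ** (-beta)
        weights /= weights.max()
        batch = [self.buffer[i] for i in idx]
        return batch, idx, weights

    def update_priorities(self, idx, td_errors, eps=1e-6):
        # Priorities track the magnitude of the critic's TD error.
        self.priorities[idx] = np.abs(td_errors) + eps
```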
Appears in Collections
ETC > 1. Journal Articles

Related Researcher
Yoo, Myungsik
College of Information Technology (Department of Electronic Engineering)
