Detailed Information

Cited 0 times in Web of Science · Cited 0 times in Scopus

DRL-OS: A Deep Reinforcement Learning-Based Offloading Scheduler in Mobile Edge Computing

Full metadata record
DC Field: Value
dc.contributor.author: Lim, Ducsun
dc.contributor.author: Lee, Wooyeob
dc.contributor.author: Kim, Won-Tae
dc.contributor.author: Joe, Inwhee
dc.date.accessioned: 2023-01-25T09:11:54Z
dc.date.available: 2023-01-25T09:11:54Z
dc.date.created: 2023-01-05
dc.date.issued: 2022-12
dc.identifier.issn: 1424-8220
dc.identifier.uri: https://scholarworks.bwise.kr/hanyang/handle/2021.sw.hanyang/182138
dc.description.abstract: Hardware bottlenecks can throttle smart device (SD) performance when executing computation-intensive and delay-sensitive applications. Hence, task offloading can be used to transfer computation-intensive tasks to an external server or processor in mobile edge computing (MEC). However, in this approach, the offloaded task can become useless when processing is significantly delayed or a deadline has expired. Because task processing via offloading is uncertain, it is challenging for each SD to make its offloading decision (whether to compute locally, offload to a remote server, or drop the task). This study proposes a deep-reinforcement-learning-based offloading scheduler (DRL-OS) that considers the energy balance in selecting how a task is performed: local computing, offloading, or dropping. The proposed DRL-OS is based on the double dueling deep Q-network (D3QN) and selects an appropriate action by learning the task size, deadline, queue, and residual battery charge. The average battery level, drop rate, and average latency of the DRL-OS were measured in simulations to analyze the scheduler's performance. The DRL-OS exhibits a lower average battery level (by up to 54%) and a lower drop rate (by up to 42.5%) than existing schemes. The scheduler also achieves a lower average latency, by 0.01 to more than 0.25 s, despite subtle case-wise differences in the average latency.
dc.language: English
dc.language.iso: en
dc.publisher: MDPI
dc.title: DRL-OS: A Deep Reinforcement Learning-Based Offloading Scheduler in Mobile Edge Computing
dc.type: Article
dc.contributor.affiliatedAuthor: Joe, Inwhee
dc.identifier.doi: 10.3390/s22239212
dc.identifier.scopusid: 2-s2.0-85143758694
dc.identifier.wosid: 000897357200001
dc.identifier.bibliographicCitation: SENSORS, v.22, no.23, pp.1-26
dc.relation.isPartOf: SENSORS
dc.citation.title: SENSORS
dc.citation.volume: 22
dc.citation.number: 23
dc.citation.startPage: 1
dc.citation.endPage: 26
dc.type.rims: ART
dc.type.docType: Article
dc.description.journalClass: 1
dc.description.isOpenAccess: Y
dc.description.journalRegisteredClass: scie
dc.description.journalRegisteredClass: scopus
dc.relation.journalResearchArea: Chemistry
dc.relation.journalResearchArea: Engineering
dc.relation.journalResearchArea: Instruments & Instrumentation
dc.relation.journalWebOfScienceCategory: Chemistry, Analytical
dc.relation.journalWebOfScienceCategory: Engineering, Electrical & Electronic
dc.relation.journalWebOfScienceCategory: Instruments & Instrumentation
dc.subject.keywordPlus: RESOURCE-ALLOCATION
dc.subject.keywordPlus: CLOUD
dc.subject.keywordPlus: DELAY
dc.subject.keywordAuthor: computation offloading
dc.subject.keywordAuthor: double dueling deep Q-network
dc.subject.keywordAuthor: energy consumption
dc.subject.keywordAuthor: mobile edge computing (MEC)
dc.subject.keywordAuthor: resource management
dc.subject.keywordAuthor: reinforcement learning
dc.identifier.url: https://www.mdpi.com/1424-8220/22/23/9212
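
The double dueling deep Q-network (D3QN) named in the abstract and author keywords combines a dueling value/advantage decomposition with double-DQN target evaluation. The sketch below is a minimal illustration of that action-selection core, not the authors' released implementation; the state layout (task size, deadline, queue length, residual battery), layer sizes, and hyperparameters are assumptions drawn only from the abstract.

```python
# Minimal D3QN-style sketch (illustrative only, not the paper's code):
# a dueling Q-network with double-DQN targets, choosing among
# {local compute, offload, drop} from a 4-dimensional device state.
import torch
import torch.nn as nn

ACTIONS = ["local", "offload", "drop"]   # action set from the abstract
STATE_DIM = 4                            # task size, deadline, queue, battery (assumed layout)

class DuelingQNet(nn.Module):
    """Dueling architecture: Q(s,a) = V(s) + A(s,a) - mean_a A(s,a)."""
    def __init__(self, state_dim: int = STATE_DIM, n_actions: int = len(ACTIONS)):
        super().__init__()
        self.feature = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU())
        self.value = nn.Linear(64, 1)              # state-value stream V(s)
        self.advantage = nn.Linear(64, n_actions)  # advantage stream A(s,a)

    def forward(self, s: torch.Tensor) -> torch.Tensor:
        h = self.feature(s)
        v, a = self.value(h), self.advantage(h)
        return v + a - a.mean(dim=-1, keepdim=True)

online, target = DuelingQNet(), DuelingQNet()
target.load_state_dict(online.state_dict())  # target net starts as a copy

def select_action(state: torch.Tensor, epsilon: float = 0.1) -> str:
    """Epsilon-greedy offloading decision for one smart device."""
    if torch.rand(()) < epsilon:
        return ACTIONS[torch.randint(len(ACTIONS), ()).item()]
    with torch.no_grad():
        return ACTIONS[online(state).argmax().item()]

def double_dqn_target(reward, next_state, done, gamma: float = 0.99):
    """Double DQN: the online net picks the argmax action, the target net evaluates it."""
    with torch.no_grad():
        best = online(next_state).argmax(dim=-1, keepdim=True)
        q_next = target(next_state).gather(-1, best).squeeze(-1)
    return reward + gamma * (1.0 - done) * q_next

# Example: normalized (task size, deadline, queue length, battery level)
state = torch.tensor([0.6, 0.3, 0.5, 0.8])
print(select_action(state))
```

In the paper's setting, the reward would encode the energy-balance and deadline considerations the abstract describes; the target computation above is included only to make the double-DQN step concrete.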
Appears in
Collections
College of Engineering (Seoul) > School of Computer Software (Seoul) > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Joe, Inwhee
COLLEGE OF ENGINEERING (SCHOOL OF COMPUTER SCIENCE)
