Detailed Information


Collision Observation-Based Optimization of Low-Power and Lossy IoT Network Using Reinforcement Learning

Authors
Musaddiq, Arslan; Ali, Rashid; Choi, Jin-Ghoo; Kim, Byung-Seo; Kim, Sung-Won
Issue Date
2021
Publisher
TECH SCIENCE PRESS
Keywords
Internet of Things; MAC protocols; Q-learning; Reinforcement learning; RPL
Citation
CMC-COMPUTERS MATERIALS & CONTINUA, v.67, no.1, pp. 799-814
Pages
16
Journal Title
CMC-COMPUTERS MATERIALS & CONTINUA
Volume
67
Number
1
Start Page
799
End Page
814
URI
https://scholarworks.bwise.kr/hongik/handle/2020.sw.hongik/31958
DOI
10.32604/cmc.2021.014751
ISSN
1546-2218
1546-2226
Abstract
The Internet of Things (IoT) has numerous applications in every domain, e.g., smart cities, where it provides intelligent services for sustainability. Next-generation IoT networks are expected to be densely deployed in resource-constrained and lossy environments. Densely deployed nodes producing radically heterogeneous traffic patterns cause congestion and collisions in the network. At the medium access control (MAC) layer, mitigating channel collisions remains one of the main challenges for future IoT networks. Similarly, the standardized network layer uses a ranking mechanism based on hop counts and expected transmission counts (ETX), which often does not adapt to the dynamic and lossy environment and therefore degrades performance. The ranking mechanism also requires large control overheads to update rank information. Resource-constrained IoT devices operating in a low-power and lossy network (LLN) environment need an efficient solution to these problems. Reinforcement learning (RL) algorithms such as Q-learning have recently been utilized to solve learning problems on LLN devices such as sensors. Thus, in this paper, an RL-based optimization of dense LLN IoT devices under heavy heterogeneous traffic is devised. The proposed protocol learns collision information from the MAC layer and makes intelligent decisions at the network layer. It also enhances the operation of the trickle timer algorithm. A Q-learning model is employed to adaptively learn the channel collision probability and the network-layer ranking states with an accumulated reward function. In simulations using Contiki 3.0 Cooja, the proposed intelligent scheme achieves a lower packet loss ratio, higher throughput, lower control overheads, and lower energy consumption than other state-of-the-art mechanisms.
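
The abstract describes a Q-learning model that learns the channel collision probability at the MAC layer and uses it for network-layer (RPL) decisions. The sketch below is a minimal, illustrative tabular Q-learning loop in Python built around that idea only; the state discretization, reward shape, candidate-parent actions, and hyper-parameters are assumptions for illustration and are not taken from the paper, whose implementation targets Contiki 3.0 / Cooja.

# Minimal sketch (not the authors' implementation): tabular Q-learning in
# which a node uses MAC-layer collision feedback to score candidate RPL
# parents. All names, the reward shape, and the hyper-parameters below are
# illustrative assumptions.

import random
from collections import defaultdict

ALPHA = 0.1    # learning rate (assumed)
GAMMA = 0.9    # discount factor (assumed)
EPSILON = 0.1  # exploration probability (assumed)

# Q[(state, action)] -> estimated value of choosing `action` (a candidate
# parent) when the node observes `state` (a discretized collision level).
Q = defaultdict(float)

def discretize_collisions(collision_prob):
    """Map an observed MAC-layer collision probability to a coarse state."""
    if collision_prob < 0.1:
        return "low"
    if collision_prob < 0.4:
        return "medium"
    return "high"

def reward(collision_prob, etx):
    """Illustrative reward: penalize collisions and high-ETX links."""
    return -(collision_prob + 0.5 * etx)

def choose_parent(state, candidates):
    """Epsilon-greedy selection over candidate parents."""
    if random.random() < EPSILON:
        return random.choice(candidates)
    return max(candidates, key=lambda c: Q[(state, c)])

def update(state, action, r, next_state, candidates):
    """Standard Q-learning update, accumulating discounted reward."""
    best_next = max(Q[(next_state, c)] for c in candidates)
    Q[(state, action)] += ALPHA * (r + GAMMA * best_next - Q[(state, action)])

# Example usage with hypothetical values:
state = discretize_collisions(0.35)          # -> "medium"
parents = ["parent_A", "parent_B"]
chosen = choose_parent(state, parents)
r = reward(collision_prob=0.35, etx=1.8)
update(state, chosen, r, discretize_collisions(0.12), parents)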
Files in This Item
There are no files associated with this item.
Appears in
Collections
Department of General Studies > Department of General Studies > 1. Journal Articles
Graduate School > Software and Communications Engineering > 1. Journal Articles



Related Researcher

Kim, Sung Won
Department of General Studies (Department of General Studies)
