Priority-Aware Actuation Update Scheme in Heterogeneous Industrial Networks
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Kyung, Yeunwoong | - |
dc.contributor.author | Sung, Jihoon | - |
dc.contributor.author | Ko, Haneul | - |
dc.contributor.author | Song, Taewon | - |
dc.contributor.author | Kim, Youngjun | - |
dc.date.accessioned | 2024-06-11T07:03:01Z | - |
dc.date.available | 2024-06-11T07:03:01Z | - |
dc.date.issued | 2024-01 | - |
dc.identifier.issn | 1424-8220 | - |
dc.identifier.issn | 1424-3210 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/sch/handle/2021.sw.sch/26011 | - |
dc.description.abstract | In heterogeneous wireless networked control systems (WNCSs), the age of information (AoI) of the actuation update and the actuation update cost are important performance metrics. To reduce the monetary cost, the control system can wait until a WiFi network becomes available to the actuator and then deliver the update opportunistically over WiFi, but this increases the AoI of the actuation update. In addition, since AoI requirements differ according to control priority (i.e., how robust the actuation update's AoI must be), these requirements need to be considered when delivering the actuation update. To jointly consider the monetary cost and the priority-weighted AoI, this paper proposes a priority-aware actuation update scheme (PAUS) in which the control system decides whether to deliver or delay the actuation update to the actuator. For the optimal decision, we formulate a Markov decision process model and derive the optimal policy via Q-learning, aiming to maximize the average reward, which balances the monetary cost against the priority-weighted AoI. Simulation results demonstrate that PAUS outperforms the comparison schemes in terms of average reward under various settings. | - |
dc.language | English | - |
dc.language.iso | ENG | - |
dc.publisher | MDPI | - |
dc.title | Priority-Aware Actuation Update Scheme in Heterogeneous Industrial Networks | - |
dc.type | Article | - |
dc.publisher.location | Switzerland | - |
dc.identifier.doi | 10.3390/s24020357 | - |
dc.identifier.scopusid | 2-s2.0-85183228888 | - |
dc.identifier.wosid | 001151484000001 | - |
dc.identifier.bibliographicCitation | SENSORS, v.24, no.2 | - |
dc.citation.title | SENSORS | - |
dc.citation.volume | 24 | - |
dc.citation.number | 2 | - |
dc.type.docType | Article | - |
dc.description.isOpenAccess | Y | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Chemistry | - |
dc.relation.journalResearchArea | Engineering | - |
dc.relation.journalResearchArea | Instruments & Instrumentation | - |
dc.relation.journalWebOfScienceCategory | Chemistry, Analytical | - |
dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
dc.relation.journalWebOfScienceCategory | Instruments & Instrumentation | - |
dc.subject.keywordPlus | CONTROL-SYSTEMS | - |
dc.subject.keywordPlus | OPTIMIZATION | - |
dc.subject.keywordAuthor | actuation update | - |
dc.subject.keywordAuthor | age of information | - |
dc.subject.keywordAuthor | industrial networks | - |
dc.subject.keywordAuthor | wireless networked control systems | - |
dc.subject.keywordAuthor | Markov decision process | - |
dc.subject.keywordAuthor | Q-learning | - |
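The abstract describes casting the deliver-or-delay decision as a Markov decision process solved with tabular Q-learning. As a rough illustration only, the sketch below shows the general shape of such a scheme: the state, the reward form, and every constant are assumptions for this toy, not the paper's actual model.

```python
import random

# Toy sketch of the deliver/delay decision described in the abstract.
# The state (aoi, wifi_available, priority), the reward form, and all
# constants below are illustrative assumptions, not the paper's model.

AOI_MAX = 10
ACTIONS = (0, 1)                      # 0: delay the update, 1: deliver it now
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
CELLULAR_COST, WIFI_COST = 1.0, 0.1   # assumed relative monetary costs
P_WIFI = 0.3                          # assumed per-slot WiFi availability

Q = {}  # tabular action-value function, keyed by (state, action)

def q(state, action):
    return Q.get((state, action), 0.0)

def step(state, action):
    """Assumed one-slot dynamics: delivering resets the AoI but costs money."""
    aoi, wifi, priority = state
    if action == 1:  # deliver over WiFi if available, otherwise over cellular
        reward = -(WIFI_COST if wifi else CELLULAR_COST) - priority * aoi
        next_aoi = 0
    else:            # delay: no monetary cost, but the AoI penalty grows
        reward = -priority * aoi
        next_aoi = min(aoi + 1, AOI_MAX)
    next_wifi = random.random() < P_WIFI
    return reward, (next_aoi, next_wifi, priority)

def train(episodes=2000, horizon=50):
    for _ in range(episodes):
        state = (random.randint(0, AOI_MAX),
                 random.random() < P_WIFI,
                 random.choice((1, 2)))        # assumed priority classes
        for _ in range(horizon):
            if random.random() < EPSILON:      # epsilon-greedy exploration
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q(state, a))
            reward, nxt = step(state, action)
            target = reward + GAMMA * max(q(nxt, a) for a in ACTIONS)
            Q[(state, action)] = q(state, action) + ALPHA * (target - q(state, action))
            state = nxt
```

After training, the greedy policy tends to delay at low AoI (delaying is free) and deliver at high AoI, more aggressively for high-priority states, which is the cost/AoI balance the abstract refers to.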