A Deep Reinforcement Learning Based Approach for Energy-Efficient Channel Allocation in Satellite Internet of Things
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Zhao, Baokang | - |
dc.contributor.author | Liu, Jiahao | - |
dc.contributor.author | Wei, Ziling | - |
dc.contributor.author | You, Ilsun | - |
dc.date.accessioned | 2021-08-11T08:43:53Z | - |
dc.date.available | 2021-08-11T08:43:53Z | - |
dc.date.issued | 2020 | - |
dc.identifier.issn | 2169-3536 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/sch/handle/2021.sw.sch/3722 | - |
dc.description.abstract | Recently, the Satellite Internet of Things (SIoT), a space network consisting of numerous Low Earth Orbit (LEO) satellites, has been regarded as a promising technique, since it is the only solution that provides 100% global coverage of the whole earth without any additional terrestrial infrastructure support. However, compared with Geostationary Earth Orbit (GEO) satellites, LEO satellites move very fast and cover an area for only 5-12 minutes per pass, bringing high dynamics to network access. Furthermore, to reduce cost, the power and spectrum channel resources of each LEO satellite are very limited, i.e., less than 10% of those of GEO. Designing an efficient resource allocation scheme that takes full advantage of these limited resources is therefore very challenging. Current resource allocation schemes for satellites are mostly designed for GEO, and they do not consider many LEO-specific concerns, including the constrained energy, the mobility characteristics, and the dynamics of connections and transmissions. To this end, we propose DeepCA, a novel reinforcement learning based approach for energy-efficient channel allocation in SIoT. In DeepCA, we first introduce a new sliding block scheme to facilitate modeling the dynamic features of LEO satellites, and formulate the dynamic channel allocation problem in SIoT as a Markov decision process (MDP). We then propose a deep reinforcement learning algorithm for optimal channel allocation. To accelerate the learning process of DeepCA, we represent user requests in image form to reduce the input size, and carefully divide each action into multiple mini-actions to reduce the size of the action set. Extensive simulations show that our proposed DeepCA approach saves at least 67.86% of energy consumption compared with traditional algorithms. | - |
dc.format.extent | 10 | - |
dc.language | English | - |
dc.language.iso | ENG | - |
dc.publisher | Institute of Electrical and Electronics Engineers Inc. | - |
dc.title | A Deep Reinforcement Learning Based Approach for Energy-Efficient Channel Allocation in Satellite Internet of Things | - |
dc.type | Article | - |
dc.publisher.location | United States | - |
dc.identifier.doi | 10.1109/ACCESS.2020.2983437 | - |
dc.identifier.scopusid | 2-s2.0-85083422247 | - |
dc.identifier.wosid | 000528694500015 | - |
dc.identifier.bibliographicCitation | IEEE Access, v.8, pp 62197 - 62206 | - |
dc.citation.title | IEEE Access | - |
dc.citation.volume | 8 | - |
dc.citation.startPage | 62197 | - |
dc.citation.endPage | 62206 | - |
dc.type.docType | Article | - |
dc.description.isOpenAccess | Y | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalResearchArea | Engineering | - |
dc.relation.journalResearchArea | Telecommunications | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Information Systems | - |
dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
dc.relation.journalWebOfScienceCategory | Telecommunications | - |
dc.subject.keywordAuthor | Energy efficient | - |
dc.subject.keywordAuthor | channel allocation | - |
dc.subject.keywordAuthor | artificial intelligence | - |
dc.subject.keywordAuthor | reinforcement learning | - |
dc.subject.keywordAuthor | Internet of Things | - |
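The abstract frames energy-efficient channel allocation as an MDP solved with reinforcement learning. The record does not give DeepCA's network architecture, state encoding, or hyperparameters, so the following is only a minimal illustrative sketch of the underlying idea: tabular Q-learning on a toy single-satellite MDP where the state is a channel-occupancy bitmask and the reward is the negative energy cost of the allocated channel. All constants (channel count, power levels, learning rates) are hypothetical, and tabular Q-learning stands in for the paper's deep RL algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

N_CHANNELS = 4                           # channels per satellite (toy value)
POWER = np.array([1.0, 2.0, 3.0, 4.0])   # per-channel energy cost (hypothetical)
N_STATES = 2 ** N_CHANNELS               # occupancy bitmask is the MDP state

Q = np.zeros((N_STATES, N_CHANNELS))
alpha, gamma, eps = 0.1, 0.5, 0.1        # learning rate, discount, exploration

def step(state, action):
    """Allocate channel `action`; reward is negative energy cost."""
    if state & (1 << action):            # channel already occupied: collision
        return state, -10.0              # penalty, state unchanged
    next_state = state | (1 << action)
    reward = -POWER[action]              # minimizing energy = maximizing reward
    if next_state == N_STATES - 1:       # all channels busy: pass ends, reset
        next_state = 0
    return next_state, reward

state = 0
for _ in range(20000):
    # epsilon-greedy action selection
    if rng.random() < eps:
        action = int(rng.integers(N_CHANNELS))
    else:
        action = int(np.argmax(Q[state]))
    nxt, r = step(state, action)
    # standard Q-learning update
    Q[state, action] += alpha * (r + gamma * Q[nxt].max() - Q[state, action])
    state = nxt

# With discounting, allocating the cheapest free channel first is optimal,
# so the greedy policy from the empty state should pick channel 0.
print(int(np.argmax(Q[0])))
```

The bitmask state is a crude stand-in for the image-form request representation the abstract describes; in the paper's setting, the deep network replaces the Q-table precisely because the joint request/channel state space is far too large to enumerate.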