Handover Decision Making for Dense HetNets: A Reinforcement Learning Approach
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Song, Yujae | - |
dc.contributor.author | Lim, Sung Hoon | - |
dc.contributor.author | Jeon, Sang-Woon | - |
dc.date.accessioned | 2023-05-03T09:33:27Z | - |
dc.date.available | 2023-05-03T09:33:27Z | - |
dc.date.issued | 2023-04 | - |
dc.identifier.issn | 2169-3536 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/erica/handle/2021.sw.erica/112547 | - |
dc.description.abstract | In this paper, we consider the problem of handover decision making in a dense heterogeneous network with a macro base station and multiple small base stations. We propose a deep Q-learning based algorithm that efficiently minimizes the overall energy consumption by taking into account both the energy consumption from transmission and overheads, and various network information such as channel conditions and causal association information. The proposed algorithm is designed based on the centralized training with decentralized execution (CTDE) framework, in which a centralized training agent manages the replay buffer for training its deep Q-network by gathering state, action, and reward information reported from the distributed agents that execute the actions. We perform several numerical evaluations and demonstrate that the proposed algorithm provides significant energy savings over other contemporary mechanisms depending on overhead costs, especially when additional energy consumption is required for the handover procedure. | - |
dc.format.extent | 15 | - |
dc.language | English | - |
dc.language.iso | ENG | - |
dc.publisher | Institute of Electrical and Electronics Engineers Inc. | - |
dc.title | Handover Decision Making for Dense HetNets: A Reinforcement Learning Approach | - |
dc.type | Article | - |
dc.publisher.location | United States | - |
dc.identifier.doi | 10.1109/ACCESS.2023.3254557 | - |
dc.identifier.scopusid | 2-s2.0-85149808489 | - |
dc.identifier.wosid | 000953396500001 | - |
dc.identifier.bibliographicCitation | IEEE Access, v.11, pp. 24737-24751 | - |
dc.citation.title | IEEE Access | - |
dc.citation.volume | 11 | - |
dc.citation.startPage | 24737 | - |
dc.citation.endPage | 24751 | - |
dc.type.docType | Article | - |
dc.description.isOpenAccess | Y | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalResearchArea | Engineering | - |
dc.relation.journalResearchArea | Telecommunications | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Information Systems | - |
dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
dc.relation.journalWebOfScienceCategory | Telecommunications | - |
dc.subject.keywordPlus | USER ASSOCIATION | - |
dc.subject.keywordPlus | RESOURCE-ALLOCATION | - |
dc.subject.keywordPlus | NETWORKS | - |
dc.subject.keywordPlus | ALGORITHM | - |
dc.subject.keywordPlus | QOS | - |
dc.subject.keywordAuthor | Handover | - |
dc.subject.keywordAuthor | Q-learning | - |
dc.subject.keywordAuthor | Training | - |
dc.subject.keywordAuthor | Energy consumption | - |
dc.subject.keywordAuthor | Decision making | - |
dc.subject.keywordAuthor | Resource management | - |
dc.subject.keywordAuthor | Rayleigh channels | - |
dc.subject.keywordAuthor | Deep learning | - |
dc.subject.keywordAuthor | centralized training decentralized execution | - |
dc.subject.keywordAuthor | energy minimization | - |
dc.subject.keywordAuthor | heterogeneous networks | - |
dc.subject.keywordAuthor | load balancing | - |
dc.subject.keywordAuthor | reinforcement learning | - |
dc.identifier.url | https://ieeexplore.ieee.org/document/10064273/ | - |
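
The abstract describes a centralized-training, decentralized-execution (CTDE) loop: distributed agents act and report (state, action, reward, next state) transitions to a central trainer, which stores them in a replay buffer and updates a shared Q function. The sketch below illustrates that data flow only. It is a minimal toy, assuming a tabular Q array in place of the paper's deep Q-network; the environment, state/action sizes, hyperparameters, and all identifiers (`CentralTrainer`, `DistributedAgent`, `toy_env_step`) are illustrative assumptions, not the authors' implementation.

```python
# Minimal CTDE sketch. Assumption: a toy tabular Q stands in for the paper's
# deep Q-network; states, actions, and rewards are placeholders.
import random
from collections import deque

import numpy as np

N_STATES, N_ACTIONS = 8, 3   # e.g. quantized channel states x candidate base stations
GAMMA, ALPHA, EPS = 0.9, 0.1, 0.2


class CentralTrainer:
    """Centralized training agent: owns the replay buffer and the Q function."""

    def __init__(self, capacity=1000):
        self.replay = deque(maxlen=capacity)
        self.q = np.zeros((N_STATES, N_ACTIONS))

    def store(self, transition):
        """Record a (s, a, r, s') transition reported by a distributed agent."""
        self.replay.append(transition)

    def train_step(self, batch_size=32):
        """Sample from the replay buffer and apply a Q-learning update."""
        batch = random.sample(list(self.replay), min(batch_size, len(self.replay)))
        for s, a, r, s_next in batch:
            target = r + GAMMA * self.q[s_next].max()
            self.q[s, a] += ALPHA * (target - self.q[s, a])


class DistributedAgent:
    """Decentralized executor: acts epsilon-greedily on the latest Q values."""

    def act(self, q, state):
        if random.random() < EPS:
            return random.randrange(N_ACTIONS)
        return int(q[state].argmax())


def toy_env_step(state, action):
    """Stand-in environment: reward is the negative 'energy cost' of the action."""
    reward = -(action + 1) * random.random()
    return reward, random.randrange(N_STATES)


trainer = CentralTrainer()
agents = [DistributedAgent() for _ in range(4)]        # e.g. one agent per user
state = [random.randrange(N_STATES) for _ in agents]

for _ in range(500):
    for i, agent in enumerate(agents):                 # decentralized execution
        a = agent.act(trainer.q, state[i])
        r, s_next = toy_env_step(state[i], a)
        trainer.store((state[i], a, r, s_next))        # report (s, a, r, s') upward
        state[i] = s_next
    trainer.train_step()                               # centralized training
```

In this toy version the agents read the trainer's Q values directly; in a real deployment each agent would execute on a periodically synchronized copy of the network, which is the point of the CTDE split between a single learner and many executors.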