A DDPG-based energy efficient federated learning algorithm with SWIPT and MC-NOMA
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Ho, Manh Cuong | - |
dc.contributor.author | Tran, Anh Tien | - |
dc.contributor.author | Lee, Donghyun | - |
dc.contributor.author | Paek, Jeongyeup | - |
dc.contributor.author | Noh, Wonjong | - |
dc.contributor.author | Cho, Sungrae | - |
dc.date.accessioned | 2024-02-08T03:00:18Z | - |
dc.date.available | 2024-02-08T03:00:18Z | - |
dc.date.issued | 2024 | - |
dc.identifier.issn | 2405-9595 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/cau/handle/2019.sw.cau/71854 | - |
dc.description.abstract | Federated learning (FL) has emerged as a promising distributed machine learning technique. It has the potential to play a key role in future Internet of Things (IoT) networks by ensuring the security and privacy of user data combined with efficient utilization of communication resources. This paper addresses the challenge of maximizing energy efficiency in FL systems. We employed simultaneous wireless information and power transfer (SWIPT) and multi-carrier non-orthogonal multiple access (MC-NOMA) techniques. In addition, we jointly optimized power allocation and central processing unit (CPU) resource allocation to minimize latency-constrained energy consumption. We formulated the optimization problem as a Markov decision process (MDP) and utilized a deep deterministic policy gradient (DDPG) reinforcement learning algorithm to solve it. We tested the proposed algorithm through extensive simulations and confirmed that it converges stably and provides enhanced energy efficiency compared to conventional schemes. © 2023 The Authors | - |
dc.format.extent | 8 | - |
dc.language | English | - |
dc.language.iso | ENG | - |
dc.publisher | Korean Institute of Communications and Information Sciences | - |
dc.title | A DDPG-based energy efficient federated learning algorithm with SWIPT and MC-NOMA | - |
dc.type | Article | - |
dc.identifier.doi | 10.1016/j.icte.2023.12.001 | - |
dc.identifier.bibliographicCitation | ICT Express, v.10, no.3, pp 600 - 607 | - |
dc.identifier.kciid | ART003089808 | - |
dc.description.isOpenAccess | Y | - |
dc.identifier.scopusid | 2-s2.0-85180451249 | - |
dc.citation.endPage | 607 | - |
dc.citation.number | 3 | - |
dc.citation.startPage | 600 | - |
dc.citation.title | ICT Express | - |
dc.citation.volume | 10 | - |
dc.type.docType | Article in press | - |
dc.publisher.location | Republic of Korea | - |
dc.subject.keywordAuthor | Deep reinforcement learning | - |
dc.subject.keywordAuthor | Federated learning | - |
dc.subject.keywordAuthor | Multi-carrier non-orthogonal multiple access | - |
dc.subject.keywordAuthor | SWIPT | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.description.journalRegisteredClass | kci | - |