A Communication Efficient Approach of Global Training in Federated Learning
- Authors
- Bhatti, Dost Muhammad Saqib; Haris, Muhammad; Nam, Haewoon
- Issue Date
- Oct-2022
- Publisher
- IEEE Computer Society
- Keywords
- deep learning; distributed learning; federated learning
- Citation
- International Conference on ICT Convergence, v.2022-October, pp. 1441-1446
- Pages
- 6
- Indexed
- SCOPUS
- Journal Title
- International Conference on ICT Convergence
- Volume
- 2022-October
- Start Page
- 1441
- End Page
- 1446
- URI
- https://scholarworks.bwise.kr/erica/handle/2021.sw.erica/113629
- DOI
- 10.1109/ICTC55196.2022.9952661
- ISSN
- 2162-1233
- Abstract
- Federated learning is a privacy-preserving method of training a model on a server by utilizing end users' private data without directly accessing it. The central server shares the global model with all end users, called the clients of the network. The clients train the shared global model using their local datasets, and the updated locally trained models are forwarded back to the server to further update the global model. This process of training the global model is carried out for several rounds. The procedure of updating the local model and transmitting it back to the server raises the communication cost, and since several clients are involved in training the global model, the aggregated communication cost of the network escalates. This article proposes a communication-efficient aggregation method for federated learning, which considers the volume and variety of the local clients' data before aggregation. The proposed approach is compared with conventional methods and achieves the highest accuracy and minimum loss with respect to aggregated communication cost. © 2022 IEEE.
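The server-side aggregation loop described in the abstract can be sketched as follows. This is a minimal illustrative sketch only: the function names, the use of sample count as the "volume" term, and the distinct-label count as the "variety" term are assumptions for illustration, not the paper's exact weighting rule.

```python
import numpy as np

def aggregate(client_updates):
    """One round of server-side aggregation.

    Each client's parameters are weighted by its data volume
    (num_samples) and a simple variety proxy (num_classes).
    Both weighting terms are illustrative assumptions, not the
    authors' exact method.
    """
    weights = np.array(
        [c["num_samples"] * c["num_classes"] for c in client_updates],
        dtype=float,
    )
    weights /= weights.sum()  # normalize so the weights sum to 1

    # Weighted average of the clients' locally trained parameters.
    new_global = np.zeros_like(client_updates[0]["params"])
    for w, c in zip(weights, client_updates):
        new_global += w * c["params"]
    return new_global

# Example: two clients with equal volume and variety contribute equally.
clients = [
    {"num_samples": 100, "num_classes": 10, "params": np.ones(3)},
    {"num_samples": 100, "num_classes": 10, "params": np.full(3, 3.0)},
]
global_params = aggregate(clients)
```

Over several communication rounds, the server would broadcast `global_params`, collect fresh `client_updates`, and call `aggregate` again; weighting by volume and variety lets informative clients dominate the average, which is how such schemes aim to reach a target accuracy in fewer rounds and hence at lower aggregated communication cost.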
- Appears in Collections
- COLLEGE OF ENGINEERING SCIENCES > SCHOOL OF ELECTRICAL ENGINEERING > 1. Journal Articles