Cluster Federated Optimization Model Based on Hyperledger Fabric
- Authors
- Li, Y.; Yu, H.; Yin, Y.; Gao, H.
- Issue Date
- Jan-2023
- Publisher
- Editorial Office of Computer Engineering
- Keywords
- Blockchain; Federated Learning (FL); Hyperledger Fabric; privacy protection; smart contract
- Citation
- Jisuanji Gongcheng/Computer Engineering, v.49, no.1, pp.22-30
- Journal Title
- Jisuanji Gongcheng/Computer Engineering
- Volume
- 49
- Number
- 1
- Start Page
- 22
- End Page
- 30
- URI
- https://scholarworks.bwise.kr/gachon/handle/2020.sw.gachon/86816
- DOI
- 10.19678/j.issn.1000-3428.0064301
- ISSN
- 1000-3428
- Abstract
- As a distributed machine learning framework, Federated Learning (FL) achieves collaborative training by sharing model parameters while keeping data local, thereby ensuring privacy protection to a certain extent. However, FL faces several challenges, such as gradient attacks by potentially malicious clients, the central parameter server's inability to cope with a single point of failure, and poor training performance caused by skewed client data distributions. To address these problems, decentralized blockchain technology is combined with FL, and a cluster federated optimization model based on Hyperledger Fabric is proposed. The model uses Hyperledger Fabric as the architectural basis for distributed training. After initialization, each client trains locally and uploads its model parameters and data-distribution information to Hyperledger Fabric. Clustering is used to optimize FL training performance under Non-IID client data distributions: a client is randomly elected as the leader, which replaces the central server; the leader clusters clients according to distribution similarity and cosine similarity, then downloads and aggregates their model parameters within each cluster. Clients obtain the aggregated model and continue iterative training. On the EMNIST dataset under Non-IID data distribution, the proposed model achieves an average accuracy of 79.26%, which is 17.26% higher than that of FedAvg, and for a high target accuracy it requires 36.3% fewer communication rounds than Clustered Federated Learning (CFL) to reach convergence. © 2023, Editorial Office of Computer Engineering. All rights reserved.
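The leader's clustering-then-aggregation step described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the similarity threshold, the greedy clustering scheme, and the unweighted in-cluster averaging are all assumptions, and the paper additionally uses distribution-similarity information that is omitted here.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two flattened model-update vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def cluster_clients(updates, threshold=0.9):
    """Greedy clustering sketch (hypothetical): assign each client's update
    to the first cluster whose representative update has cosine similarity
    >= threshold; otherwise open a new cluster.  Returns lists of client
    indices, one list per cluster."""
    clusters = []  # cluster -> list of client indices
    reps = []      # representative update per cluster
    for i, u in enumerate(updates):
        for k, rep in enumerate(reps):
            if cosine_similarity(u, rep) >= threshold:
                clusters[k].append(i)
                break
        else:
            clusters.append([i])
            reps.append(u)
    return clusters

def aggregate(updates, cluster):
    """FedAvg-style unweighted mean of the updates in one cluster."""
    return np.mean([updates[i] for i in cluster], axis=0)
```

Under this sketch, clients whose updates point in similar directions (e.g. because their local data distributions are alike) are averaged together, while dissimilar clients form separate clusters, which is the intuition behind training under Non-IID data.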
- Appears in Collections
- ETC > 1. Journal Articles