On Defensive Neural Networks Against Inference Attack in Federated Learning
- Authors
- Lee, Hongkyu; Kim, Jeehyeong; Hussain, Rasheed; Cho, Sunghyun; Son, Junggab
- Issue Date
- Jun-2021
- Publisher
- IEEE
- Keywords
- Federated Learning; Inference Attack; Deep Learning; Edge Computing; Differential Privacy
- Citation
- ICC 2021 - IEEE International Conference on Communications, pp 1 - 6
- Pages
- 6
- Indexed
- SCIE; SCOPUS
- Journal Title
- ICC 2021 - IEEE International Conference on Communications
- Start Page
- 1
- End Page
- 6
- URI
- https://scholarworks.bwise.kr/erica/handle/2021.sw.erica/108262
- DOI
- 10.1109/ICC42927.2021.9500936
- Abstract
- Federated Learning (FL) is a promising technique for edge computing environments because it provides better data privacy protection: each edge node sends the central server a computed value, called a gradient, rather than raw data. However, recent research shows that FL is still vulnerable to inference attacks, adversarial algorithms capable of identifying the data used to compute a gradient. A prevalent mitigation is differential privacy, which computes gradients from noised data, but this in turn degrades model accuracy. To address this problem, this paper proposes a new digestive neural network (DNN) and integrates it into FL. The proposed scheme distorts raw data with the DNN to make it unrecognizable, then computes a gradient with a classification network. The gradients generated by the edge nodes are sent to the server to complete the trained model. Simulation results show that, on average, the proposed scheme achieves 9.31% higher classification accuracy and 19.25% lower attack accuracy than differentially private schemes.
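The digest-then-classify pipeline the abstract describes can be sketched as below. This is an illustrative assumption, not the authors' implementation: the function names (`digest`, `classifier_grad`), the use of a tanh-activated linear layer as the digestive network, and all layer sizes are hypothetical stand-ins for the paper's architecture.

```python
import numpy as np

# Hedged sketch of the scheme in the abstract: each edge node passes raw
# data through a local "digestive" network D before the shared classifier C,
# and only the classifier's gradient is sent to the central server.

rng = np.random.default_rng(0)

def digest(x, W_d):
    """Digestive network (assumed form): a nonlinear distortion of the
    raw input. The distorted output, not x itself, feeds the classifier."""
    return np.tanh(W_d @ x)

def classifier_grad(z, y, W_c):
    """One softmax-regression step on the digested input z.
    Returns the cross-entropy loss gradient w.r.t. the classifier
    weights W_c, which is the only value shared with the server."""
    logits = W_c @ z
    p = np.exp(logits - logits.max())   # numerically stable softmax
    p /= p.sum()
    p[y] -= 1.0                         # d(loss)/d(logits) for class label y
    return np.outer(p, z)               # d(loss)/d(W_c)

d_in, d_hid, n_cls = 8, 4, 3
W_d = rng.normal(size=(d_hid, d_in))    # digestive weights, private to the node
W_c = rng.normal(size=(n_cls, d_hid))   # shared classification model

x, y = rng.normal(size=d_in), 1
grad = classifier_grad(digest(x, W_d), y, W_c)  # value sent to the server
```

Because the server only ever sees gradients computed on the digested (distorted) input, an inference attack that reconstructs the classifier's input recovers the distortion, not the raw data, which is the intuition behind the reported drop in attack accuracy.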