Digestive neural networks: A novel defense strategy against inference attacks in federated learning
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Lee, Hongkyu | - |
dc.contributor.author | Kim, Jeehyeong | - |
dc.contributor.author | Ahn, Seyoung | - |
dc.contributor.author | Hussain, Rasheed | - |
dc.contributor.author | Cho, Sunghyun | - |
dc.contributor.author | Son, Junggab | - |
dc.date.accessioned | 2022-12-20T04:35:25Z | - |
dc.date.available | 2022-12-20T04:35:25Z | - |
dc.date.issued | 2021-10 | - |
dc.identifier.issn | 0167-4048 | - |
dc.identifier.issn | 1872-6208 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/erica/handle/2021.sw.erica/111172 | - |
dc.description.abstract | Federated Learning (FL) is an efficient and secure machine learning technique designed for decentralized computing systems such as fog and edge computing. Its learning process relies on frequent communication: the participating local devices send updates, either gradients or parameters of their models, to a central server that aggregates them and redistributes new weights to the devices. In FL, private data never leaves the individual local devices, so FL is regarded as a robust solution in terms of privacy preservation. However, recently introduced membership inference attacks pose a critical threat to this presumed privacy: by eavesdropping only on the updates sent to the central server, these attacks can recover the private data of a local device. A prevalent defense against such attacks is differential privacy, which adds sufficient noise to each update to hinder recovery. However, it significantly sacrifices the classification accuracy of the FL model. To alleviate this problem, this paper proposes a Digestive Neural Network (DNN), an independent neural network attached to the FL model. The private data owned by each device passes through the DNN before training the FL model. The DNN modifies the input data, thereby distorting the updates, in a way that maximizes the classification accuracy of the FL model while minimizing the accuracy of inference attacks. Our simulation results show that the proposed DNN performs well on both gradient-sharing- and weight-sharing-based FL mechanisms. For gradient sharing, the DNN achieved 16.17% higher classification accuracy and 9% lower attack accuracy than existing differential privacy schemes. For the weight-sharing FL scheme, the DNN achieved up to a 46.68% lower attack success rate with 3% higher classification accuracy. (c) 2021 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/). | - |
dc.format.extent | 20 | - |
dc.language | English | - |
dc.language.iso | ENG | - |
dc.publisher | Pergamon Press Ltd. | - |
dc.title | Digestive neural networks: A novel defense strategy against inference attacks in federated learning | - |
dc.type | Article | - |
dc.publisher.location | United Kingdom | - |
dc.identifier.doi | 10.1016/j.cose.2021.102378 | - |
dc.identifier.scopusid | 2-s2.0-85109215987 | - |
dc.identifier.wosid | 000685459300010 | - |
dc.identifier.bibliographicCitation | Computers and Security, v.109, pp. 1-20 | - |
dc.citation.title | Computers and Security | - |
dc.citation.volume | 109 | - |
dc.citation.startPage | 1 | - |
dc.citation.endPage | 20 | - |
dc.type.docType | Article | - |
dc.description.isOpenAccess | N | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Information Systems | - |
dc.subject.keywordPlus | AI Security | - |
dc.subject.keywordPlus | Digestive neural networks | - |
dc.subject.keywordPlus | Federated learning (FL) | - |
dc.subject.keywordPlus | Federated learning security | - |
dc.subject.keywordPlus | Inference attack | - |
dc.subject.keywordPlus | ML Security | - |
dc.subject.keywordPlus | t-SNE analysis | - |
dc.subject.keywordPlus | White-box assumption | - |
dc.subject.keywordAuthor | Federated learning (FL) | - |
dc.subject.keywordAuthor | Inference attack | - |
dc.subject.keywordAuthor | White-box assumption | - |
dc.subject.keywordAuthor | Digestive neural networks | - |
dc.subject.keywordAuthor | t-SNE analysis | - |
dc.subject.keywordAuthor | Federated learning security | - |
dc.subject.keywordAuthor | ML Security | - |
dc.subject.keywordAuthor | AI Security | - |
dc.identifier.url | https://www.sciencedirect.com/science/article/pii/S0167404821002029?via%3Dihub | - |
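The abstract above contrasts two defenses: differential privacy, which noises each shared update, and the proposed digestive network, which distorts each client's inputs before they reach the shared model so that the transmitted updates no longer reflect the raw private data. The sketch below illustrates both ideas in PyTorch; it is a minimal, hypothetical rendering of the mechanism as described in the abstract, not the authors' implementation, and all names (`DigestiveNet`, `local_gradients`, `dp_gradients`) are assumptions.

```python
import torch
import torch.nn as nn

class DigestiveNet(nn.Module):
    """Client-local input transform; its weights never leave the device."""
    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def local_gradients(shared, digest, x, y):
    """One client step: shared-model gradients computed on digested inputs."""
    shared.zero_grad()
    loss = nn.functional.cross_entropy(shared(digest(x)), y)
    loss.backward()
    # Only these gradients are transmitted; an eavesdropper sees updates
    # derived from transformed data, not from the raw private inputs.
    return [p.grad.detach().clone() for p in shared.parameters()]

def dp_gradients(shared, x, y, sigma=0.1):
    """Differential-privacy baseline: add Gaussian noise to each gradient."""
    shared.zero_grad()
    loss = nn.functional.cross_entropy(shared(x), y)
    loss.backward()
    return [p.grad.detach() + sigma * torch.randn_like(p.grad)
            for p in shared.parameters()]

if __name__ == "__main__":
    shared = nn.Linear(8, 3)        # stand-in for the global FL model
    digest = DigestiveNet(8)
    x, y = torch.randn(16, 8), torch.randint(0, 3, (16,))
    g_dnn = local_gradients(shared, digest, x, y)
    g_dp = dp_gradients(shared, x, y)
    print(len(g_dnn), len(g_dp))    # same update shapes, different leakage
```

The key design point, per the abstract, is that the digestive network's parameters stay on-device: an eavesdropper on the client-server channel observes only updates computed from transformed inputs, whereas the differential-privacy baseline must trade noise magnitude against the classification accuracy of the shared model.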