An empirical analysis of image augmentation against model inversion attack in federated learning
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Shin, Seunghyeon | - |
dc.contributor.author | Boyapati, Mallika | - |
dc.contributor.author | Suo, Kun | - |
dc.contributor.author | Kang, Kyungtae | - |
dc.contributor.author | Son, Junggab | - |
dc.date.accessioned | 2022-07-06T02:52:47Z | - |
dc.date.available | 2022-07-06T02:52:47Z | - |
dc.date.issued | 2023-02 | - |
dc.identifier.issn | 1386-7857 | - |
dc.identifier.issn | 1573-7543 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/erica/handle/2021.sw.erica/107832 | - |
dc.description.abstract | Federated Learning (FL) is a technology that facilitates a sophisticated way to train distributed data. Because FL does not expose sensitive data during training, it has been considered a privacy-safe form of deep learning. However, a few recent studies have shown that the hidden data can be exposed by exploiting the shared models alone. A common defense against such data exposure is differential privacy, which adds noise to hinder the attack; however, it inevitably involves a trade-off between privacy and utility. This paper demonstrates the effectiveness of image augmentation as an alternative defense strategy that is less affected by this trade-off. We conduct comprehensive experiments on the CIFAR-10 and CIFAR-100 datasets with 14 augmentations and 9 magnitudes. As a result, the best combination of augmentation and magnitude for each image class in the datasets was discovered. Our results also show that a well-fitted augmentation strategy can outperform differential privacy. | - |
dc.format.extent | 18 | - |
dc.language | English | - |
dc.language.iso | ENG | - |
dc.publisher | Baltzer Science Publishers B.V. | - |
dc.title | An empirical analysis of image augmentation against model inversion attack in federated learning | - |
dc.type | Article | - |
dc.publisher.location | United States | - |
dc.identifier.doi | 10.1007/s10586-022-03596-1 | - |
dc.identifier.scopusid | 2-s2.0-85130199278 | - |
dc.identifier.wosid | 000795632400002 | - |
dc.identifier.bibliographicCitation | Cluster Computing, v.26, no.1, pp 349 - 366 | - |
dc.citation.title | Cluster Computing | - |
dc.citation.volume | 26 | - |
dc.citation.number | 1 | - |
dc.citation.startPage | 349 | - |
dc.citation.endPage | 366 | - |
dc.type.docType | Article | - |
dc.description.isOpenAccess | N | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Information Systems | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Theory & Methods | - |
dc.subject.keywordAuthor | Federated learning | - |
dc.subject.keywordAuthor | Model inversion attack | - |
dc.subject.keywordAuthor | Image augmentation | - |
dc.subject.keywordAuthor | Defensive augmentation | - |
dc.subject.keywordAuthor | Differential privacy | - |
dc.identifier.url | https://link.springer.com/article/10.1007/s10586-022-03596-1 | - |