Aliasing Backdoor Attacks on Pre-trained Models
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Wei, Cheng'an | - |
dc.contributor.author | Lee, Yeonjoon | - |
dc.contributor.author | Chen, Kai | - |
dc.contributor.author | Meng, Guozhu | - |
dc.contributor.author | Lv, Peizhuo | - |
dc.date.accessioned | 2024-01-20T09:02:59Z | - |
dc.date.available | 2024-01-20T09:02:59Z | - |
dc.date.issued | 2023-08 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/erica/handle/2021.sw.erica/117847 | - |
dc.description.abstract | Pre-trained deep learning models are widely used to train accurate models with limited data in a short time. To reduce computational costs, pre-trained neural networks often employ subsampling operations. However, recent studies have shown that these subsampling operations can cause aliasing issues, resulting in problems with generalization. Despite this knowledge, there is still a lack of research on the relationship between the aliasing of neural networks and security threats, such as adversarial attacks and backdoor attacks, which manipulate model predictions without the awareness of victims. In this paper, we propose the aliasing backdoor, a low-cost and data-free attack that threatens mainstream pre-trained models and transfers to all student models fine-tuned from them. The key idea is to create an aliasing error in the strided layers of the network and manipulate a benign input to a targeted intermediate representation. To evaluate the attack, we conduct experiments on image classification, face recognition, and speech recognition tasks. The results show that our approach can effectively attack mainstream models with a success rate of over 95%. Our research, based on the aliasing error caused by subsampling, reveals a fundamental security weakness of strided layers, which are widely used in modern neural network architectures. To the best of our knowledge, this is the first work to exploit the strided layers to launch backdoor attacks. © USENIX Security 2023. All rights reserved. | - |
dc.format.extent | 18 | - |
dc.language | English | - |
dc.language.iso | ENG | - |
dc.publisher | USENIX Association | - |
dc.title | Aliasing Backdoor Attacks on Pre-trained Models | - |
dc.type | Article | - |
dc.publisher.location | United States | - |
dc.identifier.scopusid | 2-s2.0-85172793291 | - |
dc.identifier.wosid | 001066451502047 | - |
dc.identifier.bibliographicCitation | 32nd USENIX Security Symposium, USENIX Security 2023, v.4, pp 2707 - 2724 | - |
dc.citation.title | 32nd USENIX Security Symposium, USENIX Security 2023 | - |
dc.citation.volume | 4 | - |
dc.citation.startPage | 2707 | - |
dc.citation.endPage | 2724 | - |
dc.type.docType | Conference paper | - |
dc.description.isOpenAccess | N | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Information Systems | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Interdisciplinary Applications | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Theory & Methods | - |
dc.identifier.url | https://www.usenix.org/conference/usenixsecurity23/presentation/wei-chengan | - |
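The abstract above attributes the attack to aliasing introduced by strided (subsampling) layers. The following is a minimal, self-contained sketch of that underlying idea only, not the authors' attack code: naive strided subsampling without an anti-aliasing filter discards samples, so two visibly different inputs can yield an identical downsampled representation. The function name, toy 1-D signal, and perturbation pattern are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch (assumption, not the paper's implementation):
# strided subsampling has a non-trivial "null space" -- changes confined to
# the discarded samples are invisible in the downsampled representation.
import numpy as np

def strided_subsample(x, stride=2):
    """Naive downsampling without a low-pass (anti-aliasing) filter."""
    return x[::stride]

rng = np.random.default_rng(0)
benign = rng.normal(size=16)          # stand-in for a benign input / feature row

# Perturb only the samples a stride-2 layer throws away (odd indices).
perturbed = benign.copy()
perturbed[1::2] += rng.normal(scale=5.0, size=8)

print(np.max(np.abs(benign - perturbed)))        # large: the inputs clearly differ
print(np.allclose(strided_subsample(benign),
                  strided_subsample(perturbed)))  # True: identical after subsampling
```

In the paper's setting this effect is exploited in the strided layers of a pre-trained network to steer a benign-looking input toward a targeted intermediate representation; the toy example only demonstrates that such aliasing-induced collisions exist.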