Robustness-Aware Filter Pruning for Robust Neural Networks Against Adversarial Attacks
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Lim, Hyuntak | - |
dc.contributor.author | Roh, Si-Dong | - |
dc.contributor.author | Park, Sangki | - |
dc.contributor.author | Chung, Ki-Seok | - |
dc.date.accessioned | 2022-07-06T11:33:41Z | - |
dc.date.available | 2022-07-06T11:33:41Z | - |
dc.date.created | 2022-01-26 | - |
dc.date.issued | 2021-11 | - |
dc.identifier.issn | 2161-0363 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/hanyang/handle/2021.sw.hanyang/140383 | - |
dc.description.abstract | Today, neural networks show remarkable performance in various computer vision tasks, but they are vulnerable to adversarial attacks. Through adversarial training, neural networks may improve robustness against adversarial attacks; however, it is a time-consuming and resource-intensive task. An earlier study analyzed the effect of adversarial attacks on image features and proposed a robust dataset containing only features robust to adversarial attack. By training with the robust dataset, neural networks can achieve decent accuracy under adversarial attacks without carrying out time-consuming adversarial perturbation tasks. However, even a network trained with the robust dataset may still be vulnerable to adversarial attacks. In this paper, to overcome this limitation, we propose a new method called Robustness-Aware Filter Pruning (RFP). To the best of our knowledge, it is the first attempt to utilize a filter pruning method to enhance robustness against adversarial attacks. In the proposed method, the filters that are involved with non-robust features are pruned. With the proposed method, 52.1% accuracy against one of the most powerful adversarial attacks is achieved, which is 3.8% better than the previous robust dataset training, while clean image test accuracy is maintained. Also, our method achieves the best performance among the compared filter pruning methods on the robust dataset. | - |
dc.language | English | - |
dc.language.iso | en | - |
dc.publisher | IEEE Computer Society | - |
dc.title | Robustness-Aware Filter Pruning for Robust Neural Networks Against Adversarial Attacks | - |
dc.type | Article | - |
dc.contributor.affiliatedAuthor | Chung, Ki-Seok | - |
dc.identifier.doi | 10.1109/MLSP52302.2021.9596121 | - |
dc.identifier.scopusid | 2-s2.0-85122827218 | - |
dc.identifier.wosid | 000764097000008 | - |
dc.identifier.bibliographicCitation | IEEE International Workshop on Machine Learning for Signal Processing, MLSP, v.2021, no.October, pp.1 - 6 | - |
dc.relation.isPartOf | IEEE International Workshop on Machine Learning for Signal Processing, MLSP | - |
dc.citation.title | IEEE International Workshop on Machine Learning for Signal Processing, MLSP | - |
dc.citation.volume | 2021 | - |
dc.citation.number | October | - |
dc.citation.startPage | 1 | - |
dc.citation.endPage | 6 | - |
dc.type.rims | ART | - |
dc.type.docType | Proceedings Paper | - |
dc.description.journalClass | 1 | - |
dc.description.isOpenAccess | N | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalResearchArea | Engineering | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Artificial Intelligence | - |
dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
dc.subject.keywordPlus | Computer vision | - |
dc.subject.keywordPlus | Statistical tests | - |
dc.subject.keywordPlus | Adversarial attack | - |
dc.subject.keywordPlus | Adversarial training | - |
dc.subject.keywordPlus | Clean images | - |
dc.subject.keywordPlus | Deep learning | - |
dc.subject.keywordPlus | Filter pruning | - |
dc.subject.keywordPlus | Image features | - |
dc.subject.keywordPlus | Knowledge IT | - |
dc.subject.keywordPlus | Neural-networks | - |
dc.subject.keywordPlus | Performance | - |
dc.subject.keywordPlus | Pruning methods | - |
dc.subject.keywordPlus | Deep learning | - |
dc.subject.keywordAuthor | Adversarial Attack | - |
dc.subject.keywordAuthor | Adversarial Training | - |
dc.subject.keywordAuthor | Deep Learning | - |
dc.subject.keywordAuthor | Filter Pruning | - |
dc.identifier.url | https://ieeexplore.ieee.org/document/9596121 | - |
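The abstract describes pruning the filters associated with non-robust features. As a minimal sketch of that idea (not the paper's actual RFP algorithm): assuming each filter has already been assigned a scalar robustness score (the paper derives such information from robust vs. non-robust features; here the scores are simply supplied), the least-robust fraction of filters is masked out. The function name and score inputs are hypothetical illustrations, not from the paper.

```python
def prune_filters(scores, prune_ratio):
    """Return a keep-mask over filters: True = retained, False = pruned.

    scores      -- hypothetical per-filter robustness scores (higher = more robust)
    prune_ratio -- fraction of filters to remove, in [0.0, 1.0]
    """
    n_prune = int(len(scores) * prune_ratio)
    # Indices of the n_prune lowest-scoring (least robust) filters.
    pruned = set(sorted(range(len(scores)), key=lambda i: scores[i])[:n_prune])
    return [i not in pruned for i in range(len(scores))]

# With ratio 0.5, the two lowest-scoring filters (indices 1 and 3) are dropped.
mask = prune_filters([0.9, 0.1, 0.5, 0.3], prune_ratio=0.5)
```

In a real network the mask would then be applied to the convolution layer's output channels (e.g., by zeroing or physically removing the corresponding filters); the sketch only shows the selection step.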