ADAPTIVE WARPING NETWORK FOR TRANSFERABLE ADVERSARIAL ATTACKS
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Son, Minji | - |
dc.contributor.author | Kwon, Myung-Joon | - |
dc.contributor.author | Kim, Hee-Seon | - |
dc.contributor.author | Byun, Junyoung | - |
dc.contributor.author | Cho, Seungju | - |
dc.contributor.author | Kim, Changick | - |
dc.date.accessioned | 2024-02-14T01:30:24Z | - |
dc.date.available | 2024-02-14T01:30:24Z | - |
dc.date.issued | 2022 | - |
dc.identifier.issn | 1522-4880 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/cau/handle/2019.sw.cau/72003 | - |
dc.description.abstract | Deep Neural Networks (DNNs) are extremely susceptible to adversarial examples, which are crafted by intentionally adding imperceptible perturbations to clean images. Due to the potential threats of adversarial attacks in practice, black-box transfer-based attacks are carefully studied to identify the vulnerabilities of DNNs. Unfortunately, transfer-based attacks often fail to achieve high transferability because the adversarial examples tend to overfit the source model. Applying input transformations is one of the most effective methods to avoid such overfitting. However, most previous input transformation methods achieve only limited transferability because they apply fixed transformations to all images. To solve this problem, we propose an Adaptive Warping Network (AWN), which searches for an appropriate warping for each individual image. Specifically, at each iteration, AWN optimizes a warping that mitigates the effect of the adversarial perturbation. The adversarial examples are then generated to be robust against such strong transformations. Extensive experimental results on the ImageNet dataset demonstrate that AWN outperforms existing input transformation methods in terms of transferability. | - |
dc.format.extent | 5 | - |
dc.language | English | - |
dc.language.iso | ENG | - |
dc.publisher | IEEE | - |
dc.title | ADAPTIVE WARPING NETWORK FOR TRANSFERABLE ADVERSARIAL ATTACKS | - |
dc.type | Article | - |
dc.identifier.doi | 10.1109/ICIP46576.2022.9897701 | - |
dc.identifier.bibliographicCitation | 2022 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, pp 3056 - 3060 | - |
dc.description.isOpenAccess | N | - |
dc.identifier.wosid | 001058109503030 | - |
dc.identifier.scopusid | 2-s2.0-85146732188 | - |
dc.citation.endPage | 3060 | - |
dc.citation.startPage | 3056 | - |
dc.citation.title | 2022 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP | - |
dc.type.docType | Proceedings Paper | - |
dc.publisher.location | United States | - |
dc.subject.keywordAuthor | Adversarial Attacks | - |
dc.subject.keywordAuthor | Transfer-based Attacks | - |
dc.subject.keywordAuthor | Transferability | - |
dc.subject.keywordAuthor | Input Transformation | - |
dc.subject.keywordAuthor | Warping | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalResearchArea | Engineering | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Artificial Intelligence | - |
dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
dc.description.journalRegisteredClass | scopus | - |
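The abstract describes an iterative attack in which, at each step, a transformation is first optimized to undo the current perturbation, and the adversarial example is then updated to survive that transformation. The sketch below illustrates this loop under loud assumptions: the paper's warping network is replaced by a random search over circular shifts, and the source model is a toy linear classifier. Every name here (`adaptive_warp`, `attack`, the toy `model_logits`) is illustrative, not from the paper.

```python
import numpy as np

# Hedged sketch of an input-transformation transfer attack with an
# *adaptive* transformation, inspired by (not reproducing) AWN.
# Assumptions: a toy linear "source model"; AWN's learned warping is
# approximated by picking the best of k random circular shifts.

rng = np.random.default_rng(0)

def model_logits(x, W):
    """Toy linear 'source model': logits = W @ flattened image."""
    return W @ x.ravel()

def loss_and_grad(x, y, W):
    """Cross-entropy loss and its gradient w.r.t. the input x."""
    z = model_logits(x, W)
    p = np.exp(z - z.max()); p /= p.sum()
    loss = -np.log(p[y] + 1e-12)
    g = p.copy(); g[y] -= 1.0            # dL/dz for softmax cross-entropy
    grad = (W.T @ g).reshape(x.shape)    # chain rule back to the input
    return loss, grad

def adaptive_warp(x, y, W, k=8, max_shift=3):
    """Among k random shifts, keep the one that most *reduces* the loss,
    i.e. the transformation that best undoes the current perturbation
    (AWN's per-image adaptivity, crudely approximated by random search)."""
    best, best_loss = x, np.inf
    for _ in range(k):
        dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
        xw = np.roll(x, (int(dy), int(dx)), axis=(0, 1))
        l, _ = loss_and_grad(xw, y, W)
        if l < best_loss:
            best, best_loss = xw, l
    return best

def attack(x, y, W, eps=0.1, alpha=0.02, steps=10):
    """I-FGSM where the gradient is taken at the adaptively transformed
    input each iteration (the warp's own gradient is ignored here for
    simplicity -- a common approximation in transformation attacks)."""
    x_adv = x.copy()
    for _ in range(steps):
        xw = adaptive_warp(x_adv, y, W)
        _, grad = loss_and_grad(xw, y, W)
        x_adv = x_adv + alpha * np.sign(grad)      # ascend the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)   # stay in the L_inf ball
        x_adv = np.clip(x_adv, 0.0, 1.0)           # stay a valid image
    return x_adv

# Usage: a random 8x8 "image", a 10-class linear model, and the model's
# own prediction used as the label to attack.
x = rng.random((8, 8))
W = rng.standard_normal((10, 64)) * 0.1
y = int(np.argmax(model_logits(x, W)))
x_adv = attack(x, y, W)
```

The key design point, per the abstract, is that the transformation is chosen adversarially against the perturbation each iteration, so the final example must withstand a stronger, per-image transformation than a fixed one.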