HIDDEN CONDITIONAL ADVERSARIAL ATTACKS
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Byun, Junyoung | - |
dc.contributor.author | Shim, Kyujin | - |
dc.contributor.author | Go, Hyojun | - |
dc.contributor.author | Kim, Changick | - |
dc.date.accessioned | 2024-02-14T01:30:28Z | - |
dc.date.available | 2024-02-14T01:30:28Z | - |
dc.date.issued | 2022 | - |
dc.identifier.issn | 1522-4880 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/cau/handle/2019.sw.cau/72007 | - |
dc.description.abstract | Deep neural networks are vulnerable to maliciously crafted inputs called adversarial examples. Research on unprecedented adversarial attacks is significant since it can help strengthen the reliability of neural networks by revealing potential threats against them. However, since existing adversarial attacks disturb models unconditionally, the resulting adversarial examples are more easily detected through statistical observation or human inspection. To tackle this limitation, we propose hidden conditional adversarial attacks, whose resulting adversarial examples disturb models only if the input images satisfy attackers' pre-defined conditions. These hidden conditional adversarial examples offer better stealthiness and controllability of their attack ability. Our experimental results on the CIFAR-10 and ImageNet datasets show their effectiveness and raise a serious concern about the vulnerability of CNNs to these novel attacks. | - |
dc.format.extent | 5 | - |
dc.language | English | - |
dc.language.iso | ENG | - |
dc.publisher | IEEE | - |
dc.title | HIDDEN CONDITIONAL ADVERSARIAL ATTACKS | - |
dc.type | Article | - |
dc.identifier.doi | 10.1109/ICIP46576.2022.9898075 | - |
dc.identifier.bibliographicCitation | 2022 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, pp 1306 - 1310 | - |
dc.description.isOpenAccess | N | - |
dc.identifier.wosid | 001058109501079 | - |
dc.identifier.scopusid | 2-s2.0-85146705166 | - |
dc.citation.endPage | 1310 | - |
dc.citation.startPage | 1306 | - |
dc.citation.title | 2022 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP | - |
dc.type.docType | Proceedings Paper | - |
dc.publisher.location | United States | - |
dc.subject.keywordAuthor | Adversarial attack | - |
dc.subject.keywordAuthor | Hidden condition | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalResearchArea | Engineering | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Artificial Intelligence | - |
dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
dc.description.journalRegisteredClass | scopus | - |
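The abstract describes adversarial examples that perturb the model's input only when an attacker-defined condition is satisfied, leaving all other inputs untouched. A minimal sketch of that idea is shown below; it is not the paper's method, only an illustration. The `condition` predicate (a brightness threshold), the FGSM-style sign-gradient step, and the stand-in gradient are all hypothetical assumptions for the example.

```python
import numpy as np

def fgsm_perturb(x, grad, eps=0.03):
    # FGSM-style step: move the input along the sign of the loss gradient,
    # then clip back to the valid pixel range [0, 1].
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

def condition(x):
    # Hypothetical attacker-defined trigger: mean brightness above 0.5.
    # The real paper's conditions are pre-defined by the attacker; this
    # predicate is only a placeholder for illustration.
    return x.mean() > 0.5

def conditional_adversarial(x, grad, eps=0.03):
    # Perturb only inputs that satisfy the hidden condition; all other
    # inputs pass through unchanged, which is the source of stealthiness.
    return fgsm_perturb(x, grad, eps) if condition(x) else x

rng = np.random.default_rng(0)
bright = rng.uniform(0.6, 1.0, size=(8, 8))  # satisfies the condition
dark = rng.uniform(0.0, 0.4, size=(8, 8))    # does not satisfy it
grad = rng.normal(size=(8, 8))               # stand-in loss gradient

# The bright image is perturbed; the dark image is returned unchanged.
assert not np.allclose(conditional_adversarial(bright, grad), bright)
assert np.allclose(conditional_adversarial(dark, grad), dark)
```

Because clean (non-triggering) inputs are returned bit-for-bit unchanged, statistical detectors that look for perturbation artifacts across all inputs see nothing unusual, which is the controllability and stealthiness the abstract claims.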