Detailed Information


Hidden Conditional Adversarial Attacks

Authors
Byun, Junyoung; Shim, Kyujin; Go, Hyojun; Kim, Changick
Issue Date
2022
Publisher
IEEE
Keywords
Adversarial attack; Hidden condition
Citation
2022 IEEE International Conference on Image Processing (ICIP), pp. 1306-1310
Pages
5
Journal Title
2022 IEEE International Conference on Image Processing (ICIP)
Start Page
1306
End Page
1310
URI
https://scholarworks.bwise.kr/cau/handle/2019.sw.cau/72007
DOI
10.1109/ICIP46576.2022.9898075
ISSN
1522-4880
Abstract
Deep neural networks are vulnerable to maliciously crafted inputs called adversarial examples. Research on unprecedented adversarial attacks is significant because it helps strengthen the reliability of neural networks by warning of potential threats against them. However, since existing adversarial attacks disturb models unconditionally, the resulting adversarial examples are more easily detected through statistical observation or human inspection. To tackle this limitation, we propose hidden conditional adversarial attacks, whose resultant adversarial examples disturb models only if the input images satisfy attackers' pre-defined conditions. These hidden conditional adversarial examples offer better stealthiness and finer control over when their attack capability is activated. Our experimental results on the CIFAR-10 and ImageNet datasets demonstrate their effectiveness and raise serious concerns about the vulnerability of CNNs to these novel attacks.
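
The abstract only outlines the mechanism, so the following is a minimal sketch of one plausible instantiation, not the authors' actual method: a PGD-style perturbation in PyTorch optimized under a joint objective that flips the prediction when a pre-defined additive trigger pattern is present and preserves it otherwise. The function name conditional_attack_step, the additive-trigger condition, and all hyperparameters (eps, alpha, step count) are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F


def conditional_attack_step(model, x, y, delta, trigger, eps, alpha):
    """One PGD-style step of a hypothetical hidden conditional attack.

    The perturbation ``delta`` is optimized to be adversarial only when the
    attacker's condition (here, an additive trigger pattern) is present in
    the input, and to leave the prediction intact otherwise. Illustrative
    objective; the paper's exact formulation may differ.
    """
    delta = delta.clone().detach().requires_grad_(True)

    # Condition satisfied: maximize loss so the model misclassifies.
    logits_cond = model(torch.clamp(x + trigger + delta, 0.0, 1.0))
    attack_term = -F.cross_entropy(logits_cond, y)

    # Condition absent: minimize loss so the prediction stays correct (stealth).
    logits_plain = model(torch.clamp(x + delta, 0.0, 1.0))
    stealth_term = F.cross_entropy(logits_plain, y)

    (attack_term + stealth_term).backward()

    # Signed-gradient descent step, projected onto the L_inf ball of radius eps.
    with torch.no_grad():
        delta = (delta - alpha * delta.grad.sign()).clamp(-eps, eps)
    return delta.detach()


if __name__ == "__main__":
    # Toy setup (untrained CNN, random CIFAR-10-sized data) so the sketch runs.
    model = nn.Sequential(
        nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
    )
    model.eval()
    x = torch.rand(4, 3, 32, 32)                              # images in [0, 1]
    y = torch.randint(0, 10, (4,))
    trigger = torch.randn_like(x).clamp(-8 / 255, 8 / 255)    # hypothetical condition
    delta = torch.zeros_like(x)
    for _ in range(40):
        delta = conditional_attack_step(model, x, y, delta, trigger,
                                        eps=8 / 255, alpha=2 / 255)

The stealth term is what makes such an example "hidden" in the sense the abstract describes: without the trigger, the perturbed image behaves like a benign input, which corresponds to the claimed stealthiness and controllability of the attack.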
Appears in Collections
ETC > 1. Journal Articles



Related Researcher

Byun, Junyoung
Graduate School (Department of Statistics and Data Science)
