Friend-safe evasion attack: An adversarial example that is correctly recognized by a friendly classifier
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Kwon, Hyun | - |
dc.contributor.author | Kim, Yongchul | - |
dc.contributor.author | Park, Ki-Woong | - |
dc.contributor.author | Yoon, Hyunsoo | - |
dc.contributor.author | Choi, Daeseon | - |
dc.date.available | 2020-11-02T03:40:15Z | - |
dc.date.created | 2020-11-02 | - |
dc.date.issued | 2018-09 | - |
dc.identifier.issn | 0167-4048 | - |
dc.identifier.uri | http://scholarworks.bwise.kr/ssu/handle/2018.sw.ssu/39717 | - |
dc.description.abstract | Deep neural networks (DNNs) have been applied in several useful services, such as image recognition, intrusion detection, and pattern analysis of machine learning tasks. Recently proposed adversarial examples (slightly modified data that lead to incorrect classification) are a severe threat to the security of DNNs. In some situations, however, an adversarial example might be useful, such as when deceiving an enemy classifier on the battlefield. In such a scenario, it is necessary that a friendly classifier not be deceived. In this paper, we propose a friend-safe adversarial example, meaning that the friendly machine can classify the adversarial example correctly. To produce such examples, a transformation is carried out to minimize the probability of incorrect classification by the friend and that of correct classification by the adversary. We suggest two configurations for the scheme: targeted and untargeted class attacks. We performed experiments with this scheme using the MNIST and CIFAR10 datasets. Our proposed method shows a 100% attack success rate and 100% friend accuracy with only a small distortion: 2.18 and 1.54 for the two respective MNIST configurations, and 49.02 and 27.61 for the two respective CIFAR10 configurations. Additionally, we propose a new covert channel scheme and a mixed battlefield application for consideration in further applications. (C) 2018 Elsevier Ltd. All rights reserved. | - |
dc.language | English | - |
dc.language.iso | en | - |
dc.publisher | ELSEVIER ADVANCED TECHNOLOGY | - |
dc.relation.isPartOf | COMPUTERS & SECURITY | - |
dc.title | Friend-safe evasion attack: An adversarial example that is correctly recognized by a friendly classifier | - |
dc.type | Article | - |
dc.identifier.doi | 10.1016/j.cose.2018.07.015 | - |
dc.type.rims | ART | - |
dc.identifier.bibliographicCitation | COMPUTERS & SECURITY, v.78, pp.380 - 397 | - |
dc.description.journalClass | 1 | - |
dc.identifier.wosid | 000447358700026 | - |
dc.citation.endPage | 397 | - |
dc.citation.startPage | 380 | - |
dc.citation.title | COMPUTERS & SECURITY | - |
dc.citation.volume | 78 | - |
dc.contributor.affiliatedAuthor | Choi, Daeseon | - |
dc.type.docType | Review | - |
dc.description.isOpenAccess | N | - |
dc.subject.keywordAuthor | Deep Neural Network | - |
dc.subject.keywordAuthor | Evasion Attack | - |
dc.subject.keywordAuthor | Adversarial Example | - |
dc.subject.keywordAuthor | Covert Channel | - |
dc.subject.keywordAuthor | Machine Learning | - |
dc.subject.keywordPlus | DEEP NEURAL-NETWORKS | - |
dc.subject.keywordPlus | SECURITY | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Information Systems | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
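The abstract above describes crafting an example by jointly minimizing the friend's loss on the true class and the enemy's loss on a target class, while keeping the distortion small. A minimal sketch of that idea, using hypothetical toy linear softmax classifiers in place of the paper's deep networks (all names, dimensions, and hyperparameters here are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy linear classifiers standing in for the friend and enemy DNNs.
D, C = 20, 3  # input dimension, number of classes
W_friend = rng.normal(size=(C, D))
W_enemy = rng.normal(size=(C, D))

def ce_grad(W, x, cls):
    """Gradient of cross-entropy loss toward class `cls` w.r.t. the input x."""
    p = softmax(W @ x)
    onehot = np.eye(C)[cls]
    return W.T @ (p - onehot)

def friend_safe_example(x, y_true, y_target, steps=500, lr=0.05, lam=0.01):
    """Gradient descent on a joint objective:
    keep the friend's prediction at y_true, push the enemy toward
    y_target, and penalize the squared distortion (weight lam)."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        g_friend = ce_grad(W_friend, x + delta, y_true)
        g_enemy = ce_grad(W_enemy, x + delta, y_target)
        delta -= lr * (g_friend + g_enemy + 2.0 * lam * delta)
    return x + delta

x = rng.normal(size=D)
y_true = int(np.argmax(W_friend @ x))   # friend's original prediction
y_target = (y_true + 1) % C             # targeted class for the enemy
x_adv = friend_safe_example(x, y_true, y_target)

# Check both goals: friend's prediction is unchanged, enemy's is redirected.
print(int(np.argmax(W_friend @ x_adv)), y_true)
print(int(np.argmax(W_enemy @ x_adv)), y_target)
```

This corresponds to the paper's targeted configuration; the untargeted variant would instead push the enemy away from the true class rather than toward a chosen target.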