Detailed Information

Cited 9 times in Web of Science, 0 times in Scopus

Friend-safe evasion attack: An adversarial example that is correctly recognized by a friendly classifier

Full metadata record
DC Field: Value
dc.contributor.author: Kwon, Hyun
dc.contributor.author: Kim, Yongchul
dc.contributor.author: Park, Ki-Woong
dc.contributor.author: Yoon, Hyunsoo
dc.contributor.author: Choi, Daeseon
dc.date.available: 2020-11-02T03:40:15Z
dc.date.created: 2020-11-02
dc.date.issued: 2018-09
dc.identifier.issn: 0167-4048
dc.identifier.uri: http://scholarworks.bwise.kr/ssu/handle/2018.sw.ssu/39717
dc.description.abstract: Deep neural networks (DNNs) have been applied in several useful services, such as image recognition, intrusion detection, and pattern analysis of machine learning tasks. Recently proposed adversarial examples, slightly modified data that lead to incorrect classification, are a severe threat to the security of DNNs. In some situations, however, an adversarial example might be useful, such as when deceiving an enemy classifier on the battlefield. In such a scenario, it is necessary that a friendly classifier not be deceived. In this paper, we propose a friend-safe adversarial example, meaning that the friendly machine can classify the adversarial example correctly. To produce such examples, a transformation is carried out to minimize the probability of incorrect classification by the friend and that of correct classification by the adversary. We suggest two configurations for the scheme: targeted and untargeted class attacks. We performed experiments with this scheme using the MNIST and CIFAR10 datasets. Our proposed method shows a 100% attack success rate and 100% friend accuracy with only a small distortion: 2.18 and 1.54 for the two respective MNIST configurations, and 49.02 and 27.61 for the two respective CIFAR10 configurations. Additionally, we propose a new covert channel scheme and a mixed battlefield application for consideration in further applications. (C) 2018 Elsevier Ltd. All rights reserved.
dc.language: English
dc.language.iso: en
dc.publisher: ELSEVIER ADVANCED TECHNOLOGY
dc.relation.isPartOf: COMPUTERS & SECURITY
dc.title: Friend-safe evasion attack: An adversarial example that is correctly recognized by a friendly classifier
dc.type: Article
dc.identifier.doi: 10.1016/j.cose.2018.07.015
dc.type.rims: ART
dc.identifier.bibliographicCitation: COMPUTERS & SECURITY, v.78, pp.380 - 397
dc.description.journalClass: 1
dc.identifier.wosid: 000447358700026
dc.citation.endPage: 397
dc.citation.startPage: 380
dc.citation.title: COMPUTERS & SECURITY
dc.citation.volume: 78
dc.contributor.affiliatedAuthor: Choi, Daeseon
dc.type.docType: Review
dc.description.isOpenAccess: N
dc.subject.keywordAuthor: Deep Neural Network
dc.subject.keywordAuthor: Evasion Attack
dc.subject.keywordAuthor: Adversarial Example
dc.subject.keywordAuthor: Covert Channel
dc.subject.keywordAuthor: Machine Learning
dc.subject.keywordPlus: DEEP NEURAL-NETWORKS
dc.subject.keywordPlus: SECURITY
dc.relation.journalResearchArea: Computer Science
dc.relation.journalWebOfScienceCategory: Computer Science, Information Systems
dc.description.journalRegisteredClass: scie
dc.description.journalRegisteredClass: scopus
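
The abstract above describes producing the adversarial example by jointly minimizing the probability that the friend misclassifies it and the probability that the adversary classifies it correctly, while keeping the distortion small. A minimal sketch of such a combined objective, assuming a PyTorch setup; `friend_model`, `enemy_model`, and the weighting constant `c` are illustrative names, not taken from the paper:

```python
# Minimal sketch of the friend-safe objective described in the abstract.
# Assumes PyTorch; all names here are illustrative, not from the paper.
import torch
import torch.nn.functional as F

def friend_safe_loss(x_adv, x_orig, y_true, y_target,
                     friend_model, enemy_model, c=1.0):
    """Targeted variant: the friend should keep predicting the true
    class y_true while the enemy is pushed toward y_target."""
    # Friend must still classify correctly -> minimize its
    # cross-entropy on the true label.
    loss_friend = F.cross_entropy(friend_model(x_adv), y_true)
    # Enemy should be fooled into the attacker-chosen class ->
    # minimize its cross-entropy on the target label.
    loss_enemy = F.cross_entropy(enemy_model(x_adv), y_target)
    # Keep the perturbation small (the abstract reports small
    # distortion values on MNIST and CIFAR10).
    distortion = torch.sum((x_adv - x_orig) ** 2)
    return distortion + c * (loss_friend + loss_enemy)
```

Gradient descent on `x_adv` under this loss would realize the targeted configuration; an untargeted variant could instead maximize the enemy's loss on `y_true`.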
Files in This Item
There are no files associated with this item.
Appears in Collections
College of Information Technology > School of Software > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Choi, Daeseon
College of Information Technology (School of Software)
