Associative Discussion Among Generating Adversarial Samples Using Evolutionary Algorithm and Samples Generated Using GAN (Open Access)
- Authors
- Pavate, Aruna; Bansode, Rajesh; Srinivasu, Parvathaneni Naga; Shafi, Jana; Choi, Jaeyoung; Ijaz, Muhammad Fazal
- Issue Date
- Dec-2023
- Publisher
- IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
- Keywords
- Adversarial examples; attacks; differential evolutionary algorithm; deep neural networks; generative adversary networks; optimization methods
- Citation
- IEEE ACCESS, v.11, pp 143757 - 143770
- Pages
- 14
- Journal Title
- IEEE ACCESS
- Volume
- 11
- Start Page
- 143757
- End Page
- 143770
- URI
- https://scholarworks.bwise.kr/gachon/handle/2020.sw.gachon/90077
- DOI
- 10.1109/ACCESS.2023.3343754
- ISSN
- 2169-3536
- Abstract
- The remarkable accomplishments of deep neural networks (DNNs) have led to their widespread adoption in various contexts, including safety-critical applications. Many strategies have been devised to generate adversarial samples against DNNs, raising questions about the security of these models. Adding small-magnitude noise to input samples during training or testing can mislead a DNN into producing results different from the correct ones. DNNs are sensitive to adversarial samples that are imperceptible to the human eye. Currently, gradient-based approaches are used to generate adversarial samples. Gradient-based methods require internal details of the model, such as its parameters, model type, and so on. In practice, these details are usually unavailable, and calculating the gradient of a non-differentiable model is impossible. In this work, we propose a novel DESapsDE framework based on evolutionary algorithms that generates adversarial samples using only the predicted label probabilities. We also discuss various Generative Adversarial Network (GAN) models, such as ACGAN, DCGAN, and SAGAN. It has been observed that GANs differ from adversarial sample generation methods and can be applied as defense mechanisms. The proposed method reduced model confidence to 13.09% for the ResNet50 model, 30.34% for the WideResNet model, and 23.1% for the DenseNet model, with an FID score of 16.45. The proposed model differs from the GAN models; it can be applied to attack network models as a preventive measure to make them robust.
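The score-based attack described in the abstract, searching for a perturbation with differential evolution using only the model's output probabilities, can be illustrated with a minimal sketch. This is not the paper's DESapsDE implementation: the toy linear-softmax "model", the DE/rand/1 mutation without crossover, and all parameter values below are illustrative assumptions; a real attack would query a trained DNN instead.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 3))  # toy stand-in: 10-dim "image", 3 classes

def predict_proba(x):
    """Black-box oracle: returns class probabilities only (no gradients)."""
    logits = W.T @ x
    e = np.exp(logits - logits.max())
    return e / e.sum()

def de_attack(x, true_label, pop_size=20, gens=50, F=0.5, eps=0.5):
    """Differential evolution search for a bounded perturbation that
    lowers the model's confidence in the true label (score-based attack)."""
    d = x.size
    pop = rng.uniform(-eps, eps, size=(pop_size, d))
    pop[0] = 0.0  # include the unperturbed input as a baseline member
    fitness = np.array([predict_proba(x + p)[true_label] for p in pop])
    for _ in range(gens):
        for i in range(pop_size):
            # DE/rand/1 mutation (crossover omitted for brevity)
            a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
            trial = np.clip(a + F * (b - c), -eps, eps)
            f_trial = predict_proba(x + trial)[true_label]
            if f_trial < fitness[i]:  # greedy selection: keep lower confidence
                pop[i], fitness[i] = trial, f_trial
    best = pop[fitness.argmin()]
    return best, fitness.min()

x = rng.normal(size=10)
label = int(predict_proba(x).argmax())
pert, conf = de_attack(x, label)
print(f"clean confidence: {predict_proba(x)[label]:.3f}, after attack: {conf:.3f}")
```

Because selection only ever replaces a member with a strictly lower-confidence trial, the attack never reports a confidence above the clean prediction; this mirrors the paper's goal of driving down model confidence without any gradient access.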
- Appears in Collections
- ETC > 1. Journal Articles