Alerting the Impact of Adversarial Attacks and How to Detect it Effectively via Machine Learning Approach: With Financial and ESG Data
- Authors
- Lee, Ook; Ha, Hyodong; Choi, Hayoung; Joo, Hanseon; Cheon, Minjong
- Issue Date
- Aug-2022
- Publisher
- Springer Science and Business Media Deutschland GmbH
- Keywords
- ESG; Gaussian noise; Light gradient boosting machine; Local outlier factors; Machine learning
- Citation
- Lecture Notes in Networks and Systems, v.461, pp.713 - 724
- Indexed
- SCOPUS
- Journal Title
- Lecture Notes in Networks and Systems
- Volume
- 461
- Start Page
- 713
- End Page
- 724
- URI
- https://scholarworks.bwise.kr/hanyang/handle/2021.sw.hanyang/172572
- DOI
- 10.1007/978-981-19-2130-8_55
- ISSN
- 2367-3370
- Abstract
- ESG is short for "environmental, social, and governance," and it is widely used as an indicator for investment. Investment firms have asserted that they would incorporate the ESG indicator into their portfolios, and various AI approaches have therefore been applied to analyze the relationship between ESG and company performance. However, adversarial attacks on AI models have become prominent, and they can seriously harm financial performance. This research aims to alert readers to the danger of such attacks and to show how to detect anomalous data. The experiment involves two stages, focusing on classification performance and on detecting noisy data, respectively. The first stage revealed that classification accuracy on the noisy dataset could drop by almost 15% compared to the ordinary dataset. In the second stage, the local outlier factor and isolation forest algorithms were applied to detect the noisy data, achieving detection rates of 95.156% and 84.1%, respectively. These experiments show that even tiny amounts of noise can influence the result significantly, and they suggest a way to detect such noise. The limitation of this research is that it conducts only uncomplicated binary classification and does not propose a way to defend against the attack or to filter the noise. Further research should apply these algorithms to more sophisticated classification or regression tasks. Nevertheless, this research is worthwhile in that it alerts readers to the risk of adversarial attacks on AI models and suggests that the proposed model may be applicable to other fields of research.
- Appears in
Collections - College of Engineering (Seoul) > Department of Information Systems (Seoul) > 1. Journal Articles
Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.