Alerting the Impact of Adversarial Attacks and How to Detect it Effectively via Machine Learning Approach: With Financial and ESG Data
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Lee, Ook | - |
dc.contributor.author | Ha, Hyodong | - |
dc.contributor.author | Choi, Hayoung | - |
dc.contributor.author | Joo, Hanseon | - |
dc.contributor.author | Cheon, Minjong | - |
dc.date.accessioned | 2022-10-25T07:41:46Z | - |
dc.date.available | 2022-10-25T07:41:46Z | - |
dc.date.created | 2022-10-06 | - |
dc.date.issued | 2022-08 | - |
dc.identifier.issn | 2367-3370 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/hanyang/handle/2021.sw.hanyang/172572 | - |
dc.description.abstract | ESG is short for “environmental, social, and governance,” and it has been widely used as an indicator for investment. Investment firms have asserted that they would incorporate the ESG indicator into their portfolios, and various AI approaches have accordingly been applied to analyze the relationship between ESG and company performance. However, adversarial attacks on AI models have become prominent in recent years and can severely affect financial performance. This research aims to alert readers to the danger of such attacks and to show how to detect anomalous data. The experiment involves two stages, focusing on classification performance and on detecting noise data, respectively. The first stage revealed that classification accuracy on the noise dataset could drop by almost 15% compared to the ordinary dataset. In the second stage, the local outlier factor and isolation forest algorithms were applied to detect the noise data, achieving detection rates of 95.156% and 84.1%, respectively. These experiments show that even tiny amounts of noise can influence the result significantly, and they suggest a way to detect such noise. The limitation of this research is that it only conducts uncomplicated binary classification and does not propose a way to defend against the attack or filter out the noise. Further research should apply these algorithms to more sophisticated classification or regression tasks. Nevertheless, this research is worthwhile in that it alerts readers to the risk of adversarial attacks on AI models and suggests that the proposed model may be applicable to other fields of research. | - |
dc.language | English | - |
dc.language.iso | en | - |
dc.publisher | Springer Science and Business Media Deutschland GmbH | - |
dc.title | Alerting the Impact of Adversarial Attacks and How to Detect it Effectively via Machine Learning Approach: With Financial and ESG Data | - |
dc.type | Article | - |
dc.contributor.affiliatedAuthor | Lee, Ook | - |
dc.identifier.doi | 10.1007/978-981-19-2130-8_55 | - |
dc.identifier.scopusid | 2-s2.0-85136935486 | - |
dc.identifier.bibliographicCitation | Lecture Notes in Networks and Systems, v.461, pp.713 - 724 | - |
dc.relation.isPartOf | Lecture Notes in Networks and Systems | - |
dc.citation.title | Lecture Notes in Networks and Systems | - |
dc.citation.volume | 461 | - |
dc.citation.startPage | 713 | - |
dc.citation.endPage | 724 | - |
dc.type.rims | ART | - |
dc.type.docType | Conference Paper | - |
dc.description.journalClass | 1 | - |
dc.description.isOpenAccess | N | - |
dc.description.journalRegisteredClass | scopus | - |
dc.subject.keywordAuthor | ESG | - |
dc.subject.keywordAuthor | Gaussian noise | - |
dc.subject.keywordAuthor | Light gradient boosting machine | - |
dc.subject.keywordAuthor | Local outlier factors | - |
dc.subject.keywordAuthor | Machine learning | - |
dc.identifier.url | https://link.springer.com/chapter/10.1007/978-981-19-2130-8_55 | - |
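The two-stage detection approach described in the abstract (Gaussian-noise perturbation, then anomaly detection with local outlier factor and isolation forest) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the synthetic data, feature dimensions, noise scale, and hyperparameters are all assumptions.

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic stand-in for financial/ESG feature rows (assumption: real data differs).
clean = rng.normal(loc=0.0, scale=1.0, size=(900, 8))

# Adversarial perturbation modeled as additive Gaussian noise on a subset of rows.
noisy = rng.normal(loc=0.0, scale=1.0, size=(100, 8)) + rng.normal(0.0, 3.0, size=(100, 8))

X = np.vstack([clean, noisy])
y_true = np.r_[np.ones(900), -np.ones(100)]  # -1 marks the perturbed rows

# Local outlier factor: flags rows whose local density deviates from their neighbors'.
lof_pred = LocalOutlierFactor(n_neighbors=20, contamination=0.1).fit_predict(X)

# Isolation forest: flags rows that random trees isolate in few splits.
iso_pred = IsolationForest(contamination=0.1, random_state=0).fit_predict(X)

for name, pred in [("LOF", lof_pred), ("IsolationForest", iso_pred)]:
    recall = (pred[y_true == -1] == -1).mean()
    print(f"{name}: detected {recall:.0%} of perturbed rows")
```

Both estimators return `+1` for inliers and `-1` for outliers, so the fraction of perturbed rows labeled `-1` gives the detection rate that the paper reports for each algorithm.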
Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.