OISE: Optimized Input Sampling Explanation with a Saliency Map Based on the Black-Box Model (Open Access)
- Authors
- Wang, Zhan; Joe, Inwhee
- Issue Date
- May-2023
- Publisher
- MDPI
- Keywords
- black-box model; explanation; importance; mask; saliency map; XAI
- Citation
- Applied Sciences (Switzerland), v.13, no.10, pp.1 - 14
- Indexed
- SCIE; SCOPUS
- Journal Title
- Applied Sciences (Switzerland)
- Volume
- 13
- Number
- 10
- Start Page
- 1
- End Page
- 14
- URI
- https://scholarworks.bwise.kr/hanyang/handle/2021.sw.hanyang/191939
- DOI
- 10.3390/app13105886
- ISSN
- 2076-3417
- Abstract
- With the development of artificial intelligence technology, machine learning models are becoming more complex and accurate, but their explainability is decreasing: much of the decision process remains unclear and difficult to explain to users. Explainable Artificial Intelligence (XAI) techniques are therefore used to make models transparent and explainable. For images, explaining how a model recognizes content is one of the major contributions of XAI to image recognition. Visual explanations of classification decisions within an image are usually expressed as saliency, indicating the importance of each pixel. Some approaches achieve explainability by modifying and integrating white-box models, which restricts them to specific network architectures. In contrast to such white-box approaches, which use weights or other internal network states to estimate pixel saliency, we propose the Optimized Input Sampling Explanation (OISE) technique, which treats the model as a black box. OISE uses masks to generate saliency maps that reflect the importance of each pixel to the model's predictions, inferring that importance empirically from the black-box model's outputs. We evaluate our method with pixel deletion/insertion metrics, and extensive experiments on several standard datasets show that OISE achieves better visual performance and fairness in explaining the decision process than competing methods. The approach makes the decision process clearly visible and the model transparent and explainable to users.
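The abstract describes the general recipe (randomly mask the input, query the black-box model, and accumulate a saliency map weighted by the model's scores) and a deletion-style evaluation, but not OISE's exact mask-optimization procedure. The sketch below is therefore a generic RISE-style illustration of input-sampling saliency, not the authors' implementation; the grid size, mask probability, and the toy `model` callable are all assumptions for illustration.

```python
import numpy as np

def input_sampling_saliency(image, model, n_masks=500, grid=8, p=0.5, seed=0):
    """Estimate a saliency map for a black-box `model` by random input sampling.
    `model` is any callable that maps an image to a scalar class score; no
    weights or gradients are used, only the model's outputs on masked inputs."""
    rng = np.random.default_rng(seed)
    H, W = image.shape[:2]
    sal = np.zeros((H, W))
    cell_h, cell_w = int(np.ceil(H / grid)), int(np.ceil(W / grid))
    for _ in range(n_masks):
        # Coarse binary grid, upsampled to image size (nearest-neighbour).
        cells = (rng.random((grid, grid)) < p).astype(float)
        mask = np.kron(cells, np.ones((cell_h, cell_w)))[:H, :W]
        masked = image * mask[..., None] if image.ndim == 3 else image * mask
        # Pixels that were visible get credit proportional to the score.
        sal += model(masked) * mask
    return sal / (n_masks * p)

def deletion_auc(image, model, saliency, steps=10):
    """Deletion metric sketch: zero out the most-salient pixels first and
    average the model score along the way; a faster drop (lower value)
    indicates a more faithful saliency map."""
    order = np.argsort(saliency.ravel())[::-1]
    img = image.astype(float).copy()
    flat = img.ravel()  # view into img, so edits propagate
    scores = [model(img)]
    chunk = max(1, len(order) // steps)
    for i in range(steps):
        flat[order[i * chunk:(i + 1) * chunk]] = 0.0
        scores.append(model(img))
    return float(np.mean(scores))
```

As a quick sanity check, a toy model whose score is the mean of one image region should yield a saliency map concentrated on that region, and deleting by that map should lower the deletion value faster than deleting by its inverse.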
- Files in This Item
- Appears in
Collections - College of Engineering (Seoul) > School of Computer Software (Seoul) > 1. Journal Articles
Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.