Can We Trust the Actionable Guidance from Explainable AI Techniques in Defect Prediction?
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Scott Uk-Jin Lee | - |
dc.date.accessioned | 2025-04-02T08:00:53Z | - |
dc.date.available | 2025-04-02T08:00:53Z | - |
dc.date.issued | 2025-03 | - |
dc.identifier.issn | 1534-5351 | - |
dc.identifier.uri | https://scholarworks.bwise.kr/erica/handle/2021.sw.erica/123684 | - |
dc.description.abstract | Despite advances in high-performance Software Defect Prediction (SDP) models, practitioners remain hesitant to adopt them due to opaque decision-making and a lack of actionable insights. To address these limitations, recent research has applied various explainable AI (XAI) techniques to provide actionable guidance for SDP results, but the trustworthiness of such guidance for practitioners has not been sufficiently investigated. Practitioners may question the feasibility of implementing the proposed changes, and if these changes fail to resolve predicted defects or prove inaccurate, their trust in the guidance may diminish. In this study, we empirically evaluate the effectiveness of current XAI approaches for SDP across 32 releases of 9 large-scale projects, focusing on whether the guidance meets practitioners’ expectations. Our findings reveal that the actionable guidance (i) does not guarantee that predicted defects are resolved; (ii) fails to pinpoint the modifications required to resolve predicted defects; and (iii) deviates from the typical code changes practitioners make in their projects. These limitations indicate that the guidance is not yet reliable enough for developers to justify investing their limited debugging resources. We suggest that future XAI research for SDP incorporate feedback loops that offer clear rewards for practitioners’ efforts, and we propose a potential alternative approach utilizing counterfactual explanations (a minimal sketch of this idea follows the table below). | - |
dc.format.extent | 7 | - |
dc.language | English | - |
dc.language.iso | ENG | - |
dc.publisher | IEEE | - |
dc.title | Can We Trust the Actionable Guidance from Explainable AI Techniques in Defect Prediction? | - |
dc.type | Article | - |
dc.identifier.doi | 10.1109/SANER64311.2025.00051 | - |
dc.identifier.scopusid | 2-s2.0-105007294522 | - |
dc.identifier.wosid | 001506888600043 | - |
dc.identifier.bibliographicCitation | IEEE International Conference on Software Analysis, Evolution and Reengineering, pp. 476-482 | - |
dc.citation.title | IEEE International Conference on Software Analysis, Evolution and Reengineering | - |
dc.citation.startPage | 476 | - |
dc.citation.endPage | 482 | - |
dc.type.docType | Proceedings Paper | - |
dc.description.isOpenAccess | N | - |
dc.description.journalRegisteredClass | scie | - |
dc.description.journalRegisteredClass | scopus | - |
dc.relation.journalResearchArea | Computer Science | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Interdisciplinary Applications | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Software Engineering | - |
dc.relation.journalWebOfScienceCategory | Computer Science, Theory & Methods | - |
dc.subject.keywordAuthor | Actionable Analytics | - |
dc.subject.keywordAuthor | Explainable AI | - |
dc.subject.keywordAuthor | Model-Agnostic Techniques | - |
dc.subject.keywordAuthor | Software Defect Prediction | - |
dc.identifier.url | https://conf.researchr.org/details/saner-2025/saner-2025-papers/20/Can-We-Trust-the-Actionable-Guidance-from-Explainable-AI-Techniques-in-Defect-Predict | - |
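
To make the kind of guidance discussed in the abstract concrete, the sketch below is a minimal, hypothetical illustration, not the paper's implementation or data. It trains a toy defect classifier on invented file-level metrics and runs a greedy counterfactual-style search that suggests which metric values to lower until the model no longer predicts a defect; the feature names, the synthetic labeling rule, and the `counterfactual_guidance` helper are all assumptions made for this example.

```python
# Hypothetical sketch of counterfactual-style actionable guidance for SDP.
# Everything here (metrics, data, labeling rule) is synthetic illustration;
# it is NOT the paper's method or dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy stand-in for file-level SDP features: three assumed code metrics.
FEATURES = ["loc", "cyclomatic_complexity", "num_churned_lines"]
X = rng.uniform(0.0, 1.0, size=(500, 3))
y = (X[:, 1] + X[:, 2] > 1.1).astype(int)  # synthetic "defective" rule

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def counterfactual_guidance(x, step=0.05, max_iters=40):
    """Greedily lower one metric per step (the one that most reduces the
    predicted defect probability) until the prediction flips to clean,
    then report the suggested metric changes."""
    x_cf = x.copy()
    for _ in range(max_iters):
        if model.predict([x_cf])[0] == 0:  # prediction flipped: stop
            break
        trials = []
        for j in range(len(x_cf)):
            trial = x_cf.copy()
            trial[j] = max(0.0, trial[j] - step)  # metrics can only shrink
            trials.append((model.predict_proba([trial])[0, 1], trial))
        x_cf = min(trials, key=lambda t: t[0])[1]
    return {f: (round(float(o), 2), round(float(n), 2))
            for f, o, n in zip(FEATURES, x, x_cf) if o != n}

defective = X[model.predict(X) == 1][0]  # one predicted-defective module
print(counterfactual_guidance(defective))
# e.g. {'cyclomatic_complexity': (0.86, 0.41)} -> "reduce this metric to ~0.41"
```

Note the trust gap the paper highlights: a change like this flips the model's prediction, but nothing guarantees it resolves the actual defect or resembles the edits developers typically make, which is exactly what the study's three findings question.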