Detailed Information


Can We Trust the Actionable Guidance from Explainable AI Techniques in Defect Prediction?

Full metadata record
DC Field: Value

dc.contributor.author: Scott Uk-Jin Lee
dc.date.accessioned: 2025-04-02T08:00:53Z
dc.date.available: 2025-04-02T08:00:53Z
dc.date.issued: 2025-03
dc.identifier.issn: 1534-5351
dc.identifier.uri: https://scholarworks.bwise.kr/erica/handle/2021.sw.erica/123684
dc.description.abstract: Despite advances in high-performance Software Defect Prediction (SDP) models, practitioners remain hesitant to adopt them due to opaque decision-making and a lack of actionable insights. Recent research has applied various explainable AI (XAI) techniques to provide explainable and actionable guidance for SDP results to address these limitations, but the trustworthiness of such guidance for practitioners has not been sufficiently investigated. Practitioners may question the feasibility of implementing the proposed changes, and if these changes fail to resolve predicted defects or prove inaccurate, their trust in the guidance may diminish. In this study, we empirically evaluate the effectiveness of current XAI approaches for SDP across 32 releases of 9 large-scale projects, focusing on whether the guidance meets practitioners’ expectations. Our findings reveal that their actionable guidance (i) does not guarantee that predicted defects are resolved; (ii) fails to pinpoint modifications required to resolve predicted defects; and (iii) deviates from the typical code changes practitioners make in their projects. These limitations indicate that the guidance is not yet reliable enough for developers to justify investing their limited debugging resources. We suggest that future XAI research for SDP incorporate feedback loops that offer clear rewards for practitioners’ efforts, and propose a potential alternative approach utilizing counterfactual explanations.
dc.format.extent: 7
dc.language: English
dc.language.iso: ENG
dc.publisher: IEEE
dc.title: Can We Trust the Actionable Guidance from Explainable AI Techniques in Defect Prediction?
dc.type: Article
dc.identifier.doi: 10.1109/SANER64311.2025.00051
dc.identifier.scopusid: 2-s2.0-105007294522
dc.identifier.wosid: 001506888600043
dc.identifier.bibliographicCitation: IEEE International Conference on Software Analysis, Evolution and Reengineering, pp. 476-482
dc.citation.title: IEEE International Conference on Software Analysis, Evolution and Reengineering
dc.citation.startPage: 476
dc.citation.endPage: 482
dc.type.docType: Proceedings Paper
dc.description.isOpenAccess: N
dc.description.journalRegisteredClass: scie
dc.description.journalRegisteredClass: scopus
dc.relation.journalResearchArea: Computer Science
dc.relation.journalWebOfScienceCategory: Computer Science, Interdisciplinary Applications
dc.relation.journalWebOfScienceCategory: Computer Science, Software Engineering
dc.relation.journalWebOfScienceCategory: Computer Science, Theory & Methods
dc.subject.keywordAuthor: Actionable Analytics
dc.subject.keywordAuthor: Explainable AI
dc.subject.keywordAuthor: Model-Agnostic Techniques
dc.subject.keywordAuthor: Software Defect Prediction
dc.identifier.url: https://conf.researchr.org/details/saner-2025/saner-2025-papers/20/Can-We-Trust-the-Actionable-Guidance-from-Explainable-AI-Techniques-in-Defect-Predict
Appears in Collections: COLLEGE OF COMPUTING > ERICA School of Computer Science > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher


Lee, Scott Uk Jin
ERICA College of Software Convergence (ERICA School of Computer Science)
