Detailed Information


A novel framework of multiclass skin lesion recognition from dermoscopic images using deep learning and explainable AI (open access)

Authors
Ahmad, Naveed; Shah, Jamal Hussain; Khan, Muhammad Attique; Baili, Jamel; Ansari, Ghulam Jillani; Tariq, Usman; Kim, Ye Jin; Cha, Jae-Hyuk
Issue Date
Jun-2023
Publisher
Frontiers Media S.A.
Keywords
deep features; dermoscopic images; explainable AI; feature selection; skin cancer
Citation
Frontiers in Oncology, v.13, pp.1 - 17
Indexed
SCIE
SCOPUS
Journal Title
Frontiers in Oncology
Volume
13
Start Page
1
End Page
17
URI
https://scholarworks.bwise.kr/hanyang/handle/2021.sw.hanyang/187513
DOI
10.3389/fonc.2023.1151257
ISSN
2234-943X
Abstract
Skin cancer is a serious disease that affects people all over the world. Melanoma is an aggressive form of skin cancer, and early detection can significantly reduce mortality. In the United States, approximately 97,610 new cases of melanoma are expected to be diagnosed in 2023. However, challenges such as lesion irregularities, low-contrast lesions, intraclass color similarity, redundant features, and imbalanced datasets make it extremely difficult to improve recognition accuracy with computerized techniques. This work presents a new framework for skin lesion recognition based on data augmentation, deep learning, and explainable artificial intelligence. In the proposed framework, data augmentation is first performed to increase the dataset size, and two pretrained deep learning models, Xception and ShuffleNet, are then employed. Both models are fine-tuned and trained using deep transfer learning, and deep features are extracted from their global average pooling layers. Because analysis of this step shows that each model alone misses some important information, the two feature sets are fused. Since fusion increases the computational time, an improved Butterfly Optimization Algorithm is developed to select only the best features, which are then classified using machine learning classifiers. In addition, a Grad-CAM-based visualization is performed to analyze the important regions in the image. On two publicly available datasets, ISIC2018 and HAM10000, the framework obtains improved accuracies of 99.3% and 91.5%, respectively. Compared with state-of-the-art methods, the proposed framework achieves higher accuracy and lower computational time.
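
The pipeline described in the abstract (deep features from two backbones via global average pooling, serial fusion, feature selection, and a classical classifier) can be roughly illustrated as below. This is a minimal sketch, not the authors' implementation: a Keras setup is assumed, MobileNetV2 stands in for ShuffleNet (which keras.applications does not provide), and the paper's improved Butterfly Optimization feature-selection step is omitted in favor of a plain linear SVM.

    # Illustrative sketch only: dual-backbone deep feature extraction with
    # global average pooling, concatenation-based fusion, and an SVM classifier.
    # MobileNetV2 is a stand-in for ShuffleNet; the improved Butterfly
    # Optimization feature selection from the paper is not reproduced here.
    import numpy as np
    import tensorflow as tf
    from sklearn.svm import SVC

    # Pretrained backbones; include_top=False with pooling="avg" returns the
    # global-average-pooled feature vector directly (downloads ImageNet weights).
    xception = tf.keras.applications.Xception(
        weights="imagenet", include_top=False, pooling="avg",
        input_shape=(299, 299, 3))
    shufflenet_standin = tf.keras.applications.MobileNetV2(
        weights="imagenet", include_top=False, pooling="avg",
        input_shape=(224, 224, 3))

    def fused_features(images_299, images_224):
        """Extract deep features from both backbones and concatenate them."""
        f1 = xception.predict(
            tf.keras.applications.xception.preprocess_input(images_299), verbose=0)
        f2 = shufflenet_standin.predict(
            tf.keras.applications.mobilenet_v2.preprocess_input(images_224), verbose=0)
        return np.concatenate([f1, f2], axis=1)  # serial (concatenation) fusion

    # Toy example with random stand-in "dermoscopic" images and 7 lesion classes
    # (as in HAM10000); a real pipeline would load and augment the datasets.
    rng = np.random.default_rng(0)
    x299 = rng.uniform(0, 255, (8, 299, 299, 3)).astype("float32")
    x224 = rng.uniform(0, 255, (8, 224, 224, 3)).astype("float32")
    y = np.arange(8) % 7

    features = fused_features(x299, x224)          # shape: (8, 2048 + 1280)
    clf = SVC(kernel="linear").fit(features, y)    # feature selection (e.g. BOA)
    print(features.shape, clf.score(features, y))  # would normally precede this

Note that concatenation roughly doubles the feature dimensionality, which is why the paper inserts a feature-selection step (the improved Butterfly Optimization Algorithm) between fusion and classification to recover computational efficiency.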
Appears in Collections
Seoul College of Engineering > Seoul School of Computer Software > 1. Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher


Cha, Jae Hyuk
COLLEGE OF ENGINEERING (SCHOOL OF COMPUTER SCIENCE)
