Detailed Information

Cited 0 times in Web of Science · Cited 0 times in Scopus

Toward better ear disease diagnosis: A multi-modal multi-fusion model using endoscopic images of the tympanic membrane and pure-tone audiometry (open access)

Authors
Kim, Taewan; Kim, Sangyeop; Kim, Jaeyoung; Lee, Yeonjoon; Choi, June
Issue Date
Oct-2023
Publisher
Institute of Electrical and Electronics Engineers Inc.
Keywords
Artificial intelligence; Auditory system; Biomedical imaging; Bones; Classification algorithms; Computer aided diagnosis; Convolutional neural networks; Data models; Deep learning; Diseases; Ear; Electronic medical records; Media
Citation
IEEE Access, v.11, pp.116721 - 116731
Indexed
SCIE
SCOPUS
Journal Title
IEEE Access
Volume
11
Start Page
116721
End Page
116731
URI
https://scholarworks.bwise.kr/erica/handle/2021.sw.erica/115434
DOI
10.1109/ACCESS.2023.3325346
ISSN
2169-3536
Abstract
Chronic otitis media is characterized by recurrent infections that can lead to serious complications, such as meningitis, facial palsy, and skull base osteomyelitis. Active treatment based on early diagnosis is therefore essential. This study developed a multi-modal multi-fusion (MMMF) model that automatically diagnoses ear diseases by applying endoscopic images of the tympanic membrane (TM) and pure-tone audiometry (PTA) data to a deep learning model. The primary aims of the proposed MMMF model are to add "normal with hearing loss" as a category and to improve diagnostic accuracy on the four conventional ear-disease states: normal, TM perforation, retraction, and cholesteatoma. To this end, the MMMF model was trained on 1,480 endoscopic images of the TM together with PTA data to distinguish five states: normal, TM perforation, retraction, cholesteatoma, and normal (hearing loss). It employs a feature fusion strategy of cross-attention, concatenation, and gated multi-modal units in a multi-modal architecture comprising a convolutional neural network (CNN) and a multi-layer perceptron. Extending the classification to the additional normal (hearing loss) category enhances the diagnostic performance of existing ear disease classification. The MMMF model performed best when implemented with EfficientNet-B7, achieving 92.9% accuracy and 90.9% recall, outperforming existing feature fusion methods. In addition, five-fold cross-validation experiments showed that the model performed robustly across all folds when endoscopic images of the TM and PTA data were applied jointly. The proposed MMMF model is the first to include a normal ear-disease state with hearing loss as a category, and it outperformed existing CNN models and feature fusion methods.
Consequently, this study substantiates the utility of simultaneously applying PTA data and endoscopic images of the TM for the automated diagnosis of ear diseases in clinical settings and validates the usefulness of the multi-fusion method.
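The fusion step described in the abstract — combining a CNN embedding of the TM image with an MLP embedding of the PTA data through a gated multi-modal unit — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: all dimensions, weights, and function names are assumptions.

```python
import numpy as np

# Hypothetical sketch of a gated multi-modal unit (GMU), one of the three
# fusion strategies named in the abstract (alongside cross-attention and
# concatenation). All sizes and weights below are illustrative.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gmu_fuse(h_img, h_pta, W_img, W_pta, W_z):
    """Fuse a CNN image embedding with an MLP audiometry embedding.

    A gate z in (0, 1) decides, per fused dimension, how much each
    modality contributes to the joint representation.
    """
    h_i = np.tanh(W_img @ h_img)   # projected image features
    h_p = np.tanh(W_pta @ h_pta)   # projected audiometry features
    z = sigmoid(W_z @ np.concatenate([h_img, h_pta]))  # modality gate
    return z * h_i + (1.0 - z) * h_p  # per-dimension convex combination

d_img, d_pta, d_out = 16, 8, 12    # assumed embedding sizes
W_img = rng.standard_normal((d_out, d_img))
W_pta = rng.standard_normal((d_out, d_pta))
W_z = rng.standard_normal((d_out, d_img + d_pta))

fused = gmu_fuse(rng.standard_normal(d_img), rng.standard_normal(d_pta),
                 W_img, W_pta, W_z)
print(fused.shape)  # (12,)
```

Because the gate and both projections are bounded, the fused vector stays in (-1, 1) per dimension; in the full model this vector would feed a five-way classification head.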
Appears in
Collections
COLLEGE OF COMPUTING > ERICA Department of Computer Science > 1. Journal Articles



Related Researcher

Lee, Yeonjoon — ERICA College of Computing (ERICA Department of Computer Science)
