Detailed Information


Cross-Modal Cortical Activity in the Brain Can Predict Cochlear Implantation Outcome in Adults: A Machine Learning Study (Open Access)

Authors
Kyong, Jeong-Sug; Suh, Myung-Whan; Han, Jae Joon; Park, Moo Kyun; Noh, Tae Soo; Oh, Seung Ha; Lee, Jun Ho
Issue Date
Sep-2021
Publisher
Mediterranean Society of Otology and Audiology (MSOA)
Keywords
CI outcome; cross-modal plasticity; predicting factor; electroencephalography; machine-learning; tactile
Citation
Journal of International Advanced Otology, v.17, no.5, pp. 380-386
Pages
7
Journal Title
Journal of International Advanced Otology
Volume
17
Number
5
Start Page
380
End Page
386
URI
https://scholarworks.bwise.kr/sch/handle/2021.sw.sch/20905
DOI
10.5152/iao.2021.9337
ISSN
1308-7649
2148-3817
Abstract
OBJECTIVES: Predicting cochlear implantation (CI) outcomes is often difficult because outcomes vary among patients. Although cross-modal brain plasticity during deafness is associated with individual CI outcomes, longitudinal observations in multiple patients are scarce. We therefore sought a prediction system based on cross-modal plasticity in a longitudinal study of multiple patients.

METHODS: Classification of CI outcomes as excellent or poor was tested based on features of cross-modal brain plasticity, measured using event-related responses and their corresponding electromagnetic sources. A machine learning estimation model based on linear supervised training was applied to 13 datasets from 3 patients. Classification efficiency was evaluated by comparing prediction accuracy, sensitivity/specificity, total misclassification cost, and training time across feature set conditions.

RESULTS: Combining sensor- and source-level feature sets dramatically improved classification accuracy between excellent and poor outcomes. Specifically, the tactile feature set best explained CI outcome (accuracy, 98.83 ± 2.57%; sensitivity, 98.00 ± 0.01%; specificity, 98.15 ± 4.26%; total misclassification cost, 0.17 ± 0.38; training time, 0.51 ± 0.09 s), followed by the visual feature set (accuracy, 93.50 ± 4.89%; sensitivity, 89.17 ± 8.16%; specificity, 98.00 ± 0.01%; total misclassification cost, 0.65 ± 0.49; training time, 0.38 ± 0.50 s).

CONCLUSION: Individual tactile and visual processing in the brain best classified current outcome status when combined sensor- and source-level features were used. Our results suggest that cross-modal brain plasticity due to deafness may provide a basis for classifying outcome status. We expect this novel method to contribute to the evaluation and prediction of CI outcomes.
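The abstract describes linear supervised classification of EEG-derived feature sets, evaluated by accuracy, sensitivity/specificity, total misclassification cost, and training time. The sketch below is illustrative only, not the authors' pipeline: the study's actual feature extraction and classifier are not specified here, and the synthetic data, the choice of LinearDiscriminantAnalysis, and the cross-validation setup are all placeholder assumptions. It shows how the reported metrics can be computed for a generic linear classifier with scikit-learn.

# Minimal sketch, assuming scikit-learn and synthetic placeholder data;
# it is NOT the study's code. Illustrates a linear supervised classifier
# evaluated with the metrics listed in the abstract (accuracy, sensitivity,
# specificity, total misclassification cost, training time).
import time
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import StratifiedKFold

rng = np.random.default_rng(0)
X = rng.normal(size=(13, 6))             # 13 datasets x 6 placeholder features
y = np.array([0] * 6 + [1] * 7)          # 0 = poor outcome, 1 = excellent

accs, sens, specs, costs, fit_times = [], [], [], [], []
cv = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)
for train_idx, test_idx in cv.split(X, y):
    clf = LinearDiscriminantAnalysis()   # one possible linear classifier
    t0 = time.perf_counter()
    clf.fit(X[train_idx], y[train_idx])
    fit_times.append(time.perf_counter() - t0)

    tn, fp, fn, tp = confusion_matrix(
        y[test_idx], clf.predict(X[test_idx]), labels=[0, 1]).ravel()
    accs.append((tp + tn) / (tp + tn + fp + fn))
    sens.append(tp / (tp + fn))          # true-positive rate
    specs.append(tn / (tn + fp))         # true-negative rate
    costs.append(fp + fn)                # unit cost per misclassification

print(f"accuracy        {100 * np.mean(accs):.2f}%")
print(f"sensitivity     {100 * np.mean(sens):.2f}%")
print(f"specificity     {100 * np.mean(specs):.2f}%")
print(f"misclass. cost  {np.mean(costs):.2f}")
print(f"train time      {np.mean(fit_times):.4f} s")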
Files in This Item
There are no files associated with this item.
Appears in Collections
College of Medicine > Department of Otorhinolaryngology > 1. Journal Articles
