Detailed Information

Cited 56 times in Web of Science; cited 71 times in Scopus

Spotting malignancies from gastric endoscopic images using deep learning

Authors
Lee, Jang Hyung; Kim, Young Jae; Kim, Yoon Woo; Park, Sungjin; Choi, Youn-i; Kim, Yoon Jae; Park, Dong Kyun; Kim, Kwang Gi; Chung, Jun-Won
Issue Date
Nov-2019
Publisher
SPRINGER
Keywords
Gastrointestinal malignancy; Endoscopy; Ulcer; Cancer; Deep learning; Neural network; ResNet
Citation
SURGICAL ENDOSCOPY AND OTHER INTERVENTIONAL TECHNIQUES, v.33, no.11, pp.3790 - 3797
Journal Title
SURGICAL ENDOSCOPY AND OTHER INTERVENTIONAL TECHNIQUES
Volume
33
Number
11
Start Page
3790
End Page
3797
URI
https://scholarworks.bwise.kr/gachon/handle/2020.sw.gachon/885
DOI
10.1007/s00464-019-06677-2
ISSN
0930-2794
Abstract
Background: Gastric cancer is a common malignancy, with more than one million new cases worldwide in 2017. Ulcerous and cancerous tissues typically develop abnormal morphologies as they progress. Endoscopy is routinely used to examine the gastrointestinal tract for malignancy, and early detection correlates closely with good prognosis. The repeated presentation of similar frames during gastrointestinal endoscopy, however, often weakens practitioners' attention, so that true positives are missed, incurring higher medical costs and unnecessary morbidity. An automatic means of spotting visual abnormalities and prompting medical staff to examine them more thoroughly is therefore highly needed.
Methods: We classified benign ulcers and cancers in gastrointestinal endoscopic color images using deep neural networks and a transfer-learning approach. Using clinical data gathered from Gil Hospital, we built a dataset comprising 200 normal, 367 cancer, and 220 ulcer cases, and applied Inception, ResNet, and VGGNet models pretrained on ImageNet. Three classes were defined (normal, benign ulcer, and cancer), and three separate binary classifiers were built for the corresponding tasks: normal vs. cancer, normal vs. ulcer, and cancer vs. ulcer. For each task, to account for the inherent randomness of the deep learning process, we repeated the data partitioning and model building experiments 100 times and averaged the performance values.
Results: The areas under the receiver operating characteristic curves were 0.95, 0.97, and 0.85 for the three classifiers, with ResNet showing the highest performance. The tasks involving the normal class (normal vs. ulcer and normal vs. cancer) achieved accuracies above 90%. Ulcer vs. cancer classification achieved a lower accuracy of 77.1%, possibly because the two classes differ less in appearance than either does from normal tissue.
Conclusions: The overall performance of the proposed method is promising enough to encourage application in clinical environments. Automatic classification with deep learning, as proposed, can complement practitioners' manual inspection and minimize the risk of missed positives caused by repetitive sequences of endoscopic frames and flagging attention.
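
Illustration: the Methods paragraph describes transfer learning from ImageNet-pretrained networks with separate binary classifiers. As a minimal sketch only (the abstract does not report implementation details), the following shows how one such binary classifier, e.g. normal vs. cancer, could be fine-tuned from an ImageNet-pretrained ResNet. The framework (PyTorch/torchvision), the ResNet-50 depth, the "data/train" folder layout, and all hyperparameters are assumptions, not details taken from the paper.

# Sketch: fine-tune an ImageNet-pretrained ResNet as a binary classifier
# (e.g., normal vs. cancer). Paths and hyperparameters are placeholders.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Standard ImageNet preprocessing so the pretrained weights see familiar input statistics.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Hypothetical folder layout: data/train/normal and data/train/cancer;
# ImageFolder infers the two class labels from the subfolder names.
train_set = datasets.ImageFolder("data/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=16, shuffle=True)

# Transfer learning: load a ResNet-50 pretrained on ImageNet and replace its
# final fully connected layer with a 2-way classifier head.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(10):  # epoch count is an arbitrary placeholder
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

The same skeleton would be repeated for the other two class pairs; as in the paper, the partition-train-evaluate cycle would then be run many times and the performance values averaged.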
Files in This Item
There are no files associated with this item.
Appears in
Collections
College of Medicine > Department of Medicine > 1. Journal Articles
College of Health Science > Department of Biomedical Engineering > 1. Journal Articles

Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Kim, Yoon Jae
College of Medicine (Department of Medicine)
