Detailed Information

Cited 0 times in Web of Science; cited 13 times in Scopus

Interpretive Performance and Inter-Observer Agreement on Digital Mammography Test Sets

Authors
Kim, Sung Hun; Lee, Eun Hye; Jun, Jae Kwan; Kim, You Me; Chang, Yun-Woo; Lee, Jin Hwa; Kim, Hye-Won; Choi, Eun Jung
Issue Date
Feb-2019
Publisher
The Korean Society of Radiology
Keywords
Screening; Medical audit; Radiologists; Observer variation; Sensitivity and specificity
Citation
Korean Journal of Radiology, v.20, no.2, pp 218 - 224
Pages
7
Journal Title
Korean Journal of Radiology
Volume
20
Number
2
Start Page
218
End Page
224
URI
https://scholarworks.bwise.kr/sch/handle/2021.sw.sch/4775
DOI
10.3348/kjr.2018.0193
ISSN
1229-6929
2005-8330
Abstract
Objective: To evaluate the interpretive performance and inter-observer agreement on digital mammographs among radiologists and to investigate whether radiologist characteristics affect performance and agreement.

Materials and Methods: The test sets consisted of full-field digital mammograms and contained 12 cancer cases among 1000 total cases. Twelve radiologists independently interpreted all mammograms. Performance indicators included the recall rate, cancer detection rate (CDR), positive predictive value (PPV), sensitivity, specificity, false positive rate (FPR), and area under the receiver operating characteristic curve (AUC). Inter-radiologist agreement was measured. The reporting radiologist characteristics included the number of years of experience interpreting mammography, fellowship training in breast imaging, and the annual volume of mammography interpretation.

Results: The mean and range of interpretive performance were as follows: recall rate, 7.5% (3.3-10.2%); CDR, 10.6 (8.0-12.0 per 1000 examinations); PPV, 15.9% (8.8-33.3%); sensitivity, 88.2% (66.7-100%); specificity, 93.5% (90.6-97.8%); FPR, 6.5% (2.2-9.4%); and AUC, 0.93 (0.82-0.99). Radiologists who annually interpreted more than 3000 screening mammograms tended to exhibit higher CDRs and sensitivities than those who interpreted fewer than 3000 mammograms (p = 0.064). Inter-radiologist agreement showed a percent agreement of 77.2-88.8% and a kappa value of 0.27-0.34. Radiologist characteristics did not affect agreement.

Conclusion: The interpretive performance of the radiologists fulfilled the mammography screening goal of the American College of Radiology, although there was inter-observer variability. Radiologists who interpreted more than 3000 screening mammograms annually tended to perform better than radiologists who did not.
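The performance indicators named in the abstract are standard screening-audit quantities derived from each reader's confusion counts, and the agreement statistics come from paired recall decisions. As a minimal illustrative sketch (hypothetical counts, not the study's data or the authors' analysis code), they can be computed like this:

```python
# Illustrative screening-audit metrics; the counts used in the usage
# example are hypothetical, not taken from the study.

def audit_metrics(tp, fp, tn, fn):
    """Screening performance indicators from one reader's confusion counts."""
    total = tp + fp + tn + fn
    return {
        "recall_rate": (tp + fp) / total,   # fraction of exams recalled
        "cdr_per_1000": 1000 * tp / total,  # cancer detection rate
        "ppv": tp / (tp + fp),              # positive predictive value
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "fpr": fp / (fp + tn),              # false positive rate
    }

def cohens_kappa(reader_a, reader_b):
    """Cohen's kappa for two readers' binary recall decisions (1 = recall)."""
    n = len(reader_a)
    po = sum(a == b for a, b in zip(reader_a, reader_b)) / n  # observed agreement
    pa = sum(reader_a) / n
    pb = sum(reader_b) / n
    pe = pa * pb + (1 - pa) * (1 - pb)                        # chance agreement
    return (po - pe) / (1 - pe)
```

For example, a reader with 10 true positives, 60 false positives, 928 true negatives, and 2 false negatives on 1000 exams has a recall rate of 7.0%, a CDR of 10 per 1000, and a sensitivity of 83.3%, in line with the ranges reported above.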
Files in This Item
There are no files associated with this item.
Appears in
Collections
College of Medicine > Department of Radiology > 1. Journal Articles

Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.
