Detailed Information

Cited 39 times in Web of Science · Cited 54 times in Scopus

CATARACTS: Challenge on automatic tool annotation for cataRACT surgery

Authors
Al Hajj, Hassan; Lamard, Mathieu; Conze, Pierre-Henri; Roychowdhury, Soumali; Hu, Xiaowei; Marsalkaite, Gabija; Zisimopoulos, Odysseas; Dedmari, Muneer Ahmad; Zhao, Fenqiang; Prellberg, Jonas; Sahu, Manish; Galdran, Adrian; Araujo, Teresa; Duc My Vo; Panda, Chandan; Dahiya, Navdeep; Kondo, Satoshi; Bian, Zhengbing; Vandat, Arash; Bialopetravicius, Jonas; Flouty, Evangello; Qiu, Chenhui; Dill, Sabrina; Mukhopadhyay, Anirban; Costa, Pedro; Aresta, Guilherme; Ramamurthys, Senthil; Lee, Sang-Woong; Campilho, Aurelio; Zachow, Stefan; Xia, Shunren; Conjeti, Sailesh; Stoyanov, Danail; Armaitis, Jogundas; Heng, Pheng-Ann; Macready, William G.; Cochener, Beatrice; Quellec, Gwenole
Issue Date
Feb-2019
Publisher
ELSEVIER
Keywords
Cataract surgery; Video analysis; Deep learning; Challenge
Citation
MEDICAL IMAGE ANALYSIS, v.52, pp. 24-41
Journal Title
MEDICAL IMAGE ANALYSIS
Volume
52
Start Page
24
End Page
41
URI
https://scholarworks.bwise.kr/gachon/handle/2020.sw.gachon/1895
DOI
10.1016/j.media.2018.11.008
ISSN
1361-8415
Abstract
Surgical tool detection is attracting increasing attention from the medical image analysis community. The goal generally is not to precisely locate tools in images, but rather to indicate which tools are being used by the surgeon at each instant. The main motivation for annotating tool usage is to design efficient solutions for surgical workflow analysis, with potential applications in report generation, surgical training and even real-time decision support. Most existing tool annotation algorithms focus on laparoscopic surgeries. However, with 19 million interventions per year, the most common surgical procedure in the world is cataract surgery. The CATARACTS challenge was organized in 2017 to evaluate tool annotation algorithms in the specific context of cataract surgery. It relies on more than nine hours of videos, from 50 cataract surgeries, in which the presence of 21 surgical tools was manually annotated by two experts. With 14 participating teams, this challenge can be considered a success. As might be expected, the submitted solutions are based on deep learning. This paper thoroughly evaluates these solutions: in particular, the quality of their annotations is compared to that of human interpretations. Next, lessons learnt from the differential analysis of these solutions are discussed. We expect that they will guide the design of efficient surgery monitoring tools in the near future. © 2018 Elsevier B.V. All rights reserved.
Files in This Item
There are no files associated with this item.
Appears in
Collections
College of IT Convergence > Department of Software > 1. Journal Articles

Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Lee, Sang-Woong
College of IT Convergence (Department of Software)
