TY - JOUR
A1 - Al Hajj, Hassan
A1 - Sahu, Manish
A1 - Lamard, Mathieu
A1 - Conze, Pierre-Henri
A1 - Roychowdhury, Soumali
A1 - Hu, Xiaowei
A1 - Marsalkaite, Gabija
A1 - Zisimopoulos, Odysseas
A1 - Dedmari, Muneer Ahmad
A1 - Zhao, Fenqiang
A1 - Prellberg, Jonas
A1 - Galdran, Adrian
A1 - Araújo, Teresa
A1 - Vo, Duc My
A1 - Panda, Chandan
A1 - Dahiya, Navdeep
A1 - Kondo, Satoshi
A1 - Bian, Zhengbing
A1 - Bialopetravicius, Jonas
A1 - Qiu, Chenghui
A1 - Dill, Sabrina
A1 - Mukhopadhyay, Anirban
A1 - Costa, Pedro
A1 - Aresta, Guilherme
A1 - Ramamurthy, Senthil
A1 - Lee, Sang-Woong
A1 - Campilho, Aurélio
A1 - Zachow, Stefan
A1 - Xia, Shunren
A1 - Conjeti, Sailesh
A1 - Armaitis, Jogundas
A1 - Heng, Pheng-Ann
A1 - Vahdat, Arash
A1 - Cochener, Béatrice
A1 - Quellec, Gwenolé
T1 - CATARACTS: Challenge on Automatic Tool Annotation for cataRACT Surgery
JF - Medical Image Analysis
N2 - Surgical tool detection is attracting increasing attention from the medical image analysis community. The goal generally is not to precisely locate tools in images, but rather to indicate which tools are being used by the surgeon at each instant. The main motivation for annotating tool usage is to design efficient solutions for surgical workflow analysis, with potential applications in report generation, surgical training and even real-time decision support. Most existing tool annotation algorithms focus on laparoscopic surgeries. However, with 19 million interventions per year, the most common surgical procedure in the world is cataract surgery. The CATARACTS challenge was organized in 2017 to evaluate tool annotation algorithms in the specific context of cataract surgery. It relies on more than nine hours of videos, from 50 cataract surgeries, in which the presence of 21 surgical tools was manually annotated by two experts. With 14 participating teams, this challenge can be considered a success. As might be expected, the submitted solutions are based on deep learning. This paper thoroughly evaluates these solutions: in particular, the quality of their annotations is compared to that of human interpretations. Next, lessons learnt from the differential analysis of these solutions are discussed. We expect that they will guide the design of efficient surgery monitoring tools in the near future.
Y1 - 2019
U6 - https://doi.org/10.1016/j.media.2018.11.008
N1 - Best paper award - Computer Graphics Night 2020 (TU Darmstadt)
VL - 52
IS - 2
SP - 24
EP - 41
PB - Elsevier
ER -