TY - JOUR
A1 - Al Hajj, Hassan
A1 - Sahu, Manish
A1 - Lamard, Mathieu
A1 - Conze, Pierre-Henri
A1 - Roychowdhury, Soumali
A1 - Hu, Xiaowei
A1 - Marsalkaite, Gabija
A1 - Zisimopoulos, Odysseas
A1 - Dedmari, Muneer Ahmad
A1 - Zhao, Fenqiang
A1 - Prellberg, Jonas
A1 - Galdran, Adrian
A1 - Araujo, Teresa
A1 - Vo, Duc My
A1 - Panda, Chandan
A1 - Dahiya, Navdeep
A1 - Kondo, Satoshi
A1 - Bian, Zhengbing
A1 - Bialopetravicius, Jonas
A1 - Qiu, Chenghui
A1 - Dill, Sabrina
A1 - Mukhopadyay, Anirban
A1 - Costa, Pedro
A1 - Aresta, Guilherme
A1 - Ramamurthy, Senthil
A1 - Lee, Sang-Woong
A1 - Campilho, Aurelio
A1 - Zachow, Stefan
A1 - Xia, Shunren
A1 - Conjeti, Sailesh
A1 - Armaitis, Jogundas
A1 - Heng, Pheng-Ann
A1 - Vahdat, Arash
A1 - Cochener, Beatrice
A1 - Quellec, Gwenole
T1 - CATARACTS: Challenge on Automatic Tool Annotation for cataRACT Surgery
JF - Medical Image Analysis
N2 - Surgical tool detection is attracting increasing attention from the medical image analysis community. The goal generally is not to precisely locate tools in images, but rather to indicate which tools are being used by the surgeon at each instant. The main motivation for annotating tool usage is to design efficient solutions for surgical workflow analysis, with potential applications in report generation, surgical training and even real-time decision support. Most existing tool annotation algorithms focus on laparoscopic surgeries. However, with 19 million interventions per year, the most common surgical procedure in the world is cataract surgery. The CATARACTS challenge was organized in 2017 to evaluate tool annotation algorithms in the specific context of cataract surgery. It relies on more than nine hours of videos, from 50 cataract surgeries, in which the presence of 21 surgical tools was manually annotated by two experts. With 14 participating teams, this challenge can be considered a success. As might be expected, the submitted solutions are based on deep learning. This paper thoroughly evaluates these solutions: in particular, the quality of their annotations is compared to that of human interpretations. Next, lessons learnt from the differential analysis of these solutions are discussed. We expect that they will guide the design of efficient surgery monitoring tools in the near future.
Y1 - 2019
U6 - https://doi.org/10.1016/j.media.2018.11.008
N1 - Best paper award - Computer Graphics Night 2020 (TU Darmstadt)
VL - 52
IS - 2
SP - 24
EP - 41
PB - Elsevier
ER -
TY - THES
A1 - Sahu, Manish
T1 - Instrument Gesture Recognition and Tracking for Effective Control of Laparoscopic Tracking and Guidance Device
Y1 - 2016
ER -
TY - THES
A1 - Sahu, Manish
T1 - Vision-based Context-awareness in Minimally Invasive Surgical Video Streams
N2 - Surgical interventions are becoming increasingly complex thanks to modern assistance systems (imaging, robotics, etc.). Minimally invasive surgery in particular places high demands on surgeons due to added surgical complexity and information overload. Therefore, there is a growing need for developing context-aware systems that recognize the current surgical situation in order to derive and present the relevant information to the surgical staff for assistance. Current approaches for deriving contextual cues either utilize specialized hardware that is disruptive to the surgical workflow, or utilize vision-based approaches that require valuable time from surgeons, especially for manual annotations.
The main objective of this cumulative dissertation is to improve the existing approaches for three important sub-problems of vision-based context-aware systems, namely surgical phase recognition, surgical instrument recognition and surgical instrument segmentation, while tackling the vision and manual annotation challenges related to these problems. This dissertation demonstrates that vision-based approaches for the three named clinical sub-problems of context-aware systems can be developed in an annotation-scarce setting by employing domain-specific, deep-learning-based transfer learning techniques for the surgical instrument and phase recognition tasks, and deep-learning-based simulation-to-real unsupervised domain adaptation techniques for the surgical instrument segmentation task. The efficacy and real-time performance of the developed approaches have been evaluated on publicly available datasets containing real surgical videos (laparoscopic procedures) that were acquired in an uncontrolled surgical environment. The proposed approaches advance the state of the art for the aforementioned research problems of context-aware systems in the OR and can potentially be utilized for real-time notification of the surgical phase, surgical instrument usage and image-based localization of surgical instruments.
Y1 - 2022
ER -
TY - GEN
A1 - Sahu, Manish
A1 - Dill, Sabrina
A1 - Mukhopadyay, Anirban
A1 - Zachow, Stefan
T1 - Surgical Tool Presence Detection for Cataract Procedures
N2 - This article outlines the submission to the CATARACTS challenge for automatic tool presence detection [1]. Our approach for this multi-label classification problem comprises labelset-based sampling, a CNN architecture and temporal smoothing as described in [3], which we call ZIB-Res-TS.
T3 - ZIB-Report - 18-28
Y1 - 2017
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-69110
SN - 1438-0064
ER -
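The ZIB-Res-TS abstract above lists temporal smoothing among its components. Below is a minimal sketch of one plausible variant, assuming a simple causal exponential moving average over per-frame tool probabilities; the constant `alpha` and the function name are illustrative, not the published ZIB-Res-TS recipe.

```python
# Hedged sketch: online temporal smoothing of per-frame tool probabilities.
# The EMA scheme and alpha=0.8 are assumptions for illustration only.
import numpy as np

def smooth_predictions(frame_probs, alpha=0.8):
    """Causally smooth a (num_frames, num_tools) probability matrix."""
    smoothed = np.zeros_like(frame_probs)
    state = frame_probs[0]
    for t, p in enumerate(frame_probs):
        state = alpha * state + (1.0 - alpha) * p  # EMA update
        smoothed[t] = state
    return smoothed

# Example: noisy per-frame scores for 3 tools over 5 frames.
probs = np.array([[0.9, 0.1, 0.0],
                  [0.2, 0.1, 0.1],   # isolated dropout gets damped
                  [0.8, 0.2, 0.0],
                  [0.9, 0.1, 0.1],
                  [0.7, 0.0, 0.2]])
print(smooth_predictions(probs).round(2))
```

Because the update only looks backward in time, such smoothing can run online during a procedure, which matches the "online post-processing" framing used in the entries here.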
TY - JOUR
A1 - Sahu, Manish
A1 - Moerman, Daniil
A1 - Mewes, Philip
A1 - Mountney, Peter
A1 - Rose, Georg
T1 - Instrument State Recognition and Tracking for Effective Control of Robotized Laparoscopic Systems
JF - International Journal of Mechanical Engineering and Robotics Research
N2 - Surgical robots are an important component for delivering advanced, paradigm-shifting technology such as image-guided surgery and navigation. However, for robotic systems to be readily adopted into the operating room, they must be easy and convenient to control and facilitate a smooth surgical workflow. In minimally invasive surgery, the laparoscope may be held by a robot, but controlling and moving the laparoscope remains challenging. It is disruptive to the workflow for the surgeon to put down the tools to move the robot, in particular for solo surgery approaches. This paper proposes a novel approach for naturally controlling the robot-mounted laparoscope's position by detecting a surgical grasping tool and recognizing whether its state is open or closed. This approach does not require markers or fiducials and uses a machine learning framework for tool and state recognition that exploits naturally occurring visual cues. Furthermore, a virtual user interface on the laparoscopic image is proposed that uses the surgical tool as a pointing device to overcome common problems in depth perception. Instrument detection and state recognition are evaluated on in-vivo and ex-vivo porcine datasets. To demonstrate the practical surgical application and real-time performance, the system is validated in a simulated surgical environment.
Y1 - 2016
U6 - https://doi.org/10.18178/ijmerr.5.1.33-38
VL - 5
IS - 1
SP - 33
EP - 38
ER -
TY - JOUR
A1 - Sahu, Manish
A1 - Mukhopadhyay, Anirban
A1 - Szengel, Angelika
A1 - Zachow, Stefan
T1 - Addressing multi-label imbalance problem of Surgical Tool Detection using CNN
JF - International Journal of Computer Assisted Radiology and Surgery
N2 - Purpose: A fully automated surgical tool detection framework is proposed for endoscopic video streams. State-of-the-art surgical tool detection methods rely on supervised one-vs-all or multi-class classification techniques, completely ignoring the co-occurrence relationship of the tools and the associated class imbalance. Methods: In this paper, we formulate tool detection as a multi-label classification task where tool co-occurrences are treated as separate classes. In addition, imbalance on tool co-occurrences is analyzed and stratification techniques are employed to address the imbalance during Convolutional Neural Network (CNN) training. Moreover, temporal smoothing is introduced as an online post-processing step to enhance runtime prediction. Results: Quantitative analysis is performed on the M2CAI16 tool detection dataset to highlight the importance of stratification, temporal smoothing and the overall framework for tool detection. Conclusion: The analysis of tool imbalance, backed by the empirical results, indicates the need for and superiority of the proposed framework over state-of-the-art techniques.
Y1 - 2017
UR - https://link.springer.com/article/10.1007/s11548-017-1565-x
U6 - https://doi.org/10.1007/s11548-017-1565-x
N1 - Selected for final oral presentation
VL - 12
IS - 6
SP - 1013
EP - 1020
PB - Springer
ER -
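The abstract above treats each tool co-occurrence as its own labelset class and stratifies against the resulting imbalance. A small sketch of the labelset idea follows, assuming inverse-frequency sampling weights as the balancing device; this weighting is an illustrative choice, not necessarily the paper's exact stratification method.

```python
# Hedged sketch: map each frame's tool combination to a labelset class,
# then derive sampling weights from labelset frequencies. Inverse-frequency
# weighting is an assumption used here for illustration.
from collections import Counter

def labelset_weights(frame_labels):
    """frame_labels: list of tuples of tool ids present in each frame."""
    labelsets = [tuple(sorted(lbls)) for lbls in frame_labels]
    counts = Counter(labelsets)
    total = len(labelsets)
    # Rare co-occurrences receive proportionally larger sampling weight.
    return [total / counts[ls] for ls in labelsets]

frames = [(0,), (0, 2), (0,), (1, 2), (0,), (0, 2)]
print(labelset_weights(frames))  # the rare set (1, 2) weighs most
```

Such per-frame weights could then feed a weighted sampler during CNN training so that rare tool combinations are seen more often.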
TY - JOUR
A1 - Sahu, Manish
A1 - Mukhopadhyay, Anirban
A1 - Zachow, Stefan
T1 - Simulation-to-Real domain adaptation with teacher-student learning for endoscopic instrument segmentation
JF - International Journal of Computer Assisted Radiology and Surgery
N2 - Purpose: Segmentation of surgical instruments in endoscopic video streams is essential for automated surgical scene understanding and process modeling. However, relying on fully supervised deep learning for this task is challenging because manual annotation occupies valuable time of the clinical experts. Methods: We introduce a teacher–student learning approach that learns jointly from annotated simulation data and unlabeled real data to tackle the challenges in simulation-to-real unsupervised domain adaptation for endoscopic image segmentation. Results: Empirical results on three datasets highlight the effectiveness of the proposed framework over current approaches for the endoscopic instrument segmentation task. Additionally, we provide an analysis of the major factors affecting performance on all datasets to highlight the strengths and failure modes of our approach. Conclusions: We show that our proposed approach can successfully exploit the unlabeled real endoscopic video frames and improve generalization performance over pure simulation-based training and the previous state-of-the-art. This takes us one step closer to effective segmentation of surgical instruments in the annotation-scarce setting.
Y1 - 2021
U6 - https://doi.org/10.1007/s11548-021-02383-4
N1 - Honorary Mention: Machine Learning for Computer-Assisted Intervention (CAI) Award @IPCAI2021
N1 - Honorary Mention: Audience Award for Best Innovation @IPCAI2021
VL - 16
SP - 849
EP - 859
PB - Springer Nature
ER -
TY - CHAP
A1 - Sahu, Manish
A1 - Strömsdörfer, Ronja
A1 - Mukhopadhyay, Anirban
A1 - Zachow, Stefan
T1 - Endo-Sim2Real: Consistency learning-based domain adaptation for instrument segmentation
T2 - Proc. Medical Image Computing and Computer Assisted Intervention (MICCAI), Part III
N2 - Surgical tool segmentation in endoscopic videos is an important component of computer-assisted intervention systems. The recent success of image-based solutions using fully supervised deep learning approaches can be attributed to the collection of large labeled datasets. However, the annotation of a large dataset of real videos can be prohibitively expensive and time-consuming. Computer simulations could alleviate the manual labeling problem; however, models trained on simulated data do not generalize to real data. This work proposes a consistency-based framework for joint learning of simulated and real (unlabeled) endoscopic data to bridge this generalization gap. Empirical results on two datasets (15 videos of the Cholec80 dataset and the EndoVis'15 dataset) highlight the effectiveness of the proposed Endo-Sim2Real method for instrument segmentation. We compare the segmentation of the proposed approach with state-of-the-art solutions and show that our method improves segmentation both in terms of quality and quantity.
Y1 - 2020
U6 - https://doi.org/10.1007/978-3-030-59716-0_75
VL - 12263
PB - Springer Nature
ER -
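The two domain-adaptation entries above learn jointly from annotated simulation data and unlabeled real frames via teacher–student, consistency-based training. A hedged sketch of a standard mean-teacher setup consistent with that description follows; the EMA decay, the loss choice and the model interfaces are assumptions, not the published recipe.

```python
# Hedged sketch: teacher as an EMA copy of the student, plus a consistency
# loss on unlabeled real frames. Both models are assumed to be nn.Module
# segmentation networks mapping frames to per-pixel logits.
import torch
import torch.nn.functional as F

@torch.no_grad()
def update_teacher(teacher, student, decay=0.99):
    """Exponential-moving-average update of the teacher weights."""
    for t_p, s_p in zip(teacher.parameters(), student.parameters()):
        t_p.mul_(decay).add_(s_p, alpha=1.0 - decay)

def consistency_loss(student, teacher, real_frames):
    """Tie student predictions on unlabeled real frames to the teacher's."""
    with torch.no_grad():
        target = torch.sigmoid(teacher(real_frames))  # pseudo-targets
    pred = torch.sigmoid(student(real_frames))
    return F.mse_loss(pred, target)
```

In such a setup, the total training objective would combine a supervised segmentation loss on the labeled simulation data with this consistency term on the real data, with the teacher updated after every student step.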
TY - JOUR
A1 - Sahu, Manish
A1 - Szengel, Angelika
A1 - Mukhopadhyay, Anirban
A1 - Zachow, Stefan
T1 - Surgical phase recognition by learning phase transitions
JF - Current Directions in Biomedical Engineering (CDBME)
N2 - Automatic recognition of surgical phases is an important component for developing an intra-operative context-aware system. Prior work in this area focuses on recognizing short-term tool usage patterns within surgical phases. However, the difference between intra- and inter-phase tool usage patterns has not been investigated for automatic phase recognition. We developed a Recurrent Neural Network (RNN), in particular a state-preserving Long Short Term Memory (LSTM) architecture, to utilize the long-term evolution of tool usage within complete surgical procedures. For fully automatic tool presence detection from surgical video frames, a Convolutional Neural Network (CNN) based architecture, namely ZIBNet, is employed. Our proposed approach outperformed EndoNet by 8.1% on overall precision for phase detection tasks and by 12.5% on meanAP for tool recognition tasks.
Y1 - 2020
U6 - https://doi.org/10.1515/cdbme-2020-0037
N1 - Nomination for the Best Paper Award
VL - 6
IS - 1
SP - 20200037
PB - De Gruyter
ER -
TY - GEN
A1 - Sahu, Manish
A1 - Szengel, Angelika
A1 - Mukhopadhyay, Anirban
A1 - Zachow, Stefan
T1 - Analyzing laparoscopic cholecystectomy with deep learning: automatic detection of surgical tools and phases
T2 - 28th International Congress of the European Association for Endoscopic Surgery (EAES)
N2 - Motivation: The ever-rising volume of patients, the high maintenance cost of operating rooms and the time-consuming analysis of surgical skills are fundamental problems that hamper the practical training of the next generation of surgeons. For obvious economic reasons, hospitals prefer to keep surgeons busy with real operations rather than with training young surgeons. One fundamental need in surgical training is to reduce the time a senior surgeon requires to review the endoscopic procedures performed by a young surgeon, while minimizing subjective bias in the evaluation. The unprecedented performance of deep learning ushers in a new age of data-driven automatic analysis of surgical skills. Method: Deep learning is capable of efficiently analyzing thousands of hours of laparoscopic video footage to provide an objective assessment of surgical skills. However, the traditional end-to-end setting of deep learning (video in, skill assessment out) is not explainable. Our strategy is to utilize the surgical process modeling framework to divide the surgical process into understandable components. This provides the opportunity to employ deep learning for superior yet automatic detection and evaluation of several aspects of laparoscopic cholecystectomy, such as surgical tool and phase detection. We employ ZIBNet for the detection of surgical tool presence. ZIBNet employs pre-processing based on tool usage imbalance, a transfer-learned 50-layer residual network (ResNet-50) and temporal smoothing. To encode the temporal evolution of tool usage (over the entire video sequence) that relates to the surgical phases, Long Short Term Memory (LSTM) units with long-term dependency are employed. Dataset: We used the Cholec80 dataset, which consists of 80 videos of laparoscopic cholecystectomy performed by 13 surgeons, divided equally for training and testing. In these videos, up to three different tools (among 7 types of tools) can be present in a frame. Results: The mean average precision of the detection of all tools is 93.5, ranging between 86.8 and 99.3, a significant improvement (p < 0.01) over the previous state-of-the-art. We observed that less frequent tools such as Scissors, Irrigator and Specimen Bag are more closely related to phase transitions. The overall precision (recall) of the detection of all surgical phases is 79.6 (81.3). Conclusion: While this is not the end goal of surgical skill analysis, the development of such a technological platform is essential toward a data-driven, objective understanding of surgical skills. In the future, we plan to investigate surgeon-in-the-loop analysis and feedback for surgical skill analysis.
Y1 - 2020
UR - https://academy.eaes.eu/eaes/2020/28th/298882/manish.sahu.analyzing.laparoscopic.cholecystectomy.with.deep.learning.html?f=listing%3D0%2Abrowseby%3D8%2Asortby%3D2
ER -
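The CDBME and EAES entries above couple per-frame tool presence detection (ZIBNet) with a state-preserving LSTM over complete procedures. A minimal sketch of that second stage follows, assuming binary tool-presence vectors as input; the layer sizes, the single-layer design and the class names are illustrative (7 tools matches the Cholec80 setting stated in the abstract above, while the number of phases is an assumption).

```python
# Hedged sketch: phase recognition from tool-presence sequences with a
# "state-preserving" LSTM, i.e. the hidden state is carried across chunks
# of one procedure. All sizes are illustrative assumptions.
import torch
import torch.nn as nn

class PhaseLSTM(nn.Module):
    def __init__(self, num_tools=7, hidden=64, num_phases=7):
        super().__init__()
        self.lstm = nn.LSTM(num_tools, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_phases)

    def forward(self, tool_seq, state=None):
        out, state = self.lstm(tool_seq, state)
        return self.head(out), state  # return state for the next chunk

model = PhaseLSTM()
state = None
for chunk in torch.rand(3, 1, 100, 7):   # 3 chunks of one procedure
    logits, state = model(chunk, state)  # hidden state carried over
print(logits.shape)  # torch.Size([1, 100, 7])
```

Carrying the hidden state across chunks is what lets the model exploit the long-term evolution of tool usage over an entire procedure rather than only short clips.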