TY - INPR
A1 - Allan, Max
A1 - Kondo, Satoshi
A1 - Bodenstedt, Sebastian
A1 - Leger, Stefan
A1 - Kadkhodamohammadi, Rahim
A1 - Luengo, Imanol
A1 - Fuentes, Felix
A1 - Flouty, Evangello
A1 - Mohammed, Ahmed
A1 - Pedersen, Marius
A1 - Kori, Avinash
A1 - Alex, Varghese
A1 - Krishnamurthi, Ganapathy
A1 - Rauber, David
A1 - Mendel, Robert
A1 - Palm, Christoph
A1 - Bano, Sophia
A1 - Saibro, Guinther
A1 - Shih, Chi-Sheng
A1 - Chiang, Hsun-An
A1 - Zhuang, Juntang
A1 - Yang, Junlin
A1 - Iglovikov, Vladimir
A1 - Dobrenkii, Anton
A1 - Reddiboina, Madhu
A1 - Reddy, Anubhav
A1 - Liu, Xingtong
A1 - Gao, Cong
A1 - Unberath, Mathias
A1 - Kim, Myeonghyeon
A1 - Kim, Chanho
A1 - Kim, Chaewon
A1 - Kim, Hyejin
A1 - Lee, Gyeongmin
A1 - Ullah, Ihsan
A1 - Luna, Miguel
A1 - Park, Sang Hyun
A1 - Azizian, Mahdi
A1 - Stoyanov, Danail
A1 - Maier-Hein, Lena
A1 - Speidel, Stefanie
T1 - 2018 Robotic Scene Segmentation Challenge
N2 - In 2015 we began a sub-challenge at the EndoVis workshop at MICCAI in Munich using endoscope images of ex-vivo tissue with automatically generated annotations from robot forward kinematics and instrument CAD models. However, the limited background variation and simple motion rendered the dataset uninformative in learning about which techniques would be suitable for segmentation in real surgery. In 2017, at the same workshop in Quebec, we introduced the robotic instrument segmentation dataset, with 10 teams participating in the challenge to perform binary, articulating-parts, and type segmentation of da Vinci instruments. This challenge included realistic instrument motion and more complex porcine tissue as background and was widely addressed with modifications of U-Nets and other popular CNN architectures [1]. In 2018 we added to the complexity by introducing a set of anatomical objects and medical devices to the segmented classes. To avoid over-complicating the challenge, we continued with porcine data, which is dramatically simpler than human tissue due to the lack of fatty tissue occluding many organs.
KW - Minimally invasive surgery
KW - Robotic
KW - Minimal-invasive Chirurgie
KW - Robotik
Y1 - 2020
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:898-opus4-50049
UR - https://arxiv.org/abs/2001.11190
ER -
TY - GEN
A1 - Mendel, Robert
A1 - De Souza Jr., Luis Antonio
A1 - Rauber, David
A1 - Papa, João Paulo
A1 - Palm, Christoph
T1 - Abstract: Semi-supervised Segmentation Based on Error-correcting Supervision
T2 - Bildverarbeitung für die Medizin 2021. Proceedings, German Workshop on Medical Image Computing, Regensburg, March 7-9, 2021
N2 - Pixel-level classification is an essential part of computer vision. For learning from labeled data, many powerful deep learning models have been developed recently. In this work, we augment such supervised segmentation models by allowing them to learn from unlabeled data. Our semi-supervised approach, termed Error-Correcting Supervision, leverages a collaborative strategy. Apart from the supervised training on the labeled data, the segmentation network is judged by an additional network.
KW - Deep Learning
Y1 - 2021
SN - 978-3-658-33197-9
U6 - https://doi.org/10.1007/978-3-658-33198-6_43
SP - 178
PB - Springer Vieweg
CY - Wiesbaden
ER -
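The record above only outlines the collaborative strategy. The following is a minimal PyTorch sketch of what one such judged, semi-supervised training step could look like; `seg_net`, `judge_net`, their interfaces, and the loss wiring are illustrative assumptions, not the published method.

```python
# Hypothetical sketch of an error-correcting-style step (PyTorch).
import torch
import torch.nn.functional as F

def ecs_training_step(seg_net, judge_net, labeled, masks, unlabeled,
                      seg_opt, judge_opt):
    """seg_net maps images to class logits; judge_net (assumed interface:
    image + predicted mask in, per-pixel quality logit out)."""
    # 1) Ordinary supervised loss on the labeled batch.
    logits = seg_net(labeled)                        # (B, C, H, W)
    sup_loss = F.cross_entropy(logits, masks)        # masks: (B, H, W) long

    # 2) Train the judge: its target is per-pixel agreement with ground truth.
    pred = logits.detach().argmax(1)
    correct = (pred == masks).float().unsqueeze(1)   # (B, 1, H, W)
    judge_loss = F.binary_cross_entropy_with_logits(
        judge_net(labeled, pred), correct)
    judge_opt.zero_grad(); judge_loss.backward(); judge_opt.step()

    # 3) Unlabeled data: self-training, weighted by how trustworthy the
    #    judge considers each pixel's prediction.
    u_logits = seg_net(unlabeled)
    u_pred = u_logits.argmax(1)
    weight = torch.sigmoid(judge_net(unlabeled, u_pred)).squeeze(1).detach()
    unsup_loss = (F.cross_entropy(u_logits, u_pred,
                                  reduction="none") * weight).mean()

    seg_opt.zero_grad(); (sup_loss + unsup_loss).backward(); seg_opt.step()
    return sup_loss.item(), unsup_loss.item()
```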
TY - JOUR
A1 - Römmele, Christoph
A1 - Mendel, Robert
A1 - Barrett, Caroline
A1 - Kiesl, Hans
A1 - Rauber, David
A1 - Rückert, Tobias
A1 - Kraus, Lisa
A1 - Heinkele, Jakob
A1 - Dhillon, Christine
A1 - Grosser, Bianca
A1 - Prinz, Friederike
A1 - Wanzl, Julia
A1 - Fleischmann, Carola
A1 - Nagl, Sandra
A1 - Schnoy, Elisabeth
A1 - Schlottmann, Jakob
A1 - Dellon, Evan S.
A1 - Messmann, Helmut
A1 - Palm, Christoph
A1 - Ebigbo, Alanna
T1 - An artificial intelligence algorithm is highly accurate for detecting endoscopic features of eosinophilic esophagitis
JF - Scientific Reports
N2 - The endoscopic features associated with eosinophilic esophagitis (EoE) may be missed during routine endoscopy. We aimed to develop and evaluate an Artificial Intelligence (AI) algorithm for detecting and quantifying the endoscopic features of EoE in white light images, supplemented by the EoE Endoscopic Reference Score (EREFS). An AI algorithm (AI-EoE) was constructed and trained to differentiate between EoE and normal esophagus using endoscopic white light images extracted from the database of the University Hospital Augsburg. In addition to binary classification, a second algorithm was trained with specific auxiliary branches for each EREFS feature (AI-EoE-EREFS). The AI algorithms were evaluated on an external data set from the University of North Carolina, Chapel Hill (UNC), and compared with the performance of human endoscopists with varying levels of experience. The overall sensitivity, specificity, and accuracy of AI-EoE were 0.93 for all measures, while the AUC was 0.986. With additional auxiliary branches for the EREFS categories, the performance of the AI algorithm (AI-EoE-EREFS) improved to 0.96, 0.94, 0.95, and 0.992 for sensitivity, specificity, accuracy, and AUC, respectively. AI-EoE and AI-EoE-EREFS performed significantly better than endoscopy beginners and senior fellows on the same set of images. An AI algorithm can be trained to detect and quantify endoscopic features of EoE with excellent performance scores. The addition of the EREFS criteria improved the performance of the AI algorithm, which performed significantly better than endoscopists with a lower or medium experience level.
KW - Artificial Intelligence
KW - Smart Endoscopy
KW - eosinophilic esophagitis
Y1 - 2022
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:898-opus4-46928
VL - 12
PB - Nature Portfolio
CY - London
ER -
TY - CHAP
A1 - Rauber, David
A1 - Mendel, Robert
A1 - Scheppach, Markus W.
A1 - Ebigbo, Alanna
A1 - Messmann, Helmut
A1 - Palm, Christoph
T1 - Analysis of Celiac Disease with Multimodal Deep Learning
T2 - Bildverarbeitung für die Medizin 2022: Proceedings, German Workshop on Medical Image Computing, Heidelberg, June 26-28, 2022
N2 - Celiac disease is an autoimmune disorder caused by gluten that results in an inflammatory response of the small intestine. We investigated whether celiac disease can be detected using endoscopic images through a deep learning approach. The results show that additional clinical parameters can improve the classification accuracy. In this work, we distinguished between healthy tissue and Marsh III, according to the Marsh score system. We first trained a baseline network to classify endoscopic images of the small bowel into these two classes and then augmented the approach with a multimodality component that took the antibody status into account.
KW - Deep Learning
KW - Endoscopy
Y1 - 2022
U6 - https://doi.org/10.1007/978-3-658-36932-3_25
SP - 115
EP - 120
PB - Springer Vieweg
CY - Wiesbaden
ER -
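A minimal sketch of how a multimodality component like the one described in the record above could combine CNN image features with a clinical parameter such as antibody status via late fusion. The class name, feature sizes, and fusion scheme are assumptions for illustration, not the paper's implementation.

```python
# Hypothetical late-fusion classifier: image backbone + clinical feature(s).
import torch
import torch.nn as nn
from torchvision import models

class MultimodalCeliacNet(nn.Module):
    def __init__(self, n_clinical: int = 1, n_classes: int = 2):
        super().__init__()
        backbone = models.resnet50(weights="IMAGENET1K_V2")
        # Keep everything up to (and including) global average pooling.
        self.features = nn.Sequential(*list(backbone.children())[:-1])
        self.clinical = nn.Sequential(nn.Linear(n_clinical, 16), nn.ReLU())
        self.head = nn.Linear(2048 + 16, n_classes)

    def forward(self, image, clinical):
        x = self.features(image).flatten(1)      # (B, 2048) image embedding
        c = self.clinical(clinical)              # (B, 16) clinical embedding
        return self.head(torch.cat([x, c], 1))   # late fusion by concatenation

# Usage, assuming antibody status is encoded as one 0/1 feature per patient:
# logits = model(images, antibody_status.unsqueeze(1).float())
```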
TY - GEN
A1 - Scheppach, Markus W.
A1 - Mendel, Robert
A1 - Rauber, David
A1 - Probst, Andreas
A1 - Nagl, Sandra
A1 - Römmele, Christoph
A1 - Meinikheim, Michael
A1 - Palm, Christoph
A1 - Messmann, Helmut
A1 - Ebigbo, Alanna
T1 - Artificial Intelligence (AI) improves endoscopists’ vessel detection during endoscopic submucosal dissection (ESD)
T2 - Endoscopy
N2 - Aims: While AI has been successfully implemented in detecting and characterizing colonic polyps, its role in therapeutic endoscopy remains to be elucidated. Especially third-space endoscopy procedures like ESD and peroral endoscopic myotomy (POEM) pose a technical challenge and the risk of operator-dependent complications like intraprocedural bleeding and perforation. Therefore, we aimed at developing an AI algorithm for intraprocedural real-time vessel detection during ESD and POEM. Methods: A training dataset consisting of 5470 annotated still images from 59 full-length videos (47 ESD, 12 POEM) and 179,681 unlabeled images was used to train a DeepLabV3+ neural network with the ECMT semi-supervised learning method. Evaluation of vessel detection rate (VDR) and vessel detection time (VDT) for 19 endoscopists with and without AI support was performed using a testing dataset of 101 standardized video clips with 200 predefined blood vessels. Endoscopists were stratified into trainees and experts in third-space endoscopy. Results: The AI algorithm had a mean VDR of 93.5% and a median VDT of 0.32 seconds. AI support was associated with a statistically significant increase in VDR from 54.9% to 73.0% and from 59.0% to 74.1% for trainees and experts, respectively. With AI support, VDT significantly decreased from 7.21 sec to 5.09 sec for trainees and from 6.10 sec to 5.38 sec for experts. False-positive (FP) readings occurred in 4.5% of frames. FP structures were detected for a significantly shorter duration than true positives (0.71 sec vs. 5.99 sec). Conclusions: AI improved the VDR and VDT of trainees and experts in third-space endoscopy and may reduce performance variability during training. Further research is needed to evaluate the clinical impact of this new technology.
Y1 - 2024
U6 - https://doi.org/10.1055/s-0044-1782891
VL - 56
IS - S 02
SP - S93
PB - Thieme
CY - Stuttgart
ER -
TY - GEN
A1 - Zellmer, Stephan
A1 - Rauber, David
A1 - Probst, Andreas
A1 - Weber, Tobias
A1 - Braun, Georg
A1 - Römmele, Christoph
A1 - Nagl, Sandra
A1 - Schnoy, Elisabeth
A1 - Messmann, Helmut
A1 - Ebigbo, Alanna
A1 - Palm, Christoph
T1 - Artificial intelligence as a tool in the detection of the papillary ostium during ERCP
T2 - Endoscopy
N2 - Aims: Endoscopic retrograde cholangiopancreatography (ERCP) is the gold standard in the diagnosis and treatment of diseases of the pancreatobiliary tract. However, it is technically complex and has a relatively high complication rate. In particular, cannulation of the papillary ostium remains challenging. The aim of this study is to examine whether a deep learning algorithm can reliably detect the major duodenal papilla and, in particular, the papillary ostium, and could therefore be a valuable tool for inexperienced endoscopists, particularly in training situations. Methods: We analyzed a total of 654 retrospectively collected images of 85 patients. Both the major duodenal papilla and the ostium were then segmented. Afterwards, a neural network was trained using a deep learning algorithm. A 5-fold cross-validation was performed. Subsequently, we ran the algorithm on 5 prospectively collected videos of ERCPs. Results: 5-fold cross-validation on the 654 labeled images resulted in an F1 value of 0.8007, a sensitivity of 0.8409, and a specificity of 0.9757 for the class papilla, and an F1 value of 0.5724, a sensitivity of 0.5456, and a specificity of 0.9966 for the class ostium. Averaged over both classes, the F1 value was 0.6866, the sensitivity 0.6933, and the specificity 0.9861. In 100% of cases, the AI-detected localization of the papillary ostium in the prospectively collected videos corresponded to the localization of the cannulation performed by the endoscopist. Conclusions: In the present study, the neural network was able to identify the major duodenal papilla with high sensitivity and high specificity. In detecting the papillary ostium, the sensitivity was notably lower. However, when used on videos, the AI was able to identify the location of the subsequent cannulation with 100% accuracy. In the future, the neural network will be trained with more data. Thus, a suitable tool for ERCP could be established, especially for training situations.
Y1 - 2024
U6 - https://doi.org/10.1055/s-0044-1783138
VL - 56
IS - S 02
SP - S198
PB - Thieme
CY - Stuttgart
ER -
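The per-class figures reported in the two conference abstracts above (F1, sensitivity, specificity) follow from a standard confusion-matrix computation. A generic sketch for one-vs-rest evaluation, not code from the studies themselves:

```python
# Per-class F1, sensitivity, and specificity from predicted vs. true labels.
import numpy as np

def per_class_metrics(y_true: np.ndarray, y_pred: np.ndarray, cls: int):
    tp = np.sum((y_pred == cls) & (y_true == cls))
    fp = np.sum((y_pred == cls) & (y_true != cls))
    fn = np.sum((y_pred != cls) & (y_true == cls))
    tn = np.sum((y_pred != cls) & (y_true != cls))
    sensitivity = tp / (tp + fn)          # recall for this class
    specificity = tn / (tn + fp)
    f1 = 2 * tp / (2 * tp + fp + fn)      # harmonic mean of precision/recall
    return f1, sensitivity, specificity

# Averaging the per-class values over all classes gives the kind of
# "averaged over both classes" summary reported above.
```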
TY - GEN
A1 - Rückert, Tobias
A1 - Rieder, Maximilian
A1 - Rauber, David
A1 - Xiao, Michel
A1 - Humolli, Eg
A1 - Feussner, Hubertus
A1 - Wilhelm, Dirk
A1 - Palm, Christoph
T1 - Augmenting instrument segmentation in video sequences of minimally invasive surgery by synthetic smoky frames
T2 - International Journal of Computer Assisted Radiology and Surgery
KW - Surgical instrument segmentation
KW - smoke simulation
KW - unpaired image-to-image translation
KW - robot-assisted surgery
Y1 - 2023
U6 - https://doi.org/10.1007/s11548-023-02878-2
VL - 18
IS - Suppl 1
SP - S54
EP - S56
PB - Springer Nature
ER -
TY - CHAP
A1 - Gutbrod, Max
A1 - Geisler, Benedikt
A1 - Rauber, David
A1 - Palm, Christoph
ED - Maier, Andreas
ED - Deserno, Thomas M.
ED - Handels, Heinz
ED - Maier-Hein, Klaus
ED - Palm, Christoph
ED - Tolxdorff, Thomas
T1 - Data Augmentation for Images of Chronic Foot Wounds
T2 - Bildverarbeitung für die Medizin 2024: Proceedings, German Workshop on Medical Image Computing, March 10-12, 2024, Erlangen
N2 - Training data for neural networks is often scarce in the medical domain, which often results in models that struggle to generalize and consequently show poor performance on unseen datasets. Generally, adding augmentation methods to the training pipeline considerably enhances a model’s performance. Using the dataset of the Foot Ulcer Segmentation Challenge, we analyze two additional augmentation methods in the domain of chronic foot wounds: local warping of wound edges along with projection and blurring of shapes inside wounds. Our experiments show that improvements in the Dice similarity coefficient and Normalized Surface Distance metrics depend on a sensible selection of those augmentation methods.
Y1 - 2024
U6 - https://doi.org/10.1007/978-3-658-44037-4_71
SP - 261
EP - 266
PB - Springer
CY - Wiesbaden
ER -
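The two wound-specific augmentations named in the record above are specialised operations; as a rough stand-in, the following sketch combines a generic local elastic warp with blurring using the albumentations library. Parameter values are illustrative assumptions, and the generic elastic transform only approximates the paper's edge-targeted warping.

```python
# Generic stand-in for local warping + blurring augmentations (albumentations).
import albumentations as A

augment = A.Compose([
    A.ElasticTransform(alpha=50, sigma=7, p=0.5),   # local warping of the image
    A.GaussianBlur(blur_limit=(3, 7), p=0.3),       # blurring component
])

# image: H x W x 3 uint8 wound photograph, mask: H x W uint8 segmentation.
# Compose applies the same spatial transform to image and mask:
# out = augment(image=image, mask=mask)
# aug_img, aug_mask = out["image"], out["mask"]
```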
TY - GEN
A1 - Scheppach, Markus W.
A1 - Rauber, David
A1 - Mendel, Robert
A1 - Palm, Christoph
A1 - Byrne, Michael F.
A1 - Messmann, Helmut
A1 - Ebigbo, Alanna
T1 - Detection Of Celiac Disease Using A Deep Learning Algorithm
T2 - Endoscopy
N2 - Aims: Celiac disease (CD) is a complex condition caused by an autoimmune reaction to ingested gluten. Due to its polymorphic manifestation and subtle endoscopic presentation, the diagnosis is difficult and thus the disorder is underreported. We aimed to use deep learning to identify celiac disease on endoscopic images of the small bowel. Methods: Patients with small intestinal histology compatible with CD (Marsh classification I-III) were extracted retrospectively from the database of Augsburg University Hospital. They were compared to patients with no clinical signs of CD and histologically normal small intestinal mucosa. In a first step, Marsh III and normal small intestinal mucosa were differentiated with the help of a deep learning algorithm. For this, the endoscopic white light images were divided into five equal-sized subsets. We avoided splitting the images of one patient into several subsets. A ResNet-50 model was trained with the images from four subsets and then validated with the remaining subset. This process was repeated for each subset, such that each subset was validated once. Sensitivity, specificity, and harmonic mean (F1) of the algorithm were determined. Results: The algorithm showed values of 0.83, 0.88, and 0.84 for sensitivity, specificity, and F1, respectively. Further data showing a comparison between the detection rate of the AI model and that of experienced endoscopists will be available at the time of the upcoming conference. Conclusions: We present the first clinical report on the use of a deep learning algorithm for the detection of celiac disease using endoscopic images. Further evaluation on an external data set, as well as in the detection of CD in real time, will follow. However, this work at least suggests that AI can assist endoscopists in the endoscopic diagnosis of CD, and may ultimately be able to perform a true optical biopsy in real time.
KW - Celiac Disease
KW - Deep Learning
Y1 - 2021
U6 - https://doi.org/10.1055/s-0041-1724970
N1 - Digital poster exhibition
VL - 53
IS - S 01
PB - Georg Thieme Verlag
CY - Stuttgart
ER -
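The patient-wise five-subset split described above (no patient's images appear in more than one subset) corresponds to grouped cross-validation. A minimal sketch with scikit-learn's GroupKFold; the dummy data and variable names are placeholders, not the study's data handling code.

```python
# Patient-wise 5-fold cross-validation via grouped splitting.
from sklearn.model_selection import GroupKFold

# Placeholder data: two images per patient, binary labels.
image_paths = [f"img_{i:03d}.png" for i in range(10)]
labels = [i % 2 for i in range(10)]        # 0 = normal mucosa, 1 = Marsh III
patient_ids = [i // 2 for i in range(10)]  # grouping key: one id per patient

gkf = GroupKFold(n_splits=5)
for fold, (train_idx, val_idx) in enumerate(
        gkf.split(image_paths, labels, groups=patient_ids)):
    # Train e.g. a ResNet-50 on train_idx and validate on val_idx; every
    # image of a given patient ends up on exactly one side of the split.
    print(f"fold {fold}: {len(train_idx)} train / {len(val_idx)} val images")
```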
TY - JOUR
A1 - Scheppach, Markus W.
A1 - Rauber, David
A1 - Stallhofer, Johannes
A1 - Muzalyova, Anna
A1 - Otten, Vera
A1 - Manzeneder, Carolin
A1 - Schwamberger, Tanja
A1 - Wanzl, Julia
A1 - Schlottmann, Jakob
A1 - Tadic, Vidan
A1 - Probst, Andreas
A1 - Schnoy, Elisabeth
A1 - Römmele, Christoph
A1 - Fleischmann, Carola
A1 - Meinikheim, Michael
A1 - Miller, Silvia
A1 - Märkl, Bruno
A1 - Stallmach, Andreas
A1 - Palm, Christoph
A1 - Messmann, Helmut
A1 - Ebigbo, Alanna
T1 - Detection of duodenal villous atrophy on endoscopic images using a deep learning algorithm
JF - Gastrointestinal Endoscopy
N2 - Background and aims: Celiac disease with its endoscopic manifestation of villous atrophy is underdiagnosed worldwide. The application of artificial intelligence (AI) for the macroscopic detection of villous atrophy at routine esophagogastroduodenoscopy may improve diagnostic performance. Methods: A dataset of 858 endoscopic images of 182 patients with villous atrophy and 846 images from 323 patients with normal duodenal mucosa was collected and used to train a ResNet-18 deep learning model to detect villous atrophy. An external data set was used to test the algorithm, in addition to six fellows and four board-certified gastroenterologists. Fellows could consult the AI algorithm’s result during the test. Based on the distribution of these consultations, test images were stratified into “easy” and “difficult,” and performance was measured separately on each stratum. Results: External validation of the AI algorithm yielded values of 90%, 76%, and 84% for sensitivity, specificity, and accuracy, respectively. Fellows scored values of 63%, 72%, and 67%, while the corresponding values in experts were 72%, 69%, and 71%, respectively. AI consultation significantly improved all trainee performance statistics. While fellows and experts showed significantly lower performance for “difficult” images, the performance of the AI algorithm was stable. Conclusion: In this study, an AI algorithm outperformed endoscopy fellows and experts in the detection of villous atrophy on endoscopic still images. AI decision support significantly improved the performance of non-expert endoscopists. The stable performance on “difficult” images suggests a further positive add-on effect in challenging cases.
KW - celiac disease
KW - villous atrophy
KW - endoscopy detection
KW - artificial intelligence
Y1 - 2023
U6 - https://doi.org/10.1016/j.gie.2023.01.006
PB - Elsevier
ER -
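The abstract above does not specify how the consultation distribution was turned into "easy" and "difficult" strata. One plausible reading, with the threshold, names, and encoding as explicit assumptions:

```python
# Hypothetical stratification: images that many readers consulted the AI on
# are treated as "difficult"; the 0.5 threshold is an assumption.
import numpy as np

def stratify(consult_counts: np.ndarray, n_readers: int,
             threshold: float = 0.5) -> np.ndarray:
    """Label an image 'difficult' if more than `threshold` of the readers
    consulted the AI for it, else 'easy'."""
    frac = consult_counts / n_readers
    return np.where(frac > threshold, "difficult", "easy")

# e.g. stratify(np.array([0, 2, 5, 6]), n_readers=6)
# -> ['easy', 'easy', 'difficult', 'difficult']
```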