TY - JOUR A1 - Ebigbo, Alanna A1 - Mendel, Robert A1 - Scheppach, Markus W. A1 - Probst, Andreas A1 - Shahidi, Neal A1 - Prinz, Friederike A1 - Fleischmann, Carola A1 - Römmele, Christoph A1 - Gölder, Stefan Karl A1 - Braun, Georg A1 - Rauber, David A1 - Rückert, Tobias A1 - Souza Jr., Luis Antonio de A1 - Papa, João Paulo A1 - Byrne, Michael F. A1 - Palm, Christoph A1 - Messmann, Helmut T1 - Vessel and tissue recognition during third-space endoscopy using a deep learning algorithm JF - Gut N2 - In this study, we aimed to develop an artificial intelligence clinical decision support solution to mitigate operator-dependent limitations, for example bleeding and perforation, during complex endoscopic procedures such as endoscopic submucosal dissection and peroral endoscopic myotomy. A DeepLabv3-based model was trained to delineate vessels, tissue structures and instruments on endoscopic still images from such procedures. The mean cross-validated Intersection over Union and Dice Score were 63% and 76%, respectively. Applied to standardised video clips from third-space endoscopic procedures, the algorithm showed a mean vessel detection rate of 85% with a false-positive rate of 0.75/min. These performance statistics suggest a potential clinical benefit for procedure safety, procedure time and training. KW - Artificial Intelligence KW - Endoscopy KW - Medical Image Computing Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:898-opus4-54293 VL - 71 IS - 12 SP - 2388 EP - 2390 PB - BMJ CY - London ER - TY - JOUR A1 - Meinikheim, Michael A1 - Mendel, Robert A1 - Palm, Christoph A1 - Probst, Andreas A1 - Muzalyova, Anna A1 - Scheppach, Markus W. A1 - Nagl, Sandra A1 - Schnoy, Elisabeth A1 - Römmele, Christoph A1 - Schulz, Dominik Andreas Helmut Otto A1 - Schlottmann, Jakob A1 - Prinz, Friederike A1 - Rauber, David A1 - Rückert, Tobias A1 - Matsumura, Tomoaki A1 - Fernández-Esparrach, Glòria A1 - Parsa, Nasim A1 - Byrne, Michael F. A1 - Messmann, Helmut A1 - Ebigbo, Alanna T1 - Influence of artificial intelligence on the diagnostic performance of endoscopists in the assessment of Barrett’s esophagus: a tandem randomized and video trial JF - Endoscopy N2 - Background This study evaluated the effect of an artificial intelligence (AI)-based clinical decision support system on the performance and diagnostic confidence of endoscopists in their assessment of Barrett’s esophagus (BE). Methods 96 standardized endoscopy videos were assessed by 22 endoscopists with varying degrees of BE experience from 12 centers. Assessment was randomized into two video sets: group A (review first without AI and second with AI) and group B (review first with AI and second without AI). Endoscopists were required to evaluate each video for the presence of Barrett’s esophagus-related neoplasia (BERN) and then decide on a spot for a targeted biopsy. After the second assessment, they were allowed to change their clinical decision and confidence level. Results AI had a stand-alone sensitivity, specificity, and accuracy of 92.2%, 68.9%, and 81.3%, respectively. Without AI, BE experts had an overall sensitivity, specificity, and accuracy of 83.3%, 58.1%, and 71.5%, respectively. BE nonexperts showed a significant improvement in sensitivity and specificity when videos were assessed a second time with AI (sensitivity 69.8% [95%CI 65.2%–74.2%] to 78.0% [95%CI 74.0%–82.0%]; specificity 67.3% [95%CI 62.5%–72.2%] to 72.7% [95%CI 68.2%–77.3%]). 
In addition, the diagnostic confidence of BE nonexperts improved significantly with AI. Conclusion BE nonexperts benefitted significantly from additional AI. BE experts and nonexperts remained significantly below the stand-alone performance of AI, suggesting that there may be other factors influencing endoscopists’ decisions to follow or discard AI advice. KW - Artificial Intelligence KW - Endoscopy KW - Medical Image Computing Y1 - 2024 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:898-opus4-72818 VL - 56 SP - 641 EP - 649 PB - Georg Thieme Verlag CY - Stuttgart ER - TY - GEN A1 - Ebigbo, Alanna A1 - Rauber, David A1 - Ayoub, Mousa A1 - Birzle, Lisa A1 - Matsumura, Tomoaki A1 - Probst, Andreas A1 - Steinbrück, Ingo A1 - Nagl, Sandra A1 - Römmele, Christoph A1 - Meinikheim, Michael A1 - Scheppach, Markus W. A1 - Palm, Christoph A1 - Messmann, Helmut T1 - Early Esophageal Cancer and the Generalizability of Artificial Intelligence T2 - Endoscopy N2 - Aims Artificial Intelligence (AI) systems in gastrointestinal endoscopy are narrow because they are trained to solve only one specific task. Unlike Narrow-AI, general AI systems may be able to solve multiple and unrelated tasks. We aimed to understand whether an AI system trained to detect, characterize, and segment early Barrett’s neoplasia (Barrett’s AI) is only capable of detecting this pathology or can also detect and segment other diseases like early squamous cell cancer (SCC). Methods 120 white light (WL) and narrow-band endoscopic images (NBI) from 60 patients (1 WL and 1 NBI image per patient) were extracted from the endoscopic database of the University Hospital Augsburg. Images were annotated by three expert endoscopists with extensive experience in the diagnosis and endoscopic resection of early esophageal neoplasias. An AI system based on the DeepLabV3+ architecture dedicated to early Barrett’s neoplasia was tested on these images. The AI system was neither trained with SCC images nor had it seen the test images prior to evaluation. The overlap between the three expert annotations (“expert-agreement”) was the ground truth for evaluating AI performance. Results Barrett’s AI detected early SCC with a mean intersection over reference (IoR) of 92% when at least 1 pixel of the AI prediction overlapped with the expert-agreement. When the threshold was increased to 5%, 10%, and 20% overlap with the expert-agreement, the IoR was 88%, 85% and 82%, respectively. The mean Intersection Over Union (IoU) – a metric of segmentation quality between the AI prediction and the expert-agreement – was 0.45. The mean expert IoU as a measure of agreement between the three experts was 0.60. Conclusions In the context of this pilot study, the predictions of SCC by a Barrett’s dedicated AI showed some overlap with the expert-agreement. Therefore, features learned from Barrett’s cancer-related training might also be helpful for SCC prediction. Our results allow for different possible explanations. On the one hand, some Barrett’s cancer features generalize toward the related task of assessing early SCC. On the other hand, the Barrett’s AI may be less a Barrett’s cancer-specific detector than a general predictor of pathological tissue. However, we expect to enhance the detection quality significantly by extending the training to SCC-specific data. The insight of this study opens the way towards a transfer learning approach for more efficient training of AI to solve tasks in other domains. 
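Editorial note: the overlap statistics reported in the abstract above (IoR, IoU, and the Dice score used in related entries) can be made concrete with a minimal sketch for binary segmentation masks. This is an assumed, simplified formulation for illustration only, not the code used in any of the cited studies; the function name, the NumPy-based implementation, and the detection threshold in the usage line are assumptions.

```python
import numpy as np

def overlap_metrics(pred, ref):
    """Overlap metrics between a predicted and a reference binary mask.

    Illustrative sketch (assumed formulation): `pred` is the AI segmentation,
    `ref` the expert-agreement mask, both boolean arrays of equal shape.
    """
    pred, ref = pred.astype(bool), ref.astype(bool)
    inter = np.logical_and(pred, ref).sum()
    union = np.logical_or(pred, ref).sum()
    iou = inter / union if union else 0.0                              # Intersection over Union
    dice = 2 * inter / (pred.sum() + ref.sum()) if (pred.sum() + ref.sum()) else 0.0
    ior = inter / ref.sum() if ref.sum() else 0.0                      # Intersection over Reference
    return iou, dice, ior

# Hypothetical usage: count a lesion as "detected" when the overlap with the
# expert-agreement exceeds a chosen threshold (e.g., at least 1 pixel, or 5%).
iou, dice, ior = overlap_metrics(np.ones((4, 4), bool), np.ones((4, 4), bool))
detected = ior >= 0.05
```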
Y1 - 2024 U6 - https://doi.org/10.1055/s-0044-1783775 VL - 56 IS - S 02 SP - S428 PB - Thieme CY - Stuttgart ER - TY - GEN A1 - Scheppach, Markus W. A1 - Mendel, Robert A1 - Rauber, David A1 - Probst, Andreas A1 - Nagl, Sandra A1 - Römmele, Christoph A1 - Meinikheim, Michael A1 - Palm, Christoph A1 - Messmann, Helmut A1 - Ebigbo, Alanna T1 - Artificial Intelligence (AI) improves endoscopists’ vessel detection during endoscopic submucosal dissection (ESD) T2 - Endoscopy N2 - Aims While AI has been successfully implemented in detecting and characterizing colonic polyps, its role in therapeutic endoscopy remains to be elucidated. Especially third space endoscopy procedures like ESD and peroral endoscopic myotomy (POEM) pose a technical challenge and carry the risk of operator-dependent complications like intraprocedural bleeding and perforation. Therefore, we aimed at developing an AI algorithm for intraprocedural real-time vessel detection during ESD and POEM. Methods A training dataset consisting of 5470 annotated still images from 59 full-length videos (47 ESD, 12 POEM) and 179,681 unlabeled images was used to train a DeepLabV3+ neural network with the ECMT semi-supervised learning method. Evaluation for vessel detection rate (VDR) and time (VDT) of 19 endoscopists with and without AI support was performed using a testing dataset of 101 standardized video clips with 200 predefined blood vessels. Endoscopists were stratified into trainees and experts in third space endoscopy. Results The AI algorithm had a mean VDR of 93.5% and a median VDT of 0.32 seconds. AI support was associated with a statistically significant increase in VDR from 54.9% to 73.0% and from 59.0% to 74.1% for trainees and experts, respectively. VDT significantly decreased from 7.21 sec to 5.09 sec for trainees and from 6.10 sec to 5.38 sec for experts in the AI-support group. False positive (FP) readings occurred in 4.5% of frames. FP structures were displayed for a significantly shorter time than true positives (0.71 sec vs. 5.99 sec). Conclusions AI improved VDR and VDT of trainees and experts in third space endoscopy and may reduce performance variability during training. Further research is needed to evaluate the clinical impact of this new technology. Y1 - 2024 U6 - https://doi.org/10.1055/s-0044-1782891 VL - 56 IS - S 02 SP - S93 PB - Thieme CY - Stuttgart ER - TY - GEN A1 - Zellmer, Stephan A1 - Rauber, David A1 - Probst, Andreas A1 - Weber, Tobias A1 - Braun, Georg A1 - Römmele, Christoph A1 - Nagl, Sandra A1 - Schnoy, Elisabeth A1 - Messmann, Helmut A1 - Ebigbo, Alanna A1 - Palm, Christoph T1 - Artificial intelligence as a tool in the detection of the papillary ostium during ERCP T2 - Endoscopy N2 - Aims Endoscopic retrograde cholangiopancreatography (ERCP) is the gold standard in the diagnosis as well as treatment of diseases of the pancreatobiliary tract. However, it is technically complex and has a relatively high complication rate. In particular, cannulation of the papillary ostium remains challenging. The aim of this study is to examine whether a deep-learning algorithm can reliably detect the major duodenal papilla and in particular the papillary ostium, and could therefore be a valuable tool for inexperienced endoscopists, particularly in training situations. Methods We analyzed a total of 654 retrospectively collected images of 85 patients. Both the major duodenal papilla and the ostium were then segmented. Afterwards, a neural network was trained using a deep-learning algorithm. 
A 5-fold cross-validation was performed. Subsequently, we ran the algorithm on 5 prospectively collected videos of ERCPs. Results 5-fold cross-validation on the 654 labeled images resulted in an F1 value of 0.8007, a sensitivity of 0.8409 and a specificity of 0.9757 for the class papilla, and an F1 value of 0.5724, a sensitivity of 0.5456 and a specificity of 0.9966 for the class ostium. Regardless of the class, the average F1 value (class papilla and class ostium) was 0.6866, the sensitivity 0.6933 and the specificity 0.9861. In 100% of cases, the AI-detected localization of the papillary ostium in the prospectively collected videos corresponded to the localization of the cannulation performed by the endoscopist. Conclusions In the present study, the neural network was able to identify the major duodenal papilla with a high sensitivity and high specificity. In detecting the papillary ostium, the sensitivity was notably lower. However, when used on videos, the AI was able to identify the location of the subsequent cannulation with 100% accuracy. In the future, the neural network will be trained with more data. Thus, a suitable tool for ERCP could be established, especially in the training situation. Y1 - 2024 U6 - https://doi.org/10.1055/s-0044-1783138 VL - 56 IS - S 02 SP - S198 PB - Thieme CY - Stuttgart ER - TY - GEN A1 - Scheppach, Markus W. A1 - Nunes, Danilo Weber A1 - Arizi, X. A1 - Rauber, David A1 - Probst, Andreas A1 - Nagl, Sandra A1 - Römmele, Christoph A1 - Meinikheim, Michael A1 - Palm, Christoph A1 - Messmann, Helmut A1 - Ebigbo, Alanna T1 - Procedural phase recognition in endoscopic submucosal dissection (ESD) using artificial intelligence (AI) T2 - Endoscopy N2 - Aims Recent evidence suggests the possibility of intraprocedural phase recognition in surgical operations as well as endoscopic interventions such as peroral endoscopic myotomy and endoscopic submucosal dissection (ESD) by AI-algorithms. The precise measurement of intraprocedural phase distribution may deepen the understanding of the procedure. Furthermore, real-time quality assessment as well as automation of reporting may become possible. Therefore, we aimed to develop an AI-algorithm for intraprocedural phase recognition during ESD. Methods A training dataset of 364,385 single images from 9 full-length ESD videos was compiled. Each frame was classified into one procedural phase. Phases included scope manipulation, marking, injection, application of electrical current and bleeding. Each frame could be allocated to only one category. This training dataset was used to train a Video Swin transformer to recognize the phases. Temporal information was included via logarithmic frame sampling. Validation was performed using two separate ESD videos with 29,801 single frames. Results The validation yielded sensitivities of 97.81%, 97.83%, 95.53%, 85.01% and 87.55% for scope manipulation, marking, injection, electric application and bleeding, respectively. Specificities of 77.78%, 90.91%, 95.91%, 93.65% and 84.76% were measured for the same parameters. Conclusions The developed algorithm was able to classify full-length ESD videos on a frame-by-frame basis into the predefined classes with high sensitivities and specificities. Future research will aim at the development of quality metrics based on single-operator phase distribution. Y1 - 2024 U6 - https://doi.org/10.1055/s-0044-1783804 VL - 56 IS - S 02 SP - S439 PB - Thieme CY - Stuttgart ER - TY - GEN A1 - Scheppach, Markus W. 
A1 - Rauber, David A1 - Stallhofer, Johannes A1 - Muzalyova, Anna A1 - Otten, Vera A1 - Manzeneder, Carolin A1 - Schwamberger, Tanja A1 - Wanzl, Julia A1 - Schlottmann, Jakob A1 - Tadic, Vidan A1 - Probst, Andreas A1 - Schnoy, Elisabeth A1 - Römmele, Christoph A1 - Fleischmann, Carola A1 - Meinikheim, Michael A1 - Miller, Silvia A1 - Märkl, Bruno A1 - Palm, Christoph A1 - Messmann, Helmut A1 - Ebigbo, Alanna T1 - Performance comparison of a deep learning algorithm with endoscopists in the detection of duodenal villous atrophy (VA) T2 - Endoscopy N2 - Aims  VA is an endoscopic finding of celiac disease (CD), which can easily be missed if pretest probability is low. In this study, we aimed to develop an artificial intelligence (AI) algorithm for the detection of villous atrophy on endoscopic images. Methods 858 images from 182 patients with VA and 846 images from 323 patients with normal duodenal mucosa were used for training and internal validation of an AI algorithm (ResNet18). A separate dataset was used for external validation, as well as determination of detection performance of experts, trainees and trainees with AI support. According to the AI consultation distribution, images were stratified into “easy” and “difficult”. Results Internal validation showed 82%, 85% and 84% for sensitivity, specificity and accuracy. External validation showed 90%, 76% and 84%. The algorithm was significantly more sensitive and accurate than trainees, trainees with AI support and experts in endoscopy. AI support in trainees was associated with significantly improved performance. While all endoscopists showed significantly lower detection rates for “difficult” images, AI performance remained stable. Conclusions The algorithm outperformed trainees and experts in sensitivity and accuracy for VA detection. The significant improvement with AI support suggests a potential clinical benefit. Stable performance of the algorithm in “easy” and “difficult” test images may indicate an advantage in macroscopically challenging cases. Y1 - 2023 U6 - https://doi.org/10.1055/s-0043-1765421 VL - 55 IS - S02 PB - Thieme ER - TY - GEN A1 - Scheppach, Markus W. A1 - Weber Nunes, Danilo A1 - Arizi, X. A1 - Rauber, David A1 - Probst, Andreas A1 - Nagl, Sandra A1 - Römmele, Christoph A1 - Palm, Christoph A1 - Messmann, Helmut A1 - Ebigbo, Alanna T1 - Single frame workflow recognition during endoscopic submucosal dissection (ESD) using artificial intelligence (AI) T2 - Endoscopy N2 - Aims  Precise surgical phase recognition and evaluation may improve our understanding of complex endoscopic procedures. Furthermore, quality control measurements and endoscopy training could benefit from objective descriptions of surgical phase distributions. Therefore, we aimed to develop an artificial intelligence algorithm for frame-by-frame operational phase recognition during endoscopic submucosal dissection (ESD). Methods  Full-length ESD videos from 31 patients comprising 6,297,782 single images were collected retrospectively. Videos were annotated on a frame-by-frame basis for the operational macro-phases diagnostics, marking, injection, dissection and bleeding. Further subphases were the application of electrical current, visible injection of fluid into the submucosal space and scope manipulation, leading to 11 phases in total. 4,975,699 frames (21 patients) were used for training of a Video Swin transformer using uniform frame sampling for temporal information. 
Hyperparameter tuning was performed with 897,325 further frames (6 patients), while 424,758 frames (4 patients) were used for validation. Results  The overall F1 scores on the test dataset for the macro-phases and all 11 phases were 0.96 and 0.90, respectively. The recall values for diagnostics, marking, injection, dissection and bleeding were 1.00, 1.00, 0.95, 0.96 and 0.93, respectively. Conclusions  The algorithm classified operational phases during ESD with high accuracy. A precise evaluation of phase distribution may allow for the development of objective quality metrics for quality control and training. Y1 - 2025 U6 - https://doi.org/10.1055/s-0045-1806324 VL - 57 IS - S 02 SP - S511 PB - Thieme CY - Stuttgart ER - TY - GEN A1 - Roser, David A1 - Meinikheim, Michael A1 - Mendel, Robert A1 - Palm, Christoph A1 - Probst, Andreas A1 - Muzalyova, Anna A1 - Scheppach, Markus W. A1 - Nagl, Sandra A1 - Schnoy, Elisabeth A1 - Römmele, Christoph A1 - Schulz, Dominik Andreas Helmut Otto A1 - Schlottmann, Jakob A1 - Prinz, Friederike A1 - Rauber, David A1 - Rückert, Tobias A1 - Matsumura, Tomoaki A1 - Fernandez-Esparrach, G. A1 - Parsa, Nasim A1 - Byrne, Michael F. A1 - Messmann, Helmut A1 - Ebigbo, Alanna T1 - Human-Computer Interaction: Impact of Artificial Intelligence on the diagnostic confidence of endoscopists assessing videos of Barrett’s esophagus T2 - Endoscopy N2 - Aims Human-computer interactions (HCI) may have a relevant impact on the performance of Artificial Intelligence (AI). Studies show that although endoscopists assessing Barrett’s esophagus (BE) with AI improve their performance significantly, they do not achieve the level of the stand-alone performance of AI. One aspect of HCI is the impact of AI on the degree of certainty and confidence displayed by the endoscopist. Indirectly, diagnostic confidence when using AI may be linked to trust and acceptance of AI. In a BE video study, we aimed to understand the impact of AI on the diagnostic confidence of endoscopists and the possible correlation with diagnostic performance. Methods 22 endoscopists from 12 centers with varying levels of BE experience reviewed ninety-six standardized endoscopy videos. Endoscopists were categorized into experts and non-experts and randomly assigned to assess the videos with and without AI. Participants were randomized into two arms: Arm A assessed videos first without AI and then with AI, while Arm B assessed videos in the opposite order. Evaluators were tasked with identifying BE-related neoplasia and rating their confidence with and without AI on a scale from 0 to 9. Results The utilization of AI in Arm A (without AI first, with AI second) significantly elevated confidence levels for experts and non-experts (7.1 to 8.0 and 6.1 to 6.6, respectively). Only non-experts benefitted from AI with a significant increase in accuracy (68.6% to 75.5%). Interestingly, while the confidence levels of experts without AI were higher than those of non-experts with AI, there was no significant difference in accuracy between these two groups (71.3% vs. 75.5%). In Arm B (with AI first, without AI second), experts and non-experts experienced a significant reduction in confidence (7.6 to 7.1 and 6.4 to 6.2, respectively), while maintaining consistent accuracy levels (71.8% to 71.8% and 67.5% to 67.1%, respectively). Conclusions AI significantly enhanced confidence levels for both expert and non-expert endoscopists. Endoscopists felt significantly more uncertain in their assessments without AI. 
Furthermore, experts with or without AI consistently displayed higher confidence levels than non-experts with AI, despite comparable outcomes. These findings underscore the possible role of AI in improving diagnostic confidence during endoscopic assessment. Y1 - 2024 U6 - https://doi.org/10.1055/s-0044-1782859 SN - 1438-8812 VL - 56 IS - S 02 SP - 79 PB - Georg Thieme Verlag ER - TY - JOUR A1 - Ebigbo, Alanna A1 - Mendel, Robert A1 - Rückert, Tobias A1 - Schuster, Laurin A1 - Probst, Andreas A1 - Manzeneder, Johannes A1 - Prinz, Friederike A1 - Mende, Matthias A1 - Steinbrück, Ingo A1 - Faiss, Siegbert A1 - Rauber, David A1 - Souza Jr., Luis Antonio de A1 - Papa, João Paulo A1 - Deprez, Pierre A1 - Oyama, Tsuneo A1 - Takahashi, Akiko A1 - Seewald, Stefan A1 - Sharma, Prateek A1 - Byrne, Michael F. A1 - Palm, Christoph A1 - Messmann, Helmut T1 - Endoscopic prediction of submucosal invasion in Barrett’s cancer with the use of Artificial Intelligence: A pilot Study JF - Endoscopy N2 - Background and aims: The accurate differentiation between T1a and T1b Barrett’s cancer has both therapeutic and prognostic implications but is challenging even for experienced physicians. We trained an Artificial Intelligence (AI) system on the basis of deep artificial neural networks (deep learning) to differentiate between T1a and T1b Barrett’s cancer on white-light images. Methods: Endoscopic images from three tertiary care centres in Germany were collected retrospectively. A deep learning system was trained and tested using the principles of cross-validation. A total of 230 white-light endoscopic images (108 T1a and 122 T1b) was evaluated with the AI-system. For comparison, the images were also classified by experts specialized in endoscopic diagnosis and treatment of Barrett’s cancer. Results: The sensitivity, specificity, F1 and accuracy of the AI-system in the differentiation between T1a and T1b cancer lesions were 0.77, 0.64, 0.73 and 0.71, respectively. There was no statistically significant difference between the performance of the AI-system and that of human experts, with sensitivity, specificity, F1 and accuracy of 0.63, 0.78, 0.67 and 0.70, respectively. Conclusion: This pilot study demonstrates the first multicenter application of an AI-based system in the prediction of submucosal invasion in endoscopic images of Barrett’s cancer. AI scored on par with international experts in the field, but more work is necessary to improve the system and apply it to video sequences and in a real-life setting. Nevertheless, the correct prediction of submucosal invasion in Barrett’s cancer remains challenging for both experts and AI. KW - Machine Learning KW - Neural Networks KW - Esophageal Cancer KW - Diagnosis KW - Artificial Intelligence KW - Adenocarcinoma KW - Barrett’s cancer KW - submucosal invasion Y1 - 2021 U6 - https://doi.org/10.1055/a-1311-8570 VL - 53 IS - 09 SP - 878 EP - 883 PB - Thieme CY - Stuttgart ER - TY - JOUR A1 - Mendel, Robert A1 - Rauber, David A1 - Souza Jr., Luis Antonio de A1 - Papa, João Paulo A1 - Palm, Christoph T1 - Error-Correcting Mean-Teacher: Corrections instead of consistency-targets applied to semi-supervised medical image segmentation JF - Computers in Biology and Medicine N2 - Semantic segmentation is an essential task in medical imaging research. Many powerful deep-learning-based approaches can be employed for this problem, but they are dependent on the availability of an expansive labeled dataset. 
In this work, we augment such supervised segmentation models to be suitable for learning from unlabeled data. Our semi-supervised approach, termed Error-Correcting Mean-Teacher, uses an exponential moving average model like the original Mean Teacher but introduces our new paradigm of error correction. The original segmentation network is augmented to handle this secondary correction task. Both tasks build upon the core feature extraction layers of the model. For the correction task, features detected in the input image are fused with features detected in the predicted segmentation and further processed with task-specific decoder layers. The combination of image and segmentation features allows the model to correct mistakes present in the given input pair. The correction task is trained jointly on the labeled data. On unlabeled data, the exponential moving average of the original network corrects the student’s prediction. The combined output of the student’s prediction and the teacher’s correction forms the basis for the semi-supervised update. We evaluate our method with the 2017 and 2018 Robotic Scene Segmentation data, the ISIC 2017 and the BraTS 2020 Challenges, a proprietary Endoscopic Submucosal Dissection dataset, Cityscapes, and Pascal VOC 2012. Additionally, we analyze the impact of the individual components and examine the behavior when the amount of labeled data varies, with experiments performed on two distinct segmentation architectures. Our method shows improvements in terms of the mean Intersection over Union over the supervised baseline and competing methods. Code is available at https://github.com/CloneRob/ECMT. KW - Semi-supervised Segmentation KW - Mean-Teacher KW - Pseudo-labels KW - Medical Imaging Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:898-opus4-57790 SN - 0010-4825 N1 - Corresponding author at OTH Regensburg: Robert Mendel VL - 154 IS - March PB - Elsevier ER - TY - JOUR A1 - Römmele, Christoph A1 - Mendel, Robert A1 - Barrett, Caroline A1 - Kiesl, Hans A1 - Rauber, David A1 - Rückert, Tobias A1 - Kraus, Lisa A1 - Heinkele, Jakob A1 - Dhillon, Christine A1 - Grosser, Bianca A1 - Prinz, Friederike A1 - Wanzl, Julia A1 - Fleischmann, Carola A1 - Nagl, Sandra A1 - Schnoy, Elisabeth A1 - Schlottmann, Jakob A1 - Dellon, Evan S. A1 - Messmann, Helmut A1 - Palm, Christoph A1 - Ebigbo, Alanna T1 - An artificial intelligence algorithm is highly accurate for detecting endoscopic features of eosinophilic esophagitis JF - Scientific Reports N2 - The endoscopic features associated with eosinophilic esophagitis (EoE) may be missed during routine endoscopy. We aimed to develop and evaluate an Artificial Intelligence (AI) algorithm for detecting and quantifying the endoscopic features of EoE in white light images, supplemented by the EoE Endoscopic Reference Score (EREFS). An AI algorithm (AI-EoE) was constructed and trained to differentiate between EoE and normal esophagus using endoscopic white light images extracted from the database of the University Hospital Augsburg. In addition to binary classification, a second algorithm was trained with specific auxiliary branches for each EREFS feature (AI-EoE-EREFS). The AI algorithms were evaluated on an external data set from the University of North Carolina, Chapel Hill (UNC), and compared with the performance of human endoscopists with varying levels of experience. The overall sensitivity, specificity, and accuracy of AI-EoE were 0.93 for all measures, while the AUC was 0.986. 
With additional auxiliary branches for the EREFS categories, the performance of the AI algorithm (AI-EoE-EREFS) improved to 0.96, 0.94, 0.95, and 0.992 for sensitivity, specificity, accuracy, and AUC, respectively. AI-EoE and AI-EoE-EREFS performed significantly better than endoscopy beginners and senior fellows on the same set of images. An AI algorithm can be trained to detect and quantify endoscopic features of EoE with excellent performance scores. The addition of the EREFS criteria improved the performance of the AI algorithm, which performed significantly better than endoscopists with a lower or medium experience level. KW - Artificial Intelligence KW - Smart Endoscopy KW - eosinophilic esophagitis Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:898-opus4-46928 VL - 12 PB - Nature Portfolio CY - London ER - TY - GEN A1 - Römmele, Christoph A1 - Mendel, Robert A1 - Rauber, David A1 - Rückert, Tobias A1 - Byrne, Michael F. A1 - Palm, Christoph A1 - Messmann, Helmut A1 - Ebigbo, Alanna T1 - Endoscopic Diagnosis of Eosinophilic Esophagitis Using a Deep Learning Algorithm T2 - Endoscopy N2 - Aims Eosinophilic esophagitis (EoE) is easily missed during endoscopy, either because physicians are not familiar with its endoscopic features or the morphologic changes are too subtle. In this preliminary paper, we present the first attempt to detect EoE in endoscopic white light (WL) images using a deep learning network (EoE-AI). Methods 401 WL images of eosinophilic esophagitis and 871 WL images of normal esophageal mucosa were evaluated. All images were assessed for the Endoscopic Reference Score (EREFS) (edema, rings, exudates, furrows, strictures). Images with strictures were excluded. EoE was defined as the presence of at least 15 eosinophils per high power field on biopsy. A convolutional neural network based on the ResNet architecture with several five-fold cross-validation runs was used. Adding auxiliary EREFS-classification branches to the neural network allowed the inclusion of the scores as optimization criteria during training. EoE-AI was evaluated for sensitivity, specificity, and F1-score. In addition, two human endoscopists evaluated the images. Results EoE-AI showed a mean sensitivity, specificity, and F1 of 0.759, 0.976, and 0.834, respectively, averaged over the five distinct cross-validation runs. With the EREFS-augmented architecture, a mean sensitivity, specificity, and F1-score of 0.848, 0.945, and 0.861 could be demonstrated, respectively. In comparison, the two human endoscopists had an average sensitivity, specificity, and F1-score of 0.718, 0.958, and 0.793. Conclusions To the best of our knowledge, this is the first application of deep learning to endoscopic images of EoE which were also assessed after augmentation with the EREFS-score. The next step is the evaluation of EoE-AI using an external dataset. We then plan to assess the EoE-AI tool on endoscopic videos, and also in real-time. This preliminary work is encouraging regarding the ability of AI to enhance physician detection of EoE, and potentially to do a true “optical biopsy”, but more work is needed. KW - Eosinophilic Esophagitis KW - Endoscopy KW - Deep Learning Y1 - 2021 U6 - https://doi.org/10.1055/s-0041-1724274 VL - 53 IS - S 01 PB - Georg Thieme Verlag CY - Stuttgart ER - TY - JOUR A1 - Scheppach, Markus W. 
A1 - Rauber, David A1 - Stallhofer, Johannes A1 - Muzalyova, Anna A1 - Otten, Vera A1 - Manzeneder, Carolin A1 - Schwamberger, Tanja A1 - Wanzl, Julia A1 - Schlottmann, Jakob A1 - Tadic, Vidan A1 - Probst, Andreas A1 - Schnoy, Elisabeth A1 - Römmele, Christoph A1 - Fleischmann, Carola A1 - Meinikheim, Michael A1 - Miller, Silvia A1 - Märkl, Bruno A1 - Stallmach, Andreas A1 - Palm, Christoph A1 - Messmann, Helmut A1 - Ebigbo, Alanna T1 - Detection of duodenal villous atrophy on endoscopic images using a deep learning algorithm JF - Gastrointestinal Endoscopy N2 - Background and aims Celiac disease with its endoscopic manifestation of villous atrophy is underdiagnosed worldwide. The application of artificial intelligence (AI) for the macroscopic detection of villous atrophy at routine esophagogastroduodenoscopy may improve diagnostic performance. Methods A dataset of 858 endoscopic images of 182 patients with villous atrophy and 846 images from 323 patients with normal duodenal mucosa was collected and used to train a ResNet 18 deep learning model to detect villous atrophy. An external data set was used to test the algorithm, in addition to six fellows and four board-certified gastroenterologists. Fellows could consult the AI algorithm’s result during the test. From their consultation distribution, a stratification of test images into “easy” and “difficult” was performed and used for stratified performance measurement. Results External validation of the AI algorithm yielded values of 90 %, 76 %, and 84 % for sensitivity, specificity, and accuracy, respectively. Fellows scored values of 63 %, 72 % and 67 %, while the corresponding values in experts were 72 %, 69 % and 71 %, respectively. AI consultation significantly improved all trainee performance statistics. While fellows and experts showed significantly lower performance for “difficult” images, the performance of the AI algorithm was stable. Conclusion In this study, an AI algorithm outperformed endoscopy fellows and experts in the detection of villous atrophy on endoscopic still images. AI decision support significantly improved the performance of non-expert endoscopists. The stable performance on “difficult” images suggests a further positive add-on effect in challenging cases. KW - celiac disease KW - villous atrophy KW - endoscopy detection KW - artificial intelligence Y1 - 2023 U6 - https://doi.org/10.1016/j.gie.2023.01.006 PB - Elsevier ER - TY - JOUR A1 - Scheppach, Markus W. A1 - Mendel, Robert A1 - Muzalyova, Anna A1 - Rauber, David A1 - Probst, Andreas A1 - Nagl, Sandra A1 - Römmele, Christoph A1 - Yip, Hon Chi A1 - Lau, Louis Ho Shing A1 - Gölder, Stefan Karl A1 - Schmidt, Arthur A1 - Kouladouros, Konstantinos A1 - Abdelhafez, Mohamed A1 - Walter, Benjamin M. A1 - Meinikheim, Michael A1 - Chiu, Philip Wai Yan A1 - Palm, Christoph A1 - Messmann, Helmut A1 - Ebigbo, Alanna T1 - Artificial intelligence improves submucosal vessel detection during third space endoscopy JF - Endoscopy N2 - Background and study aims: While artificial intelligence (AI) shows high potential in decision support for diagnostic gastrointestinal endoscopy, its role in therapeutic endoscopy remains unclear. Third space endoscopic procedures pose the risk of intraprocedural bleeding. Therefore, we aimed to develop an AI algorithm for intraprocedural blood vessel detection. 
Patients and Methods: Using a test dataset with 101 standardized video clips containing 200 predefined submucosal blood vessels, 19 endoscopists were evaluated for the vessel detection rate (VDR) and time (VDT) with and without support of an AI algorithm. Test subjects were grouped according to experience in ESD. Results: With AI support, endoscopists’ VDR increased from 56.4% [CI 54.1–58.6] to 72.4% [CI 70.3–74.4]. Endoscopists’ VDT dropped from 6.7 sec [CI 6.2–7.1] to 5.2 sec [CI 4.8–5.7]. False positive (FP) readings appeared in 4.5% of frames and were marked for a significantly shorter time than true positives (0.7 sec [CI 0.55–0.87] vs. 6.0 sec [CI 5.28–6.70]). Conclusions: AI improved the vessel detection rate and time of endoscopists during third space endoscopy. While these data need to be corroborated by clinical trials, AI may prove to be an invaluable tool for the improvement of endoscopic interventions. KW - Artificial Intelligence KW - Third Space Endoscopy Y1 - 2025 U6 - https://doi.org/10.1055/a-2534-1164 PB - Thieme CY - Stuttgart ER - TY - GEN A1 - Scheppach, Markus W. A1 - Rauber, David A1 - Mendel, Robert A1 - Palm, Christoph A1 - Byrne, Michael F. A1 - Messmann, Helmut A1 - Ebigbo, Alanna T1 - Detection Of Celiac Disease Using A Deep Learning Algorithm T2 - Endoscopy N2 - Aims Celiac disease (CD) is a complex condition caused by an autoimmune reaction to ingested gluten. Due to its polymorphic manifestation and subtle endoscopic presentation, the diagnosis is difficult and thus the disorder is underreported. We aimed to use deep learning to identify celiac disease on endoscopic images of the small bowel. Methods Patients with small intestinal histology compatible with CD (Marsh classification I–III) were extracted retrospectively from the database of Augsburg University Hospital. They were compared to patients with no clinical signs of CD and histologically normal small intestinal mucosa. In a first step, Marsh III and normal small intestinal mucosa were differentiated with the help of a deep learning algorithm. For this, the endoscopic white light images were divided into five equal-sized subsets. We avoided splitting the images of one patient into several subsets. A ResNet-50 model was trained with the images from four subsets and then validated with the remaining subset. This process was repeated for each subset, such that each subset was validated once. Sensitivity, specificity, and harmonic mean (F1) of the algorithm were determined. Results The algorithm showed values of 0.83, 0.88, and 0.84 for sensitivity, specificity, and F1, respectively. Further data showing a comparison between the detection rate of the AI model and that of experienced endoscopists will be available at the time of the upcoming conference. Conclusions We present the first clinical report on the use of a deep learning algorithm for the detection of celiac disease using endoscopic images. Further evaluation on an external data set, as well as in the detection of CD in real-time, will follow. However, this work at least suggests that AI can assist endoscopists in the endoscopic diagnosis of CD, and ultimately may be able to do a true optical biopsy in real time. KW - Celiac Disease KW - Deep Learning Y1 - 2021 U6 - https://doi.org/10.1055/s-0041-1724970 N1 - Digital poster exhibition VL - 53 IS - S 01 PB - Georg Thieme Verlag CY - Stuttgart ER - TY - CHAP A1 - Rauber, David A1 - Mendel, Robert A1 - Scheppach, Markus W. 
A1 - Ebigbo, Alanna A1 - Messmann, Helmut A1 - Palm, Christoph T1 - Analysis of Celiac Disease with Multimodal Deep Learning T2 - Bildverarbeitung für die Medizin 2022: Proceedings, German Workshop on Medical Image Computing, Heidelberg, June 26-28, 2022 N2 - Celiac disease is an autoimmune disorder caused by gluten that results in an inflammatory response of the small intestine. We investigated whether celiac disease can be detected using endoscopic images through a deep learning approach. The results show that additional clinical parameters can improve the classification accuracy. In this work, we distinguished between healthy tissue and Marsh III, according to the Marsh score system. We first trained a baseline network to classify endoscopic images of the small bowel into these two classes and then augmented the approach with a multimodality component that took the antibody status into account. KW - Deep Learning KW - Endoscopy Y1 - 2022 U6 - https://doi.org/10.1007/978-3-658-36932-3_25 SP - 115 EP - 120 PB - Springer Vieweg CY - Wiesbaden ER - TY - CHAP A1 - Mendel, Robert A1 - Rauber, David A1 - Palm, Christoph T1 - Exploring the Effects of Contrastive Learning on Homogeneous Medical Image Data T2 - Bildverarbeitung für die Medizin 2023: Proceedings, German Workshop on Medical Image Computing, July 2–4, 2023, Braunschweig N2 - We investigate contrastive learning in a multi-task learning setting classifying and segmenting early Barrett’s cancer. How can contrastive learning be applied in a domain with few classes and low inter-class and inter-sample variance, potentially enabling image retrieval or image attribution? We introduce a data sampling strategy that mines per-lesion data for positive samples and keeps a queue of the recent projections as negative samples. We propose a masking strategy for the NT-Xent loss that keeps the negative set pure and removes samples from the same lesion. We show cohesion and uniqueness improvements of the proposed method in feature space. The introduction of the auxiliary objective does not affect the performance but adds the ability to indicate similarity between lesions. Therefore, the approach could enable downstream auto-documentation tasks on homogeneous medical image data. Y1 - 2023 U6 - https://doi.org/10.1007/978-3-658-41657-7 SP - 128 EP - 13 PB - Springer Vieweg CY - Wiesbaden ER - TY - GEN A1 - Rückert, Tobias A1 - Rieder, Maximilian A1 - Rauber, David A1 - Xiao, Michel A1 - Humolli, Eg A1 - Feussner, Hubertus A1 - Wilhelm, Dirk A1 - Palm, Christoph T1 - Augmenting instrument segmentation in video sequences of minimally invasive surgery by synthetic smoky frames T2 - International Journal of Computer Assisted Radiology and Surgery KW - Surgical instrument segmentation KW - smoke simulation KW - unpaired image-to-image translation KW - robot-assisted surgery Y1 - 2023 U6 - https://doi.org/10.1007/s11548-023-02878-2 VL - 18 IS - Suppl 1 SP - S54 EP - S56 PB - Springer Nature ER - TY - GEN A1 - Scheppach, Markus W. A1 - Mendel, Robert A1 - Probst, Andreas A1 - Rauber, David A1 - Rückert, Tobias A1 - Meinikheim, Michael A1 - Palm, Christoph A1 - Messmann, Helmut A1 - Ebigbo, Alanna T1 - Real-time detection and delineation of tissue during third-space endoscopy using artificial intelligence (AI) T2 - Endoscopy N2 - Aims  AI has proven great potential in assisting endoscopists in diagnostics; however, its role in therapeutic endoscopy remains unclear. 
Endoscopic submucosal dissection (ESD) is a technically demanding intervention with a slow learning curve and relevant risks like bleeding and perforation. Therefore, we aimed to develop an algorithm for the real-time detection and delineation of relevant structures during third-space endoscopy. Methods  5470 still images from 59 full-length videos (47 ESD, 12 POEM) were annotated. 179,681 additional unlabeled images were added to the training dataset. Consequently, a DeepLabv3+ neural network architecture was trained with the ECMT semi-supervised algorithm (under review elsewhere). Evaluation of vessel detection was performed on a dataset of 101 standardized video clips from 15 separate third-space endoscopy videos with 200 predefined blood vessels. Results  Internal validation yielded an overall mean Dice score of 85% (68% for blood vessels, 86% for submucosal layer, 88% for muscle layer). On the video test data, the overall vessel detection rate (VDR) was 94% (96% for ESD, 74% for POEM). The median overall vessel detection time (VDT) was 0.32 sec (0.3 sec for ESD, 0.62 sec for POEM). Conclusions  Evaluation of the developed algorithm on a video test dataset showed high VDR and quick VDT, especially for ESD. Further research will focus on a possible clinical benefit of the AI application for VDR and VDT during third-space endoscopy. KW - Esophageal Disease KW - Artificial Intelligence Y1 - 2023 U6 - https://doi.org/10.1055/s-0043-1765128 VL - 55 IS - S02 SP - S53 EP - S54 PB - Thieme ER - TY - JOUR A1 - Souza Jr., Luis Antonio de A1 - Passos, Leandro A. A1 - Santana, Marcos Cleison S. A1 - Mendel, Robert A1 - Rauber, David A1 - Ebigbo, Alanna A1 - Probst, Andreas A1 - Messmann, Helmut A1 - Papa, João Paulo A1 - Palm, Christoph T1 - Layer-selective deep representation to improve esophageal cancer classification JF - Medical & Biological Engineering & Computing N2 - Even though artificial intelligence and machine learning have demonstrated remarkable performance in medical image computing, their accountability and transparency level must be improved to transfer this success into clinical practice. The reliability of machine learning decisions must be explained and interpreted, especially for supporting the medical diagnosis. For this task, the black-box nature of deep learning techniques must somehow be made more transparent to clarify their promising results. Hence, we aim to investigate the impact of the ResNet-50 deep convolutional design on Barrett’s esophagus and adenocarcinoma classification. For such a task, and aiming to propose a two-step learning technique, the output of each convolutional layer that composes the ResNet-50 architecture was trained and classified separately to identify the layers with the greatest impact on the architecture. We showed that local information and high-dimensional features are essential to improve the classification for our task. In addition, we observed a significant improvement when the most discriminative layers were given more impact in the training and classification of ResNet-50 for Barrett’s esophagus and adenocarcinoma classification, demonstrating that both human knowledge and computational processing may influence the correct learning of such a problem. 
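Editorial note: the layer-selective analysis described in the abstract above can be illustrated with a small, assumed sketch in which intermediate ResNet-50 activations are extracted and pooled into one descriptor per residual stage, each of which could then be classified on its own (e.g., with a linear probe). This is not the authors' code; the probed layer names, the pooling choice, and the torchvision-based feature extractor are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet50
from torchvision.models.feature_extraction import create_feature_extractor

# Residual stages whose outputs are probed separately (illustrative choice).
RETURN_NODES = {"layer1": "stage1", "layer2": "stage2", "layer3": "stage3", "layer4": "stage4"}

backbone = resnet50(weights=None)  # in practice, pretrained or task-trained weights would be loaded
extractor = create_feature_extractor(backbone, return_nodes=RETURN_NODES).eval()

@torch.no_grad()
def per_layer_descriptors(images: torch.Tensor) -> dict:
    """Return one globally pooled feature vector per probed stage for a batch of images."""
    features = extractor(images)                                   # name -> (B, C, H, W)
    return {name: torch.flatten(F.adaptive_avg_pool2d(feat, 1), 1)  # name -> (B, C)
            for name, feat in features.items()}

# Usage sketch: each per-stage descriptor could be fed to its own classifier
# to compare the discriminative power of shallow versus deep layers.
dummy_batch = torch.randn(2, 3, 224, 224)                          # stand-in for endoscopic images
print({name: tuple(vec.shape) for name, vec in per_layer_descriptors(dummy_batch).items()})
```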
KW - Multistep training KW - Barrett’s esophagus detection KW - Convolutional neural networks KW - Deep learning Y1 - 2024 U6 - https://doi.org/10.1007/s11517-024-03142-8 VL - 62 SP - 3355 EP - 3372 PB - Springer Nature CY - Heidelberg ER - TY - CHAP A1 - Weber Nunes, Danilo A1 - Rauber, David A1 - Palm, Christoph ED - Palm, Christoph ED - Breininger, Katharina ED - Deserno, Thomas M. ED - Handels, Heinz ED - Maier, Andreas ED - Maier-Hein, Klaus H. ED - Tolxdorff, Thomas T1 - Self-supervised 3D Vision Transformer Pre-training for Robust Brain Tumor Classification T2 - Bildverarbeitung für die Medizin 2025: Proceedings, German Conference on Medical Image Computing, Regensburg March 09-11, 2025 N2 - Brain tumors pose significant challenges in neurology, making precise classification crucial for prognosis and treatment planning. This work investigates the effectiveness of a self-supervised learning approach, masked autoencoding (MAE), to pre-train a vision transformer (ViT) model for brain tumor classification. Our method uses non-domain-specific data, leveraging the ADNI and OASIS-3 MRI datasets, which primarily focus on degenerative diseases, for pre-training. The model is subsequently fine-tuned and evaluated on the BraTS glioma and meningioma datasets, representing a novel use of these datasets for tumor classification. The pre-trained MAE ViT model achieves an average F1 score of 0.91 in a 5-fold cross-validation setting, outperforming the nnU-Net encoder trained from scratch, particularly under limited data conditions. These findings highlight the potential of self-supervised MAE in enhancing brain tumor classification accuracy, even with restricted labeled data. Y1 - 2025 U6 - https://doi.org/10.1007/978-3-658-47422-5_69 SP - 298 EP - 303 PB - Springer Vieweg CY - Wiesbaden ER - TY - CHAP A1 - Gutbrod, Max A1 - Geisler, Benedikt A1 - Rauber, David A1 - Palm, Christoph ED - Maier, Andreas ED - Deserno, Thomas M. ED - Handels, Heinz ED - Maier-Hein, Klaus H. ED - Palm, Christoph ED - Tolxdorff, Thomas T1 - Data Augmentation for Images of Chronic Foot Wounds T2 - Bildverarbeitung für die Medizin 2024: Proceedings, German Workshop on Medical Image Computing, March 10-12, 2024, Erlangen N2 - Training data for neural networks is often scarce in the medical domain, which often results in models that struggle to generalize and consequently show poor performance on unseen datasets. Generally, adding augmentation methods to the training pipeline considerably enhances a model’s performance. Using the dataset of the Foot Ulcer Segmentation Challenge, we analyze two additional augmentation methods in the domain of chronic foot wounds: local warping of wound edges along with projection and blurring of shapes inside wounds. Our experiments show that improvements in the Dice similarity coefficient and Normalized Surface Distance metrics depend on a sensible selection of those augmentation methods. Y1 - 2024 U6 - https://doi.org/10.1007/978-3-658-44037-4_71 SP - 261 EP - 266 PB - Springer CY - Wiesbaden ER -
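Editorial note: as a closing illustration of the "projection and blurring of shapes inside wounds" idea from the last entry, the sketch below blends a softly blurred random ellipse into the wound region of an RGB image. It is a hypothetical, assumed implementation (the function name, parameters, and the OpenCV/NumPy formulation are not taken from the paper) and is only meant to make this kind of augmentation concrete.

```python
import cv2
import numpy as np

def project_blurred_shape(image, wound_mask, max_radius=40, alpha=0.4, sigma=15, rng=None):
    """Blend a blurred elliptical shape into a random spot inside the wound mask.

    Hypothetical sketch of a "blurred shape projection" augmentation; all parameter
    names and defaults are assumptions. Expects an HxWx3 image and an HxW binary mask.
    """
    rng = rng or np.random.default_rng()
    ys, xs = np.nonzero(wound_mask)
    if len(ys) == 0:
        return image                                    # nothing to do without wound pixels
    idx = rng.integers(len(ys))
    center = (int(xs[idx]), int(ys[idx]))               # OpenCV expects (x, y)
    axes = (int(rng.integers(5, max_radius)), int(rng.integers(5, max_radius)))
    angle = float(rng.uniform(0, 180))

    shape = np.zeros(image.shape[:2], dtype=np.float32)
    cv2.ellipse(shape, center, axes, angle, 0, 360, 1.0, thickness=-1)
    shape = cv2.GaussianBlur(shape, (0, 0), sigma)      # soften the shape edges
    shape = (shape * wound_mask.astype(np.float32))[..., None]  # keep it inside the wound

    tint = rng.uniform(0, 255, size=(1, 1, 3)).astype(np.float32)  # random color of the shape
    blended = image.astype(np.float32) * (1 - alpha * shape) + tint * (alpha * shape)
    return np.clip(blended, 0, 255).astype(image.dtype)
```

In a training pipeline, such a transform would typically be applied with some probability per sample, alongside standard augmentations such as flips and color jitter.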