TY - GEN A1 - Ebigbo, Alanna A1 - Rauber, David A1 - Ayoub, Mousa A1 - Birzle, Lisa A1 - Matsumura, Tomoaki A1 - Probst, Andreas A1 - Steinbrück, Ingo A1 - Nagl, Sandra A1 - Römmele, Christoph A1 - Meinikheim, Michael A1 - Scheppach, Markus W. A1 - Palm, Christoph A1 - Messmann, Helmut T1 - Early Esophageal Cancer and the Generalizability of Artificial Intelligence T2 - Endoscopy N2 - Aims Artificial Intelligence (AI) systems in gastrointestinal endoscopy are narrow because they are trained to solve only one specific task. Unlike narrow AI, general AI systems may be able to solve multiple and unrelated tasks. We aimed to understand whether an AI system trained to detect, characterize, and segment early Barrett’s neoplasia (Barrett’s AI) is only capable of detecting this pathology or can also detect and segment other diseases like early squamous cell cancer (SCC). Methods 120 white light (WL) and narrow-band imaging (NBI) endoscopic images from 60 patients (1 WL and 1 NBI image per patient) were extracted from the endoscopic database of the University Hospital Augsburg. Images were annotated by three expert endoscopists with extensive experience in the diagnosis and endoscopic resection of early esophageal neoplasias. An AI system based on the DeepLabV3+ architecture dedicated to early Barrett’s neoplasia was tested on these images. The AI system was neither trained with SCC images nor had it seen the test images prior to evaluation. The overlap between the three expert annotations (“expert-agreement”) was the ground truth for evaluating AI performance. Results Barrett’s AI detected early SCC with a mean intersection over reference (IoR) of 92% when at least 1 pixel of the AI prediction overlapped with the expert-agreement. When the threshold was increased to 5%, 10%, and 20% overlap with the expert-agreement, the IoR was 88%, 85% and 82%, respectively. The mean Intersection over Union (IoU) – a metric of segmentation quality between the AI prediction and the expert-agreement – was 0.45. The mean expert IoU as a measure of agreement between the three experts was 0.60. Conclusions In the context of this pilot study, the predictions of SCC by a Barrett’s-dedicated AI showed some overlap with the expert-agreement. Therefore, features learned from Barrett’s cancer-related training might also be helpful for SCC prediction. Our results allow for different possible explanations. On the one hand, some Barrett’s cancer features may generalize toward the related task of assessing early SCC. On the other hand, the Barrett’s AI may be less a Barrett’s cancer-specific detector than a general predictor of pathological tissue. However, we expect to enhance the detection quality significantly by extending the training to SCC-specific data. The insights of this study open the way towards a transfer learning approach for more efficient training of AI to solve tasks in other domains. Y1 - 2024 U6 - https://doi.org/10.1055/s-0044-1783775 VL - 56 IS - S 02 SP - S428 PB - Thieme CY - Stuttgart ER - TY - GEN A1 - Scheppach, Markus W. A1 - Mendel, Robert A1 - Rauber, David A1 - Probst, Andreas A1 - Nagl, Sandra A1 - Römmele, Christoph A1 - Meinikheim, Michael A1 - Palm, Christoph A1 - Messmann, Helmut A1 - Ebigbo, Alanna T1 - Artificial Intelligence (AI) improves endoscopists’ vessel detection during endoscopic submucosal dissection (ESD) T2 - Endoscopy N2 - Aims While AI has been successfully implemented in detecting and characterizing colonic polyps, its role in therapeutic endoscopy remains to be elucidated.
Third space endoscopy procedures in particular, such as ESD and peroral endoscopic myotomy (POEM), pose a technical challenge and carry the risk of operator-dependent complications such as intraprocedural bleeding and perforation. Therefore, we aimed to develop an AI algorithm for intraprocedural real-time vessel detection during ESD and POEM. Methods A training dataset consisting of 5470 annotated still images from 59 full-length videos (47 ESD, 12 POEM) and 179,681 unlabeled images was used to train a DeepLabV3+ neural network with the ECMT semi-supervised learning method. Vessel detection rate (VDR) and vessel detection time (VDT) of 19 endoscopists with and without AI support were evaluated using a testing dataset of 101 standardized video clips with 200 predefined blood vessels. Endoscopists were stratified into trainees and experts in third space endoscopy. Results The AI algorithm had a mean VDR of 93.5% and a median VDT of 0.32 seconds. AI support was associated with a statistically significant increase in VDR from 54.9% to 73.0% and from 59.0% to 74.1% for trainees and experts, respectively. VDT significantly decreased from 7.21 sec to 5.09 sec for trainees and from 6.10 sec to 5.38 sec for experts in the AI-support group. False positive (FP) readings occurred in 4.5% of frames. FP structures were detected for a significantly shorter time than true positives (0.71 sec vs. 5.99 sec). Conclusions AI improved the VDR and VDT of trainees and experts in third space endoscopy and may reduce performance variability during training. Further research is needed to evaluate the clinical impact of this new technology. Y1 - 2024 U6 - https://doi.org/10.1055/s-0044-1782891 VL - 56 IS - S 02 SP - S93 PB - Thieme CY - Stuttgart ER - TY - GEN A1 - Zellmer, Stephan A1 - Rauber, David A1 - Probst, Andreas A1 - Weber, Tobias A1 - Braun, Georg A1 - Römmele, Christoph A1 - Nagl, Sandra A1 - Schnoy, Elisabeth A1 - Messmann, Helmut A1 - Ebigbo, Alanna A1 - Palm, Christoph T1 - Artificial intelligence as a tool in the detection of the papillary ostium during ERCP T2 - Endoscopy N2 - Aims Endoscopic retrograde cholangiopancreatography (ERCP) is the gold standard in the diagnosis as well as treatment of diseases of the pancreatobiliary tract. However, it is technically complex and has a relatively high complication rate. In particular, cannulation of the papillary ostium remains challenging. The aim of this study was to examine whether a deep-learning algorithm can reliably detect the major duodenal papilla and, in particular, the papillary ostium, and could therefore be a valuable tool for inexperienced endoscopists, particularly in the training situation. Methods We analyzed a total of 654 retrospectively collected images from 85 patients. Both the major duodenal papilla and the ostium were then segmented. Afterwards, a neural network was trained using a deep-learning algorithm. A 5-fold cross-validation was performed. Subsequently, we ran the algorithm on 5 prospectively collected videos of ERCPs. Results 5-fold cross-validation on the 654 labeled images resulted in an F1 value of 0.8007, a sensitivity of 0.8409 and a specificity of 0.9757 for the class papilla, and an F1 value of 0.5724, a sensitivity of 0.5456 and a specificity of 0.9966 for the class ostium. Across both classes (papilla and ostium), the average F1 value was 0.6866, the sensitivity 0.6933 and the specificity 0.9861.
In 100% of cases, the AI-detected localization of the papillary ostium in the prospectively collected videos corresponded to the localization of the cannulation performed by the endoscopist. Conclusions In the present study, the neural network was able to identify the major duodenal papilla with high sensitivity and high specificity. In detecting the papillary ostium, the sensitivity was notably lower. However, when used on videos, the AI was able to identify the location of the subsequent cannulation with 100% accuracy. In the future, the neural network will be trained with more data. Thus, a suitable tool for ERCP could be established, especially in the training situation. Y1 - 2024 U6 - https://doi.org/10.1055/s-0044-1783138 VL - 56 IS - S 02 SP - S198 PB - Thieme CY - Stuttgart ER - TY - GEN A1 - Scheppach, Markus W. A1 - Nunes, Danilo Weber A1 - Arizi, X. A1 - Rauber, David A1 - Probst, Andreas A1 - Nagl, Sandra A1 - Römmele, Christoph A1 - Meinikheim, Michael A1 - Palm, Christoph A1 - Messmann, Helmut A1 - Ebigbo, Alanna T1 - Procedural phase recognition in endoscopic submucosal dissection (ESD) using artificial intelligence (AI) T2 - Endoscopy N2 - Aims Recent evidence suggests the possibility of intraprocedural phase recognition by AI algorithms in surgical operations as well as in endoscopic interventions such as peroral endoscopic myotomy and endoscopic submucosal dissection (ESD). The detailed measurement of intraprocedural phase distribution may deepen the understanding of the procedure. Furthermore, real-time quality assessment as well as automation of reporting may become possible. Therefore, we aimed to develop an AI algorithm for intraprocedural phase recognition during ESD. Methods A training dataset of 364,385 single images from 9 full-length ESD videos was compiled. Each frame was classified into one procedural phase. Phases included scope manipulation, marking, injection, application of electrical current and bleeding. Each frame could be allocated to only one category. This training dataset was used to train a Video Swin Transformer to recognize the phases. Temporal information was included via logarithmic frame sampling. Validation was performed using two separate ESD videos with 29,801 single frames. Results The validation yielded sensitivities of 97.81%, 97.83%, 95.53%, 85.01% and 87.55% for scope manipulation, marking, injection, application of electrical current and bleeding, respectively. Specificities of 77.78%, 90.91%, 95.91%, 93.65% and 84.76% were measured for the same parameters. Conclusions The developed algorithm was able to classify full-length ESD videos on a frame-by-frame basis into the predefined classes with high sensitivities and specificities. Future research will aim at the development of quality metrics based on single-operator phase distribution. Y1 - 2024 U6 - https://doi.org/10.1055/s-0044-1783804 VL - 56 IS - S 02 SP - S439 PB - Thieme CY - Stuttgart ER - TY - GEN A1 - Scheppach, Markus W.
A1 - Rauber, David A1 - Stallhofer, Johannes A1 - Muzalyova, Anna A1 - Otten, Vera A1 - Manzeneder, Carolin A1 - Schwamberger, Tanja A1 - Wanzl, Julia A1 - Schlottmann, Jakob A1 - Tadic, Vidan A1 - Probst, Andreas A1 - Schnoy, Elisabeth A1 - Römmele, Christoph A1 - Fleischmann, Carola A1 - Meinikheim, Michael A1 - Miller, Silvia A1 - Märkl, Bruno A1 - Palm, Christoph A1 - Messmann, Helmut A1 - Ebigbo, Alanna T1 - Performance comparison of a deep learning algorithm with endoscopists in the detection of duodenal villous atrophy (VA) T2 - Endoscopy N2 - Aims VA is an endoscopic finding of celiac disease (CD) that can easily be missed if the pretest probability is low. In this study, we aimed to develop an artificial intelligence (AI) algorithm for the detection of villous atrophy on endoscopic images. Methods 858 images from 182 patients with VA and 846 images from 323 patients with normal duodenal mucosa were used for training and internal validation of an AI algorithm (ResNet18). A separate dataset was used for external validation, as well as for determination of the detection performance of experts, trainees and trainees with AI support. According to the AI consultation distribution, images were stratified into “easy” and “difficult”. Results Internal validation showed 82%, 85% and 84% for sensitivity, specificity and accuracy, respectively. External validation showed 90%, 76% and 84%, respectively. The algorithm was significantly more sensitive and accurate than trainees, trainees with AI support and experts in endoscopy. AI support in trainees was associated with significantly improved performance. While all endoscopists showed significantly lower detection rates for “difficult” images, AI performance remained stable. Conclusions The algorithm outperformed trainees and experts in sensitivity and accuracy for VA detection. The significant improvement with AI support suggests a potential clinical benefit. Stable performance of the algorithm on “easy” and “difficult” test images may indicate an advantage in macroscopically challenging cases. Y1 - 2023 U6 - https://doi.org/10.1055/s-0043-1765421 VL - 55 IS - S02 PB - Thieme ER - TY - GEN A1 - Scheppach, Markus W. A1 - Weber Nunes, Danilo A1 - Arizi, X. A1 - Rauber, David A1 - Probst, Andreas A1 - Nagl, Sandra A1 - Römmele, Christoph A1 - Palm, Christoph A1 - Messmann, Helmut A1 - Ebigbo, Alanna T1 - Single frame workflow recognition during endoscopic submucosal dissection (ESD) using artificial intelligence (AI) T2 - Endoscopy N2 - Aims Precise surgical phase recognition and evaluation may improve our understanding of complex endoscopic procedures. Furthermore, quality control measurements and endoscopy training could benefit from objective descriptions of surgical phase distributions. Therefore, we aimed to develop an artificial intelligence algorithm for frame-by-frame operational phase recognition during endoscopic submucosal dissection (ESD). Methods Full-length ESD videos from 31 patients comprising 6,297,782 single images were collected retrospectively. Videos were annotated on a frame-by-frame basis for the operational macro-phases diagnostics, marking, injection, dissection and bleeding. Further subphases were the application of electrical current, visible injection of fluid into the submucosal space and scope manipulation, leading to 11 phases in total. 4,975,699 frames (21 patients) were used for training of a Video Swin Transformer using uniform frame sampling for temporal information.
Hyperparameter tuning was performed with 897,325 further frames (6 patients), while 424,758 frames (4 patients) were used for validation. Results The overall F1 scores on the validation dataset for the macro-phases and all 11 phases were 0.96 and 0.90, respectively. The recall values for diagnostics, marking, injection, dissection and bleeding were 1.00, 1.00, 0.95, 0.96 and 0.93, respectively. Conclusions The algorithm classified operational phases during ESD with high accuracy. A precise evaluation of phase distribution may allow for the development of objective quality metrics for quality control and training. Y1 - 2025 U6 - https://doi.org/10.1055/s-0045-1806324 VL - 57 IS - S 02 SP - S511 PB - Thieme CY - Stuttgart ER - TY - GEN A1 - Roser, David A1 - Meinikheim, Michael A1 - Mendel, Robert A1 - Palm, Christoph A1 - Probst, Andreas A1 - Muzalyova, Anna A1 - Scheppach, Markus W. A1 - Nagl, Sandra A1 - Schnoy, Elisabeth A1 - Römmele, Christoph A1 - Schulz, Dominik Andreas Helmut Otto A1 - Schlottmann, Jakob A1 - Prinz, Friederike A1 - Rauber, David A1 - Rückert, Tobias A1 - Matsumura, Tomoaki A1 - Fernandez-Esparrach, G. A1 - Parsa, Nasim A1 - Byrne, Michael F. A1 - Messmann, Helmut A1 - Ebigbo, Alanna T1 - Human-Computer Interaction: Impact of Artificial Intelligence on the diagnostic confidence of endoscopists assessing videos of Barrett’s esophagus T2 - Endoscopy N2 - Aims Human-computer interactions (HCI) may have a relevant impact on the performance of Artificial Intelligence (AI). Studies show that although endoscopists assessing Barrett’s esophagus (BE) with AI improve their performance significantly, they do not achieve the level of the stand-alone performance of AI. One aspect of HCI is the impact of AI on the degree of certainty and confidence displayed by the endoscopist. Indirectly, diagnostic confidence when using AI may be linked to trust in and acceptance of AI. In a BE video study, we aimed to understand the impact of AI on the diagnostic confidence of endoscopists and the possible correlation with diagnostic performance. Methods 22 endoscopists from 12 centers with varying levels of BE experience reviewed 96 standardized endoscopy videos. Endoscopists were categorized into experts and non-experts and randomly assigned to assess the videos with and without AI. Participants were randomized into two arms: Arm A assessed the videos first without AI and then with AI, while Arm B assessed the videos in the opposite order. Evaluators were tasked with identifying BE-related neoplasia and rating their confidence with and without AI on a scale from 0 to 9. Results The utilization of AI in Arm A (without AI first, with AI second) significantly elevated confidence levels for experts and non-experts (7.1 to 8.0 and 6.1 to 6.6, respectively). Only non-experts benefitted from AI with a significant increase in accuracy (68.6% to 75.5%). Interestingly, while the confidence levels of experts without AI were higher than those of non-experts with AI, there was no significant difference in accuracy between these two groups (71.3% vs. 75.5%). In Arm B (with AI first, without AI second), experts and non-experts experienced a significant reduction in confidence (7.6 to 7.1 and 6.4 to 6.2, respectively), while maintaining consistent accuracy levels (71.8% to 71.8% and 67.5% to 67.1%, respectively). Conclusions AI significantly enhanced confidence levels for both expert and non-expert endoscopists. Endoscopists felt significantly more uncertain in their assessments without AI.
Furthermore, experts with or without AI consistently displayed higher confidence levels than non-experts with AI, despite comparable outcomes. These findings underscore the possible role of AI in improving diagnostic confidence during endoscopic assessment. Y1 - 2024 U6 - https://doi.org/10.1055/s-0044-1782859 SN - 1438-8812 VL - 56 IS - S 02 SP - 79 PB - Georg Thieme Verlag ER - TY - GEN A1 - Zellmer, Stephan A1 - Rauber, David A1 - Probst, Andreas A1 - Weber, Tobias A1 - Braun, Georg A1 - Nagl, Sandra A1 - Römmele, Christoph A1 - Schnoy, Elisabeth A1 - Birzle, Lisa A1 - Aehling, Niklas A1 - Schulz, Dominik Andreas Helmut Otto A1 - Palm, Christoph A1 - Messmann, Helmut A1 - Ebigbo, Alanna T1 - Künstliche Intelligenz als Hilfsmittel zur Detektion der Papilla duodeni major und des papillären Ostiums während der ERCP T2 - Zeitschrift für Gastroenterologie N2 - Introduction Endoscopic retrograde cholangiopancreatography (ERCP) is the gold standard in the endoscopic treatment of diseases of the pancreatobiliary tract. However, it is technically demanding, difficult to learn and associated with a relatively high complication rate. This feasibility study therefore examined whether a deep-learning algorithm can reliably detect the papilla and the ostium and could thus serve as a suitable aid for endoscopists, particularly in the training situation. Materials and Methods A total of 1534 ERCP images from 134 patients were analyzed, with both the major duodenal papilla and the ostium being segmented. A neural network was then trained using a deep-learning algorithm. The algorithm was tested by five-fold cross-validation. Results On the 1534 labeled images, an F1 value of 0.7996, a sensitivity of 0.8488 and a specificity of 0.9822 were achieved for the class papilla. For the class ostium, an F1 value of 0.5198, a sensitivity of 0.5945 and a specificity of 0.9974 were obtained. Across both classes (papilla and ostium), the F1 value was 0.6593, the sensitivity 0.7216 and the specificity 0.9898. Conclusion In this feasibility study, the neural network identified the major duodenal papilla with high sensitivity and very high specificity. The ostium, however, was detected with markedly lower sensitivity. In the future, the training dataset will be extended with videos and clinical data to improve the performance of the network. In the long term, this could establish a suitable assistance system for ERCP, particularly in the training situation. Y1 - 2025 U6 - https://doi.org/10.1055/s-0045-1806882 VL - 63 IS - 5 SP - e295 PB - Thieme CY - Stuttgart ER - TY - GEN A1 - Bauer, Dagmar A1 - Stoffels, Gabriele A1 - Pauleit, Dirk A1 - Palm, Christoph A1 - Hamacher, Kurt A1 - Coenen, Heinz H. A1 - Langen, Karl T1 - Uptake of F-18-fluoroethyl-L-tyrosine and H-3-L-methionine in focal cortical ischemia T2 - The Journal of Nuclear Medicine N2 - Objectives: C-11-methionine (MET) is particularly useful in brain tumor diagnosis, but nonspecific uptake, e.g. in cerebral ischemia, has been reported (1). The F-18-labeled amino acid O-(2-[F-18]fluoroethyl)-L-tyrosine (FET) shows a clinical potential similar to MET in brain tumor diagnosis but is applicable on a wider clinical scale.
The aim of this study was to evaluate the uptake of FET and H-3-MET in focal cortical ischemia in rats by dual-tracer autoradiography. Methods: Focal cortical ischemia was induced in 12 Fisher CDF rats using the photothrombosis model (PT). One day (n=3), two days (n=5) and seven days (n=4) after induction of the lesion, FET and H-3-MET were injected intravenously. One hour after tracer injection, the animals were killed; the brains were removed immediately and frozen in 2-methylbutane at -50°C. Brains were cut into coronal sections (thickness: 20 µm) and exposed first to H-3-insensitive photoimager plates to measure the FET distribution. After decay of F-18, the distribution of H-3-MET was determined. The autoradiograms were evaluated using regions of interest (ROIs) placed on areas with increased tracer uptake in the PT and in the contralateral brain. Lesion-to-brain ratios (L/B) were calculated by dividing the mean uptake in the lesion by that in the brain. Based on previous studies in gliomas, an L/B ratio > 1.6 was considered pathological for FET. Results: Variable increased uptake of both tracers was observed in the PT and its demarcation zone at all stages after PT. The cut-off level of 1.6 for FET was exceeded in 9/12 animals. One day after PT, the L/B ratios were 2.0 ± 0.6 for FET vs. 2.1 ± 1.0 for MET (mean ± SD); two days after the lesion, 2.2 ± 0.7 for FET vs. 2.7 ± 1.0 for MET; and seven days after the lesion, 2.4 ± 0.4 for FET vs. 2.4 ± 0.1 for MET. In single cases, discrepancies in the uptake patterns of FET and MET were observed. Conclusions: FET, like MET, may exhibit significant uptake in infarcted areas or their immediate vicinity, which has to be considered in the differential diagnosis of unknown brain lesions. The discrepancies in the uptake patterns of FET and MET in some cases indicate either differences in the transport mechanisms of the two amino acids or a different affinity for certain cellular components. Y1 - 2006 UR - http://jnm.snmjournals.org/content/47/suppl_1/284P.3 VL - 47 IS - Suppl. 1 SP - 284P ER - TY - GEN A1 - Beyer, Thomas A1 - Weigert, Markus A1 - Palm, Christoph A1 - Quick, Harald H. A1 - Müller, Stefan P. A1 - Pietrzyk, Uwe A1 - Vogt, Florian A1 - Martinez, M.J. A1 - Bockisch, Andreas T1 - Towards MR-based attenuation correction for whole-body PET/MR imaging T2 - The Journal of Nuclear Medicine KW - Kernspintomografie KW - Positronen-Emissions-Tomografie KW - Bildgebendes Verfahren KW - Schwächung Y1 - 2006 UR - http://jnm.snmjournals.org/content/47/suppl_1/384P.1.abstract VL - 47 IS - Suppl. 1 SP - 384P ER -