TY - JOUR A1 - Ebigbo, Alanna A1 - Mendel, Robert A1 - Scheppach, Markus W. A1 - Probst, Andreas A1 - Shahidi, Neal A1 - Prinz, Friederike A1 - Fleischmann, Carola A1 - Römmele, Christoph A1 - Gölder, Stefan Karl A1 - Braun, Georg A1 - Rauber, David A1 - Rückert, Tobias A1 - Souza Jr., Luis Antonio de A1 - Papa, João Paulo A1 - Byrne, Michael F. A1 - Palm, Christoph A1 - Messmann, Helmut T1 - Vessel and tissue recognition during third-space endoscopy using a deep learning algorithm JF - Gut N2 - In this study, we aimed to develop an artificial intelligence clinical decision support solution to mitigate operator-dependent complications, such as bleeding and perforation, during complex endoscopic procedures such as endoscopic submucosal dissection and peroral endoscopic myotomy. A DeepLabv3-based model was trained to delineate vessels, tissue structures and instruments on endoscopic still images from such procedures. The mean cross-validated Intersection over Union and Dice Score were 63% and 76%, respectively. Applied to standardised video clips from third-space endoscopic procedures, the algorithm showed a mean vessel detection rate of 85% with a false-positive rate of 0.75/min. These performance statistics suggest a potential clinical benefit for procedure safety, procedure time, and training. KW - Artificial Intelligence KW - Endoscopy KW - Medical Image Computing Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:898-opus4-54293 VL - 71 IS - 12 SP - 2388 EP - 2390 PB - BMJ CY - London ER - TY - JOUR A1 - Ott, Tankred A1 - Palm, Christoph A1 - Vogt, Robert A1 - Oberprieler, Christoph T1 - GinJinn: An object-detection pipeline for automated feature extraction from herbarium specimens JF - Applications in Plant Sciences N2 - PREMISE: The generation of morphological data in evolutionary, taxonomic, and ecological studies of plants using herbarium material has traditionally been a labor-intensive task. Recent progress in machine learning using deep artificial neural networks (deep learning) for image classification and object detection has facilitated the establishment of a pipeline for the automatic recognition and extraction of relevant structures in images of herbarium specimens. METHODS AND RESULTS: We implemented an extendable pipeline based on state-of-the-art deep-learning object-detection methods to collect leaf images from herbarium specimens of two species of the genus Leucanthemum. Using 183 specimens as the training data set, our pipeline extracted one or more intact leaves in 95% of the 61 test images. CONCLUSIONS: We establish GinJinn as a deep-learning object-detection tool for the automatic recognition and extraction of individual leaves or other structures from herbarium specimens. Our pipeline offers greater flexibility and a lower entrance barrier than previous image-processing approaches based on hand-crafted features. KW - Deep Learning KW - herbarium specimens KW - object detection KW - visual recognition KW - machine vision KW - plants Y1 - 2020 U6 - https://doi.org/10.1002/aps3.11351 SN - 2168-0450 VL - 8 IS - 6 SP - e11351 PB - Wiley, Botanical Society of America ER - TY - JOUR A1 - Meinikheim, Michael A1 - Mendel, Robert A1 - Palm, Christoph A1 - Probst, Andreas A1 - Muzalyova, Anna A1 - Scheppach, Markus W.
A1 - Nagl, Sandra A1 - Schnoy, Elisabeth A1 - Römmele, Christoph A1 - Schulz, Dominik Andreas Helmut Otto A1 - Schlottmann, Jakob A1 - Prinz, Friederike A1 - Rauber, David A1 - Rückert, Tobias A1 - Matsumura, Tomoaki A1 - Fernández-Esparrach, Glòria A1 - Parsa, Nasim A1 - Byrne, Michael F. A1 - Messmann, Helmut A1 - Ebigbo, Alanna T1 - Influence of artificial intelligence on the diagnostic performance of endoscopists in the assessment of Barrett’s esophagus: a tandem randomized and video trial JF - Endoscopy N2 - Background This study evaluated the effect of an artificial intelligence (AI)-based clinical decision support system on the performance and diagnostic confidence of endoscopists in their assessment of Barrett’s esophagus (BE). Methods 96 standardized endoscopy videos were assessed by 22 endoscopists with varying degrees of BE experience from 12 centers. Assessment was randomized into two video sets: group A (review first without AI and second with AI) and group B (review first with AI and second without AI). Endoscopists were required to evaluate each video for the presence of Barrett’s esophagus-related neoplasia (BERN) and then decide on a spot for a targeted biopsy. After the second assessment, they were allowed to change their clinical decision and confidence level. Results AI had a stand-alone sensitivity, specificity, and accuracy of 92.2%, 68.9%, and 81.3%, respectively. Without AI, BE experts had an overall sensitivity, specificity, and accuracy of 83.3%, 58.1%, and 71.5%, respectively. BE nonexperts showed a significant improvement in sensitivity and specificity when videos were assessed a second time with AI (sensitivity 69.8% [95% CI 65.2%–74.2%] to 78.0% [95% CI 74.0%–82.0%]; specificity 67.3% [95% CI 62.5%–72.2%] to 72.7% [95% CI 68.2%–77.3%]). In addition, the diagnostic confidence of BE nonexperts improved significantly with AI. Conclusion BE nonexperts benefitted significantly from additional AI. BE experts and nonexperts remained significantly below the stand-alone performance of AI, suggesting that there may be other factors influencing endoscopists’ decisions to follow or discard AI advice. KW - Artificial Intelligence KW - Endoscopy KW - Medical Image Computing Y1 - 2024 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:898-opus4-72818 VL - 56 SP - 641 EP - 649 PB - Georg Thieme Verlag CY - Stuttgart ER - TY - GEN A1 - Ebigbo, Alanna A1 - Rauber, David A1 - Ayoub, Mousa A1 - Birzle, Lisa A1 - Matsumura, Tomoaki A1 - Probst, Andreas A1 - Steinbrück, Ingo A1 - Nagl, Sandra A1 - Römmele, Christoph A1 - Meinikheim, Michael A1 - Scheppach, Markus W. A1 - Palm, Christoph A1 - Messmann, Helmut T1 - Early Esophageal Cancer and the Generalizability of Artificial Intelligence T2 - Endoscopy N2 - Aims Artificial Intelligence (AI) systems in gastrointestinal endoscopy are narrow because they are trained to solve only one specific task. Unlike narrow AI, general AI systems may be able to solve multiple and unrelated tasks. We aimed to understand whether an AI system trained to detect, characterize, and segment early Barrett’s neoplasia (Barrett’s AI) is only capable of detecting this pathology or can also detect and segment other diseases like early squamous cell cancer (SCC). Methods 120 white-light (WL) and narrow-band imaging (NBI) endoscopic images from 60 patients (1 WL and 1 NBI image per patient) were extracted from the endoscopic database of the University Hospital Augsburg.
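Several records in this list train segmentation networks from the DeepLabV3 family on endoscopic images. Purely as an editorial illustration of that architecture family (not the authors' code; the weights, input size, and class count below are placeholder assumptions), a minimal inference sketch in Python using torchvision:

import torch
from torchvision.models.segmentation import deeplabv3_resnet50

# Placeholder DeepLabV3 with 2 output classes (background vs. target structure);
# the cited works use their own trained variants and data.
model = deeplabv3_resnet50(weights=None, num_classes=2).eval()

image = torch.randn(1, 3, 512, 512)  # stand-in for a preprocessed endoscopic frame
with torch.no_grad():
    logits = model(image)["out"]     # shape (1, 2, 512, 512)
mask = logits.argmax(dim=1)          # per-pixel class prediction
print(mask.shape)                    # torch.Size([1, 512, 512])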
Images were annotated by three expert endoscopists with extensive experience in the diagnosis and endoscopic resection of early esophageal neoplasias. An AI system based on the DeepLabV3+ architecture dedicated to early Barrett’s neoplasia was tested on these images. The AI system was neither trained with SCC images nor had it seen the test images prior to evaluation. The overlap between the three expert annotations (“expert-agreement”) was the ground truth for evaluating AI performance. Results Barrett’s AI detected early SCC with a mean intersection over reference (IoR) of 92% when at least 1 pixel of the AI prediction overlapped with the expert-agreement. When the threshold was increased to 5%, 10%, and 20% overlap with the expert-agreement, the IoR was 88%, 85% and 82%, respectively. The mean Intersection over Union (IoU) – a metric of the segmentation agreement between the AI prediction and the expert-agreement – was 0.45. The mean expert IoU as a measure of agreement between the three experts was 0.60. Conclusions In the context of this pilot study, the predictions of SCC by a Barrett’s-dedicated AI showed some overlap with the expert-agreement. Therefore, features learned from Barrett’s cancer-related training might also be helpful for SCC prediction. Our results allow for different possible explanations. On the one hand, some Barrett’s cancer features may generalize toward the related task of assessing early SCC. On the other hand, the Barrett’s AI may be less a predictor specific to Barrett’s cancer than a general predictor of pathological tissue. However, we expect to enhance the detection quality significantly by extending the training to SCC-specific data. The insights of this study open the way towards a transfer learning approach for more efficient training of AI to solve tasks in other domains. Y1 - 2024 U6 - https://doi.org/10.1055/s-0044-1783775 VL - 56 IS - S 02 SP - S428 PB - Thieme CY - Stuttgart ER - TY - GEN A1 - Scheppach, Markus W. A1 - Mendel, Robert A1 - Rauber, David A1 - Probst, Andreas A1 - Nagl, Sandra A1 - Römmele, Christoph A1 - Meinikheim, Michael A1 - Palm, Christoph A1 - Messmann, Helmut A1 - Ebigbo, Alanna T1 - Artificial Intelligence (AI) improves endoscopists’ vessel detection during endoscopic submucosal dissection (ESD) T2 - Endoscopy N2 - Aims While AI has been successfully implemented in detecting and characterizing colonic polyps, its role in therapeutic endoscopy remains to be elucidated. In particular, third space endoscopy procedures like ESD and peroral endoscopic myotomy (POEM) pose a technical challenge and the risk of operator-dependent complications like intraprocedural bleeding and perforation. Therefore, we aimed at developing an AI algorithm for intraprocedural real-time vessel detection during ESD and POEM. Methods A training dataset consisting of 5470 annotated still images from 59 full-length videos (47 ESD, 12 POEM) and 179,681 unlabeled images was used to train a DeepLabV3+ neural network with the ECMT semi-supervised learning method. Evaluation for vessel detection rate (VDR) and time (VDT) of 19 endoscopists with and without AI support was performed using a testing dataset of 101 standardized video clips with 200 predefined blood vessels. Endoscopists were stratified into trainees and experts in third space endoscopy. Results The AI algorithm had a mean VDR of 93.5% and a median VDT of 0.32 seconds. AI support was associated with a statistically significant increase in VDR from 54.9% to 73.0% and from 59.0% to 74.1% for trainees and experts, respectively.
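The generalizability record above reports detection as intersection over reference (IoR) at several overlap thresholds and segmentation quality as Intersection over Union (IoU). A minimal sketch of how such pixel-level overlap metrics are typically computed from binary masks (function names and thresholds are illustrative, not taken from the cited work):

import numpy as np

def iou(pred: np.ndarray, ref: np.ndarray) -> float:
    # Intersection over Union between two binary masks
    union = np.logical_or(pred, ref).sum()
    return float(np.logical_and(pred, ref).sum() / union) if union else 0.0

def ior(pred: np.ndarray, ref: np.ndarray) -> float:
    # Intersection over Reference: fraction of the reference mask covered by the prediction
    ref_area = ref.sum()
    return float(np.logical_and(pred, ref).sum() / ref_area) if ref_area else 0.0

def detected(pred: np.ndarray, ref: np.ndarray, threshold: float = 0.0) -> bool:
    # A lesion counts as detected if the prediction covers more than `threshold`
    # of the reference pixels (e.g. 0.0 for any overlap, or 0.05, 0.10, 0.20)
    return ior(pred, ref) > threshold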
VDT significantly decreased from 7.21 sec to 5.09 sec for trainees and from 6.10 sec to 5.38 sec for experts in the AI-support group. False positive (FP) readings occurred in 4.5% of frames. FP structures were detected for a significantly shorter duration than true positives (0.71 sec vs. 5.99 sec). Conclusions AI improved VDR and VDT of trainees and experts in third space endoscopy and may reduce performance variability during training. Further research is needed to evaluate the clinical impact of this new technology. Y1 - 2024 U6 - https://doi.org/10.1055/s-0044-1782891 VL - 56 IS - S 02 SP - S93 PB - Thieme CY - Stuttgart ER - TY - GEN A1 - Zellmer, Stephan A1 - Rauber, David A1 - Probst, Andreas A1 - Weber, Tobias A1 - Braun, Georg A1 - Römmele, Christoph A1 - Nagl, Sandra A1 - Schnoy, Elisabeth A1 - Messmann, Helmut A1 - Ebigbo, Alanna A1 - Palm, Christoph T1 - Artificial intelligence as a tool in the detection of the papillary ostium during ERCP T2 - Endoscopy N2 - Aims Endoscopic retrograde cholangiopancreatography (ERCP) is the gold standard in the diagnosis as well as the treatment of diseases of the pancreatobiliary tract. However, it is technically complex and has a relatively high complication rate. In particular, cannulation of the papillary ostium remains challenging. The aim of this study is to examine whether a deep-learning algorithm can be used to detect the major duodenal papilla, and in particular the papillary ostium, reliably and could therefore be a valuable tool for inexperienced endoscopists, particularly in training situations. Methods We analyzed a total of 654 retrospectively collected images of 85 patients. Both the major duodenal papilla and the ostium were then segmented. Afterwards, a neural network was trained using a deep-learning algorithm. A 5-fold cross-validation was performed. Subsequently, we ran the algorithm on 5 prospectively collected videos of ERCPs. Results 5-fold cross-validation on the 654 labeled images resulted in an F1 value of 0.8007, a sensitivity of 0.8409 and a specificity of 0.9757 for the class papilla, and an F1 value of 0.5724, a sensitivity of 0.5456 and a specificity of 0.9966 for the class ostium. Regardless of the class, the average F1 value (class papilla and class ostium) was 0.6866, the sensitivity 0.6933 and the specificity 0.9861. In 100% of cases, the AI-detected localization of the papillary ostium in the prospectively collected videos corresponded to the localization of the cannulation performed by the endoscopist. Conclusions In the present study, the neural network was able to identify the major duodenal papilla with a high sensitivity and high specificity. In detecting the papillary ostium, the sensitivity was notably lower. However, when used on videos, the AI was able to identify the location of the subsequent cannulation with 100% accuracy. In the future, the neural network will be trained with more data. Thus, a suitable tool for ERCP could be established, especially in training situations. Y1 - 2024 U6 - https://doi.org/10.1055/s-0044-1783138 VL - 56 IS - S 02 SP - S198 PB - Thieme CY - Stuttgart ER - TY - GEN A1 - Scheppach, Markus W. A1 - Nunes, Danilo Weber A1 - Arizi, X.
A1 - Rauber, David A1 - Probst, Andreas A1 - Nagl, Sandra A1 - Römmele, Christoph A1 - Meinikheim, Michael A1 - Palm, Christoph A1 - Messmann, Helmut A1 - Ebigbo, Alanna T1 - Procedural phase recognition in endoscopic submucosal dissection (ESD) using artificial intelligence (AI) T2 - Endoscopy N2 - Aims Recent evidence suggests the possibility of intraprocedural phase recognition by AI algorithms in surgical operations as well as endoscopic interventions such as peroral endoscopic myotomy and endoscopic submucosal dissection (ESD). The intricate measurement of intraprocedural phase distribution may deepen the understanding of the procedure. Furthermore, real-time quality assessment as well as automation of reporting may become possible. Therefore, we aimed to develop an AI algorithm for intraprocedural phase recognition during ESD. Methods A training dataset of 364,385 single images from 9 full-length ESD videos was compiled. Each frame was classified into one procedural phase. Phases included scope manipulation, marking, injection, application of electrical current and bleeding. Each frame could be allocated to only one category. This training dataset was used to train a Video Swin Transformer to recognize the phases. Temporal information was included via logarithmic frame sampling. Validation was performed using two separate ESD videos with 29,801 single frames. Results The validation yielded sensitivities of 97.81%, 97.83%, 95.53%, 85.01% and 87.55% for scope manipulation, marking, injection, electric application and bleeding, respectively. Specificities of 77.78%, 90.91%, 95.91%, 93.65% and 84.76% were measured for the same parameters. Conclusions The developed algorithm was able to classify full-length ESD videos on a frame-by-frame basis into the predefined classes with high sensitivities and specificities. Future research will aim at the development of quality metrics based on single-operator phase distribution. Y1 - 2024 U6 - https://doi.org/10.1055/s-0044-1783804 VL - 56 IS - S 02 SP - S439 PB - Thieme CY - Stuttgart ER - TY - GEN A1 - Scheppach, Markus W. A1 - Rauber, David A1 - Stallhofer, Johannes A1 - Muzalyova, Anna A1 - Otten, Vera A1 - Manzeneder, Carolin A1 - Schwamberger, Tanja A1 - Wanzl, Julia A1 - Schlottmann, Jakob A1 - Tadic, Vidan A1 - Probst, Andreas A1 - Schnoy, Elisabeth A1 - Römmele, Christoph A1 - Fleischmann, Carola A1 - Meinikheim, Michael A1 - Miller, Silvia A1 - Märkl, Bruno A1 - Palm, Christoph A1 - Messmann, Helmut A1 - Ebigbo, Alanna T1 - Performance comparison of a deep learning algorithm with endoscopists in the detection of duodenal villous atrophy (VA) T2 - Endoscopy N2 - Aims  VA is an endoscopic finding of celiac disease (CD), which can easily be missed if pretest probability is low. In this study, we aimed to develop an artificial intelligence (AI) algorithm for the detection of villous atrophy on endoscopic images. Methods 858 images from 182 patients with VA and 846 images from 323 patients with normal duodenal mucosa were used for training and internal validation of an AI algorithm (ResNet18). A separate dataset was used for external validation, as well as determination of the detection performance of experts, trainees and trainees with AI support. Based on the distribution of AI consultations, images were stratified into “easy” and “difficult”. Results Internal validation showed 82%, 85% and 84% for sensitivity, specificity and accuracy. External validation showed 90%, 76% and 84%.
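Sensitivity, specificity, accuracy and F1, as reported throughout these records, all derive from a binary confusion matrix. A minimal illustrative sketch (the counts below are made up for the example, not taken from any cited study):

def binary_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    # Standard diagnostic metrics from a binary confusion matrix
    sensitivity = tp / (tp + fn)               # true-positive rate (recall)
    specificity = tn / (tn + fp)               # true-negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "accuracy": accuracy, "f1": f1}

print(binary_metrics(tp=90, fp=24, tn=76, fn=10))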
The algorithm was significantly more sensitive and accurate than trainees, trainees with AI support and experts in endoscopy. AI support in trainees was associated with significantly improved performance. While all endoscopists showed significantly lower detection rates for “difficult” images, AI performance remained stable. Conclusions The algorithm outperformed trainees and experts in sensitivity and accuracy for VA detection. The significant improvement with AI support suggests a potential clinical benefit. Stable performance of the algorithm in “easy” and “difficult” test images may indicate an advantage in macroscopically challenging cases. Y1 - 2023 U6 - https://doi.org/10.1055/s-0043-1765421 VL - 55 IS - S02 PB - Thieme ER - TY - GEN A1 - Scheppach, Markus W. A1 - Weber Nunes, Danilo A1 - Arizi, X. A1 - Rauber, David A1 - Probst, Andreas A1 - Nagl, Sandra A1 - Römmele, Christoph A1 - Palm, Christoph A1 - Messmann, Helmut A1 - Ebigbo, Alanna T1 - Single frame workflow recognition during endoscopic submucosal dissection (ESD) using artificial intelligence (AI) T2 - Endoscopy N2 - Aims  Precise surgical phase recognition and evaluation may improve our understanding of complex endoscopic procedures. Furthermore, quality control measurements and endoscopy training could benefit from objective descriptions of surgical phase distributions. Therefore, we aimed to develop an artificial intelligence algorithm for frame-by-frame operational phase recognition during endoscopic submucosal dissection (ESD). Methods  Full-length ESD videos from 31 patients comprising 6,297,782 single images were collected retrospectively. Videos were annotated on a frame-by-frame basis for the operational macro-phases diagnostics, marking, injection, dissection and bleeding. Further subphases were the application of electrical current, visible injection of fluid into the submucosal space and scope manipulation, leading to 11 phases in total. 4,975,699 frames (21 patients) were used for training of a Video Swin Transformer using uniform frame sampling for temporal information. Hyperparameter tuning was performed with 897,325 further frames (6 patients), while 424,758 frames (4 patients) were used for validation. Results  The overall F1 scores on the test dataset for the macro-phases and all 11 phases were 0.96 and 0.90, respectively. The recall values for diagnostics, marking, injection, dissection and bleeding were 1.00, 1.00, 0.95, 0.96 and 0.93, respectively. Conclusions  The algorithm classified operational phases during ESD with high accuracy. A precise evaluation of phase distribution may allow for the development of objective quality metrics for quality control and training. Y1 - 2025 U6 - https://doi.org/10.1055/s-0045-1806324 VL - 57 IS - S 02 SP - S511 PB - Thieme CY - Stuttgart ER - TY - GEN A1 - Roser, David A1 - Meinikheim, Michael A1 - Mendel, Robert A1 - Palm, Christoph A1 - Probst, Andreas A1 - Muzalyova, Anna A1 - Scheppach, Markus W. A1 - Nagl, Sandra A1 - Schnoy, Elisabeth A1 - Römmele, Christoph A1 - Schulz, Dominik Andreas Helmut Otto A1 - Schlottmann, Jakob A1 - Prinz, Friederike A1 - Rauber, David A1 - Rückert, Tobias A1 - Matsumura, Tomoaki A1 - Fernandez-Esparrach, G. A1 - Parsa, Nasim A1 - Byrne, Michael F.
A1 - Messmann, Helmut A1 - Ebigbo, Alanna T1 - Human-Computer Interaction: Impact of Artificial Intelligence on the diagnostic confidence of endoscopists assessing videos of Barrett’s esophagus T2 - Endoscopy N2 - Aims Human-computer interactions (HCI) may have a relevant impact on the performance of Artificial Intelligence (AI). Studies show that although endoscopists assessing Barrett’s esophagus (BE) with AI improve their performance significantly, they do not achieve the level of the stand-alone performance of AI. One aspect of HCI is the impact of AI on the degree of certainty and confidence displayed by the endoscopist. Indirectly, diagnostic confidence when using AI may be linked to trust and acceptance of AI. In a BE video study, we aimed to understand the impact of AI on the diagnostic confidence of endoscopists and the possible correlation with diagnostic performance. Methods 22 endoscopists from 12 centers with varying levels of BE experience reviewed ninety-six standardized endoscopy videos. Endoscopists were categorized into experts and non-experts and randomly assigned to assess the videos with and without AI. Participants were randomized into two arms: Arm A assessed videos first without AI and then with AI, while Arm B assessed videos in the opposite order. Evaluators were tasked with identifying BE-related neoplasia and rating their confidence with and without AI on a scale from 0 to 9. Results The utilization of AI in Arm A (without AI first, with AI second) significantly elevated confidence levels for experts and non-experts (7.1 to 8.0 and 6.1 to 6.6, respectively). Only non-experts benefitted from AI with a significant increase in accuracy (68.6% to 75.5%). Interestingly, while the confidence levels of experts without AI were higher than those of non-experts with AI, there was no significant difference in accuracy between these two groups (71.3% vs. 75.5%). In Arm B (with AI first, without AI second), experts and non-experts experienced a significant reduction in confidence (7.6 to 7.1 and 6.4 to 6.2, respectively), while maintaining consistent accuracy levels (71.8% to 71.8% and 67.5% to 67.1%, respectively). Conclusions AI significantly enhanced confidence levels for both expert and non-expert endoscopists. Endoscopists felt significantly more uncertain in their assessments without AI. Furthermore, experts with or without AI consistently displayed higher confidence levels than non-experts with AI, irrespective of comparable outcomes. These findings underscore the possible role of AI in improving diagnostic confidence during endoscopic assessment. Y1 - 2024 U6 - https://doi.org/10.1055/s-0044-1782859 SN - 1438-8812 VL - 56 IS - S 02 SP - 79 PB - Georg Thieme Verlag ER - TY - GEN A1 - Zellmer, Stephan A1 - Rauber, David A1 - Probst, Andreas A1 - Weber, Tobias A1 - Braun, Georg A1 - Nagl, Sandra A1 - Römmele, Christoph A1 - Schnoy, Elisabeth A1 - Birzle, Lisa A1 - Aehling, Niklas A1 - Schulz, Dominik Andreas Helmut Otto A1 - Palm, Christoph A1 - Messmann, Helmut A1 - Ebigbo, Alanna T1 - Künstliche Intelligenz als Hilfsmittel zur Detektion der Papilla duodeni major und des papillären Ostiums während der ERCP T2 - Zeitschrift für Gastroenterologie N2 - Introduction  Endoscopic retrograde cholangiopancreatography (ERCP) is the gold standard in the endoscopic therapy of diseases of the pancreatobiliary tract. However, it is technically demanding, difficult to learn, and associated with a relatively high complication rate.
This feasibility study therefore examines whether a deep-learning algorithm can reliably detect the papilla and the ostium, and whether it could thus serve as a suitable aid for endoscopists, particularly in training situations. Material and Methods A total of 1534 ERCP images from 134 patients were analyzed, with both the major duodenal papilla and the ostium segmented. A neural network was then trained using a deep-learning algorithm. The algorithm was tested by five-fold cross-validation. Results  On the 1534 labeled images, an F1 value of 0.7996, a sensitivity of 0.8488 and a specificity of 0.9822 were achieved for the class papilla. For the class ostium, the F1 value was 0.5198, the sensitivity 0.5945 and the specificity 0.9974. Across classes (papilla and ostium), the F1 value was 0.6593, the sensitivity 0.7216 and the specificity 0.9898. Conclusion  In this feasibility study, the neural network identified the major duodenal papilla with high sensitivity and very high specificity. The ostium, in contrast, was detected with markedly lower sensitivity. In the future, the training dataset will be extended with videos and clinical data to improve the performance of the network. In the long term, this could establish a suitable assistance system for ERCP, particularly in training situations. Y1 - 2025 U6 - https://doi.org/10.1055/s-0045-1806882 VL - 63 IS - 5 SP - e295 PB - Thieme CY - Stuttgart ER - TY - JOUR A1 - Römmele, Christoph A1 - Mendel, Robert A1 - Barrett, Caroline A1 - Kiesl, Hans A1 - Rauber, David A1 - Rückert, Tobias A1 - Kraus, Lisa A1 - Heinkele, Jakob A1 - Dhillon, Christine A1 - Grosser, Bianca A1 - Prinz, Friederike A1 - Wanzl, Julia A1 - Fleischmann, Carola A1 - Nagl, Sandra A1 - Schnoy, Elisabeth A1 - Schlottmann, Jakob A1 - Dellon, Evan S. A1 - Messmann, Helmut A1 - Palm, Christoph A1 - Ebigbo, Alanna T1 - An artificial intelligence algorithm is highly accurate for detecting endoscopic features of eosinophilic esophagitis JF - Scientific Reports N2 - The endoscopic features associated with eosinophilic esophagitis (EoE) may be missed during routine endoscopy. We aimed to develop and evaluate an Artificial Intelligence (AI) algorithm for detecting and quantifying the endoscopic features of EoE in white light images, supplemented by the EoE Endoscopic Reference Score (EREFS). An AI algorithm (AI-EoE) was constructed and trained to differentiate between EoE and normal esophagus using endoscopic white light images extracted from the database of the University Hospital Augsburg. In addition to binary classification, a second algorithm was trained with specific auxiliary branches for each EREFS feature (AI-EoE-EREFS). The AI algorithms were evaluated on an external data set from the University of North Carolina, Chapel Hill (UNC), and compared with the performance of human endoscopists with varying levels of experience. The overall sensitivity, specificity, and accuracy of AI-EoE were 0.93 for all measures, while the AUC was 0.986. With additional auxiliary branches for the EREFS categories, the AI algorithm (AI-EoE-EREFS) performance improved to 0.96, 0.94, 0.95, and 0.992 for sensitivity, specificity, accuracy, and AUC, respectively.
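The EREFS-augmented classifier described above attaches auxiliary classification branches to a shared backbone so that the reference-score features act as additional optimization criteria. A minimal PyTorch sketch of that general pattern (backbone choice, head count and layer sizes are illustrative assumptions, not the authors' implementation):

import torch
import torch.nn as nn
from torchvision.models import resnet18

class AuxHeadClassifier(nn.Module):
    # Shared CNN backbone with a main disease head and auxiliary feature heads
    def __init__(self, n_aux_heads: int = 4):
        super().__init__()
        backbone = resnet18(weights=None)
        self.features = nn.Sequential(*list(backbone.children())[:-1])  # drop final fc
        self.main_head = nn.Linear(512, 2)  # e.g. EoE vs. normal esophagus
        # one binary head per auxiliary endoscopic feature (e.g. edema, rings, exudates, furrows)
        self.aux_heads = nn.ModuleList([nn.Linear(512, 2) for _ in range(n_aux_heads)])

    def forward(self, x):
        z = self.features(x).flatten(1)
        return self.main_head(z), [head(z) for head in self.aux_heads]

model = AuxHeadClassifier()
logits_main, logits_aux = model(torch.randn(1, 3, 224, 224))
# Training would sum the cross-entropy losses of the main and auxiliary heads.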
AI-EoE and AI-EoE-EREFS performed significantly better than endoscopy beginners and senior fellows on the same set of images. An AI algorithm can be trained to detect and quantify endoscopic features of EoE with excellent performance scores. The addition of the EREFS criteria improved the performance of the AI algorithm, which performed significantly better than endoscopists with a lower or medium experience level. KW - Artificial Intelligence KW - Smart Endoscopy KW - eosinophilic esophagitis Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:898-opus4-46928 VL - 12 PB - Nature Portfolio CY - London ER - TY - JOUR A1 - Roser, David A1 - Meinikheim, Michael A1 - Muzalyova, Anna A1 - Mendel, Robert A1 - Palm, Christoph A1 - Probst, Andreas A1 - Nagl, Sandra A1 - Scheppach, Markus W. A1 - Römmele, Christoph A1 - Schnoy, Elisabeth A1 - Parsa, Nasim A1 - Byrne, Michael F. A1 - Messmann, Helmut A1 - Ebigbo, Alanna T1 - Artificial intelligence-assisted endoscopy and examiner confidence: a study on human–artificial intelligence interaction in Barrett's Esophagus (With Video) JF - DEN Open N2 - Objective Despite high stand-alone performance, studies demonstrate that artificial intelligence (AI)-supported endoscopic diagnostics often fall short in clinical applications due to human-AI interaction factors. This video-based trial on Barrett's esophagus aimed to investigate how examiner behavior, their levels of confidence, and system usability influence the diagnostic outcomes of AI-assisted endoscopy. Methods The present analysis employed data from a multicenter randomized controlled tandem video trial involving 22 endoscopists with varying degrees of expertise. Participants were tasked with evaluating a set of 96 endoscopic videos of Barrett's esophagus in two distinct rounds, with and without AI assistance. Diagnostic confidence levels were recorded, and decision changes were categorized according to the AI prediction. Additional surveys assessed user experience and system usability ratings. Results AI assistance significantly increased examiner confidence levels (p < 0.001) and accuracy. Withdrawing AI assistance decreased confidence (p < 0.001), but not accuracy. Experts consistently reported higher confidence than non-experts (p < 0.001), regardless of performance. Despite improved confidence, correct AI guidance was disregarded in 16% of all cases, and 9% of initially correct diagnoses were changed to incorrect ones. Overreliance on AI, algorithm aversion, and uncertainty in AI predictions were identified as key factors influencing outcomes. The System Usability Scale questionnaire scores indicated good to excellent usability, with non-experts scoring 73.5 and experts 85.6. Conclusions Our findings highlight the pivotal role of examiner behavior in AI-assisted endoscopy. To fully realize the benefits of AI, implementing explainable AI, improving user interfaces, and providing targeted training are essential. Addressing these factors could enhance diagnostic accuracy and confidence in clinical practice. Y1 - 2025 U6 - https://doi.org/10.1002/deo2.70150 VL - 6 IS - 1 PB - Wiley ER - TY - GEN A1 - Römmele, Christoph A1 - Mendel, Robert A1 - Rauber, David A1 - Rückert, Tobias A1 - Byrne, Michael F.
A1 - Palm, Christoph A1 - Messmann, Helmut A1 - Ebigbo, Alanna T1 - Endoscopic Diagnosis of Eosinophilic Esophagitis Using a Deep Learning Algorithm T2 - Endoscopy N2 - Aims Eosinophilic esophagitis (EoE) is easily missed during endoscopy, either because physicians are not familiar with its endoscopic features or the morphologic changes are too subtle. In this preliminary paper, we present the first attempt to detect EoE in endoscopic white light (WL) images using a deep learning network (EoE-AI). Methods 401 WL images of eosinophilic esophagitis and 871 WL images of normal esophageal mucosa were evaluated. All images were assessed for the Endoscopic Reference Score (EREFS) (edema, rings, exudates, furrows, strictures). Images with strictures were excluded. EoE was defined as the presence of at least 15 eosinophils per high power field on biopsy. A convolutional neural network based on the ResNet architecture with several five-fold cross-validation runs was used. Adding auxiliary EREFS-classification branches to the neural network allowed the inclusion of the scores as optimization criteria during training. EoE-AI was evaluated for sensitivity, specificity, and F1-score. In addition, two human endoscopists evaluated the images. Results EoE-AI showed a mean sensitivity, specificity, and F1 of 0.759, 0.976, and 0.834, respectively, averaged over the five distinct cross-validation runs. With the EREFS-augmented architecture, a mean sensitivity, specificity, and F1-score of 0.848, 0.945, and 0.861 could be demonstrated, respectively. In comparison, the two human endoscopists had an average sensitivity, specificity, and F1-score of 0.718, 0.958, and 0.793. Conclusions To the best of our knowledge, this is the first application of deep learning to endoscopic images of EoE which were also assessed after augmentation with the EREFS-score. The next step is the evaluation of EoE-AI using an external dataset. We then plan to assess the EoE-AI tool on endoscopic videos, and also in real-time. This preliminary work is encouraging regarding the ability of AI to enhance physician detection of EoE, and potentially to do a true “optical biopsy”, but more work is needed. KW - Eosinophilic Esophagitis KW - Endoscopy KW - Deep Learning Y1 - 2021 U6 - https://doi.org/10.1055/s-0041-1724274 VL - 53 IS - S 01 PB - Georg Thieme Verlag CY - Stuttgart ER - TY - JOUR A1 - Scheppach, Markus W. A1 - Rauber, David A1 - Stallhofer, Johannes A1 - Muzalyova, Anna A1 - Otten, Vera A1 - Manzeneder, Carolin A1 - Schwamberger, Tanja A1 - Wanzl, Julia A1 - Schlottmann, Jakob A1 - Tadic, Vidan A1 - Probst, Andreas A1 - Schnoy, Elisabeth A1 - Römmele, Christoph A1 - Fleischmann, Carola A1 - Meinikheim, Michael A1 - Miller, Silvia A1 - Märkl, Bruno A1 - Stallmach, Andreas A1 - Palm, Christoph A1 - Messmann, Helmut A1 - Ebigbo, Alanna T1 - Detection of duodenal villous atrophy on endoscopic images using a deep learning algorithm JF - Gastrointestinal Endoscopy N2 - Background and aims Celiac disease with its endoscopic manifestation of villous atrophy is underdiagnosed worldwide. The application of artificial intelligence (AI) for the macroscopic detection of villous atrophy at routine esophagogastroduodenoscopy may improve diagnostic performance. Methods A dataset of 858 endoscopic images of 182 patients with villous atrophy and 846 images from 323 patients with normal duodenal mucosa was collected and used to train a ResNet18 deep learning model to detect villous atrophy.
An external data set was used to test the algorithm, in addition to six fellows and four board-certified gastroenterologists. Fellows could consult the AI algorithm’s result during the test. Based on the fellows’ consultation distribution, test images were stratified into “easy” and “difficult”, and performance was measured separately for each stratum. Results External validation of the AI algorithm yielded values of 90%, 76%, and 84% for sensitivity, specificity, and accuracy, respectively. Fellows scored values of 63%, 72%, and 67%, while the corresponding values in experts were 72%, 69%, and 71%, respectively. AI consultation significantly improved all trainee performance statistics. While fellows and experts showed significantly lower performance for “difficult” images, the performance of the AI algorithm was stable. Conclusion In this study, an AI algorithm outperformed endoscopy fellows and experts in the detection of villous atrophy on endoscopic still images. AI decision support significantly improved the performance of non-expert endoscopists. The stable performance on “difficult” images suggests a further positive add-on effect in challenging cases. KW - celiac disease KW - villous atrophy KW - endoscopy detection KW - artificial intelligence Y1 - 2023 U6 - https://doi.org/10.1016/j.gie.2023.01.006 PB - Elsevier ER - TY - GEN A1 - Meinikheim, Michael A1 - Mendel, Robert A1 - Probst, Andreas A1 - Scheppach, Markus W. A1 - Schnoy, Elisabeth A1 - Nagl, Sandra A1 - Römmele, Christoph A1 - Prinz, Friederike A1 - Schlottmann, Jakob A1 - Golger, Daniela A1 - Palm, Christoph A1 - Messmann, Helmut A1 - Ebigbo, Alanna T1 - AI-assisted detection and characterization of early Barrett's neoplasia: Results of an interim analysis T2 - Endoscopy N2 - Aims  Evaluation of the add-on effect that an artificial intelligence (AI)-based clinical decision support system has on the performance of endoscopists with different degrees of expertise in the field of Barrett's esophagus (BE) and Barrett's esophagus-related neoplasia (BERN). Methods  The support system is based on a multi-task deep learning model trained to solve a segmentation and several classification tasks. The training approach represents an extension of the ECMT semi-supervised learning algorithm. The complete system evaluates a decision tree combining estimated motion, classification, segmentation, and temporal constraints to decide when and how the prediction is highlighted to the observer. In our current study, ninety-six video cases of patients with BE and BERN were prospectively collected and assessed by Barrett's specialists and non-specialists. All video cases were evaluated twice – with and without AI assistance. The order of appearance, either with or without AI support, was assigned randomly. Participants were asked to detect and characterize regions of dysplasia or early neoplasia within the video sequences. Results  Standalone sensitivity, specificity, and accuracy of the AI system were 92.16%, 68.89%, and 81.25%, respectively. Mean sensitivity, specificity, and accuracy of expert endoscopists without AI support were 83.33%, 58.20%, and 71.48%, respectively. Gastroenterologists without Barrett's expertise but with AI support had a comparable performance with a mean sensitivity, specificity, and accuracy of 76.63%, 65.35%, and 71.36%, respectively. Conclusions  Non-Barrett's experts with AI support performed similarly to experts in a video-based study.
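Several records above cross-validate on datasets with many images per patient. The abstracts do not state how folds were built, but one common safeguard is to split at patient level so that no patient's images appear in both training and test folds; a minimal illustrative sketch with made-up identifiers:

from sklearn.model_selection import GroupKFold

images = [f"img_{i}" for i in range(12)]    # hypothetical image IDs
patient_ids = [i // 3 for i in range(12)]   # 3 images per patient

gkf = GroupKFold(n_splits=4)
for fold, (train_idx, test_idx) in enumerate(gkf.split(images, groups=patient_ids)):
    train_patients = {patient_ids[i] for i in train_idx}
    test_patients = {patient_ids[i] for i in test_idx}
    assert train_patients.isdisjoint(test_patients)  # no patient leaks across folds
    print(f"fold {fold}: test patients {sorted(test_patients)}")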
Y1 - 2023 U6 - https://doi.org/10.1055/s-0043-1765437 VL - 55 IS - S02 PB - Thieme ER - TY - JOUR A1 - Scheppach, Markus W. A1 - Mendel, Robert A1 - Muzalyova, Anna A1 - Rauber, David A1 - Probst, Andreas A1 - Nagl, Sandra A1 - Römmele, Christoph A1 - Yip, Hon Chi A1 - Lau, Louis Ho Shing A1 - Gölder, Stefan Karl A1 - Schmidt, Arthur A1 - Kouladouros, Konstantinos A1 - Abdelhafez, Mohamed A1 - Walter, Benjamin M. A1 - Meinikheim, Michael A1 - Chiu, Philip Wai Yan A1 - Palm, Christoph A1 - Messmann, Helmut A1 - Ebigbo, Alanna T1 - Artificial intelligence improves submucosal vessel detection during third space endoscopy JF - Endoscopy N2 - Background and study aims: While artificial intelligence (AI) shows high potential in decision support for diagnostic gastrointestinal endoscopy, its role in therapeutic endoscopy remains unclear. Third space endoscopic procedures pose the risk of intraprocedural bleeding. Therefore, we aimed to develop an AI algorithm for intraprocedural blood vessel detection. Patients and Methods: Using a test dataset with 101 standardized video clips containing 200 predefined submucosal blood vessels, 19 endoscopists were evaluated for the vessel detection rate (VDR) and time (VDT) with and without support of an AI algorithm. Test subjects were grouped according to experience in endoscopic submucosal dissection (ESD). Results: With AI support, endoscopists’ VDR increased from 56.4% [CI 54.1–58.6] to 72.4% [CI 70.3–74.4]. Endoscopists’ VDT dropped from 6.7 sec [CI 6.2–7.1] to 5.2 sec [CI 4.8–5.7]. False positive (FP) readings appeared in 4.5% of frames and were marked for a significantly shorter duration than true positives (0.7 sec [CI 0.55–0.87] vs. 6.0 sec [CI 5.28–6.70]). Conclusions: AI improved the vessel detection rate and time of endoscopists during third space endoscopy. While these data need to be corroborated by clinical trials, AI may prove to be an invaluable tool for the improvement of endoscopic interventions. KW - Artificial Intelligence KW - Third Space Endoscopy Y1 - 2025 U6 - https://doi.org/10.1055/a-2534-1164 PB - Thieme CY - Stuttgart ER - TY - GEN A1 - Scheppach, Markus W. A1 - Weber Nunes, Danilo A1 - Rauber, David A1 - Arizi, X. A1 - Probst, Andreas A1 - Nagl, Sandra A1 - Römmele, Christoph A1 - Ebigbo, Alanna A1 - Palm, Christoph A1 - Messmann, Helmut T1 - Künstliche Intelligenz-basierte Erkennung von interventionellen Phasen bei der endoskopischen Submukosadissektion T2 - Zeitschrift für Gastroenterologie N2 - Introduction: Endoscopic submucosal dissection (ESD) is a complex endoscopic procedure that requires technical expertise. Objective methods for analyzing interventional workflows in ESD could be useful for quality assurance and training, as well as for automated reporting. Aims: In this study, an AI algorithm for the recognition and classification of the interventional phases of ESD was developed, to create the technical basis for standardized performance assessment and automated reporting. Methods: Full-length ESD video recordings from 49 patients were compiled retrospectively. The dataset comprised 6,390,151 single frames, all of which were annotated for the following interventional phases: diagnostics, marking, injection, dissection and hemostasis. 3,973,712 frames (28 patients) were used to train a Video Swin Transformer. Temporal information was incorporated by standardized extraction of frames at fixed time offsets from the analyzed frame.
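The record above feeds a video transformer with context frames extracted at fixed time offsets, while an earlier record in this list uses logarithmic frame sampling. A minimal sketch of both schemes (the offsets, counts, and function names are illustrative assumptions, not the authors' parameters):

def uniform_offsets(n: int, step: int) -> list:
    # n preceding offsets at a fixed step, e.g. n=4, step=30 -> [-120, -90, -60, -30]
    return [-step * i for i in range(n, 0, -1)]

def logarithmic_offsets(n: int) -> list:
    # n preceding offsets growing exponentially, e.g. n=4 -> [-16, -8, -4, -2]
    return [-(2 ** i) for i in range(n, 0, -1)]

def sample_clip(frame_idx: int, offsets: list) -> list:
    # Indices of the context frames plus the analyzed frame itself, clamped at 0
    return [max(0, frame_idx + o) for o in offsets] + [frame_idx]

print(sample_clip(1000, uniform_offsets(4, 30)))  # [880, 910, 940, 970, 1000]
print(sample_clip(1000, logarithmic_offsets(4)))  # [984, 992, 996, 998, 1000]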
2,416,439 separate frames (21 patients) were used for internal validation. Results: In the internal evaluation, the system achieved an overall F1 value of 0.88. F1 values of 0.99, 0.89, 0.89, 0.91 and 0.52 were measured for diagnostics, marking, injection, dissection and bleeding management, respectively. The sensitivities for the same parameters were 1.00, 0.80, 0.94, 0.89 and 0.67; the specificities were 1.00, 1.00, 0.98, 0.88 and 0.93. Positive predictive values were 0.98, 1.00, 0.85, 0.94 and 0.43. Conclusion: In this preliminary study, an AI algorithm showed high performance for single-frame recognition of procedural phases during ESD. The comparatively low performance for the bleeding phase was attributed to the rarity of bleeding episodes in the training dataset, which at this stage comprised only full-length videos. Future development of the algorithm will focus on reducing class imbalance through selective annotation protocols. Y1 - 2025 U6 - https://doi.org/10.1055/s-0045-1811093 VL - 63 IS - 08 SP - e612 EP - e613 PB - Thieme CY - Stuttgart ER - TY - GEN A1 - Scheppach, Markus W. A1 - Rauber, David A1 - Zingler, C. A1 - Weber Nunes, Danilo A1 - Probst, Andreas A1 - Römmele, Christoph A1 - Nagl, Sandra A1 - Ebigbo, Alanna A1 - Palm, Christoph A1 - Messmann, Helmut T1 - Instrumentenerkennung während der endoskopischen Submukosadissektion mittels künstlicher Intelligenz T2 - Zeitschrift für Gastroenterologie N2 - Introduction: Endoscopic submucosal dissection (ESD) is a complex technique for the resection of early gastrointestinal neoplasias, in which specific endoscopic instruments are used for the different steps of the intervention. Precise and automatic recognition and delineation of the instruments used (injection needles, electrosurgical knives in different configurations, hemostatic forceps) could provide valuable information about the progress and procedural characteristics of ESD and enable automated, standardized reporting. Aims: The aim of this study was the development of an AI algorithm for the recognition and delineation of endoscopic instruments during ESD. Methods: 17 ESD videos (9 rectal, 5 esophageal, 3 gastric) were compiled retrospectively. On 8530 single frames from these videos, two study associates annotated the following classes: hook knife – tip, hook knife – catheter, needle knife – tip and – catheter, injection needle – tip and – catheter, and hemostatic forceps – tip and – catheter. The annotated dataset was used to train a DeepLabV3+ deep-learning algorithm with a ConvNeXt backbone for the recognition and delineation of these classes. Evaluation was performed by 5-fold internal cross-validation. Results: Pixel-level validation yielded an overall F1 score of 0.80, a sensitivity of 0.81 and a specificity of 1.00. F1 scores of 1.00, 0.97, 0.80, 0.98, 0.85, 0.97, 0.80, 0.51 and 0.85 were measured for the classes hook knife – catheter and – tip, needle knife – catheter and – tip, injection needle – catheter and – tip, and hemostatic forceps – catheter and – tip, respectively. Conclusion: In this study, the most important endoscopic instruments used during ESD were recognized with high accuracy.
The lower performance for the hemostatic forceps – catheter class can be attributed to the underrepresentation of these classes in the training data. Future studies will focus on extending the instrument classes and on balancing the training data. Y1 - 2025 U6 - https://doi.org/10.1055/s-0045-1811092 VL - 63 IS - 8 PB - Thieme ER - TY - GEN A1 - Meinikheim, Michael A1 - Mendel, Robert A1 - Probst, Andreas A1 - Scheppach, Markus W. A1 - Nagl, Sandra A1 - Schnoy, Elisabeth A1 - Römmele, Christoph A1 - Prinz, Friederike A1 - Schlottmann, Jakob A1 - Messmann, Helmut A1 - Palm, Christoph A1 - Ebigbo, Alanna T1 - Einfluss von Künstlicher Intelligenz auf die Performance von niedergelassenen Gastroenterolog:innen bei der Beurteilung von Barrett-Ösophagus T2 - Zeitschrift für Gastroenterologie N2 - Introduction  Differentiating between non-dysplastic Barrett's esophagus (NDBE) and Barrett's esophagus-related neoplasia (BERN) during endoscopic inspection requires substantial expertise. Early diagnosis is important for the prognosis of Barrett's carcinoma. In Germany, patients with Barrett's esophagus (BE) are usually surveilled in the office-based sector. Aims  To investigate the influence of an artificial intelligence (AI)-based clinical decision support system (CDSS) on the performance of office-based gastroenterologists (OGs) in the evaluation of Barrett's esophagus (BE). Methods  96 unaltered high-resolution videos of cases of patients with histologically confirmed NDBE and BERN were collected prospectively. All included cases contained at least two of the following imaging modalities: HD white-light endoscopy, narrow-band imaging, or texture and color enhancement imaging. Six OGs from six different practices were included as participants. The video cases were assigned to either group A or group B by permuted block randomization. In group A, participants evaluated each case first without AI and then with AI as CDSS; in group B, evaluation took place in the reverse order. The resulting subgroups were then played back in random order during the test. Results  In this test, an AI system developed by our group (Barrett-Ampel) achieved a sensitivity of 92.2%, a specificity of 68.9% and an accuracy of 81.3%. With the help of AI, the sensitivity of the OGs improved significantly from 64.1% to 71.2% (p<0.001) and the accuracy from 66.3% to 70.8% (p=0.006). A significant improvement in these parameters was also seen when participants evaluated the cases first without AI (group A). However, when a case was evaluated first with the help of AI (group B), performance remained nearly constant. Conclusion  A well-performing AI system for the evaluation of BE was developed. OGs improve in their evaluation of BE when using AI. KW - Barrett's esophagus KW - Artificial Intelligence Y1 - 2023 UR - https://www.thieme-connect.de/products/ejournals/abstract/10.1055/s-0043-1771711 U6 - https://doi.org/10.1055/s-0043-1771711 VL - 61 IS - 8 PB - Thieme CY - Stuttgart ER - TY - GEN A1 - Scheppach, Markus W.
A1 - Mendel, Robert A1 - Muzalyova, Anna A1 - Rauber, David A1 - Probst, Andreas A1 - Nagl, Sandra A1 - Römmele, Christoph A1 - Yip, Hon Chi A1 - Lau, Louis Ho Shing A1 - Gölder, Stefan Karl A1 - Schmidt, Arthur A1 - Kouladouros, Konstantinos A1 - Abdelhafez, Mohamed A1 - Walter, B. A1 - Meinikheim, Michael A1 - Chiu, Philip Wai Yan A1 - Palm, Christoph A1 - Messmann, Helmut A1 - Ebigbo, Alanna T1 - Künstliche Intelligenz erhöht die Gefäßerkennung von Endoskopikern bei third space Endoskopie T2 - Zeitschrift für Gastroenterologie N2 - Introduction:  Artificial intelligence (AI) algorithms support endoscopists in the detection and characterization of colonic polyps in clinical practice and increase the adenoma detection rate. In therapeutic interventions such as endoscopic submucosal dissection (ESD), too, relevant anatomic structures can be recognized by AI with high accuracy and marked in the endoscopic image in real time. The effect of such an application on endoscopists' vessel detection has not yet been investigated. Aims:  This study investigated the effect of an AI algorithm for real-time vessel marking during ESD on endoscopists' vessel detection rate. Methods:  59 third space endoscopy videos were extracted from the database of the University Hospital Augsburg. Submucosal blood vessels were annotated on 5470 single frames from these examinations. Together with a further 179,681 unlabeled frames, a DeepLabV3+ neural network was trained with a semi-supervised learning method to recognize submucosal blood vessels in the endoscopic image and mark them in real time. 19 endoscopists were tested with and without AI support using a video test with 101 video clips and 200 predefined blood vessels. Results:  In the video test, the algorithm recognized 93.5% of the vessels with a median detection time of 0.3 seconds. With AI support, endoscopists' vessel detection rate increased from 56.4% to 72.4% (p<0.001), and their vessel detection time decreased from 6.7 to 5.2 seconds (p<0.001). The algorithm showed false-positive detections in 4.5% of frames. Falsely detected structures were marked for a shorter duration than true positives (0.7 vs. 6.0 seconds, p<0.001). Conclusion:  AI support led to a higher vessel detection rate and a faster vessel detection time among endoscopists. A possible clinical effect on intraprocedural complication rates or procedure time could be determined in prospective studies. Y1 - 2024 U6 - https://doi.org/10.1055/s-0044-1790087 VL - 62 IS - 09 SP - e830 PB - Georg Thieme Verlag KG ER - TY - GEN A1 - Scheppach, Markus W. A1 - Nunes, Danilo Weber A1 - Arizi, X. A1 - Rauber, David A1 - Probst, Andreas A1 - Nagl, Sandra A1 - Römmele, Christoph A1 - Palm, Christoph A1 - Messmann, Helmut A1 - Ebigbo, Alanna T1 - Intraoperative Phasenerkennung bei endoskopischer Submukosadissektion mit Hilfe von künstlicher Intelligenz T2 - Zeitschrift für Gastroenterologie N2 - Introduction:  Artificial intelligence (AI) is used in gastrointestinal endoscopy for the detection and characterization of colonic polyps. The role of AI in therapeutic interventions has not yet been examined in depth. Intraprocedural phase recognition during endoscopic submucosal dissection (ESD) could enable the collection of quality indicators.
Furthermore, this technology could lead to a deeper understanding of the characteristics of the procedure and pave the way for further applications in automated documentation or standardized training. Aims: The aim of this study was the development of an AI algorithm for intraprocedural phase recognition during endoscopic submucosal dissection. Methods:  2,071,546 single frames from 27 full-length ESD videos were annotated for the superordinate classes diagnostics, marking, needle injection, dissection and bleeding, as well as the subordinate classes scope manipulation, injection and application of electrical current. A Video Swin Transformer with uniform frame sampling was trained on a training dataset (898,440 frames, 17 ESDs) and validated internally (769,523 frames, 6 ESDs). In addition to internal validation, the algorithm was evaluated on a separate test dataset (403,583 frames, 4 ESDs). Results:  The F1 score of the algorithm over all classes was 83% in internal validation and 90% in the separate test. In the separate test, true-positive (TP) rates of 100%, 100%, 96%, 97% and 93% were determined for diagnostics, marking, needle injection, dissection and bleeding. For scope manipulation, injection and application of electricity, the TP rates were 92%, 98% and 91%. Conclusion:  The developed algorithm classified full-length ESD videos frame by frame with high accuracy. Future research could develop intraoperative quality indicators on the basis of this information and enable automated documentation. Y1 - 2024 U6 - https://doi.org/10.1055/s-0044-1790084 VL - 62 IS - 09 SP - e828 PB - Georg Thieme Verlag KG ER - TY - GEN A1 - Zellmer, Stephan A1 - Rauber, David A1 - Probst, Andreas A1 - Weber, Tobias A1 - Nagl, Sandra A1 - Römmele, Christoph A1 - Schnoy, Elisabeth A1 - Palm, Christoph A1 - Messmann, Helmut A1 - Ebigbo, Alanna T1 - Verwendung künstlicher Intelligenz bei der Detektion der Papilla duodeni major T2 - Zeitschrift für Gastroenterologie N2 - Introduction Endoscopic retrograde cholangiopancreatography (ERCP) is the gold standard in the diagnosis and therapy of diseases of the pancreatobiliary tract. However, it is technically very demanding and has a comparatively high complication rate. Aims  This feasibility study examines whether a deep-learning algorithm can reliably detect the papilla and the ostium and could thus provide a suitable aid for endoscopists with little experience, particularly in training situations. Methods We examined a total of 606 image datasets from 65 patients, in which both the major duodenal papilla and the ostium were segmented. A neural network was then trained using a deep-learning algorithm, and a 5-fold cross-validation was performed. Results In 5-fold cross-validation on the 606 labeled datasets, an F1 value of 0.7908, a sensitivity of 0.7943 and a specificity of 0.9785 were achieved for the class papilla, and an F1 value of 0.5538, a sensitivity of 0.5094 and a specificity of 0.9970 for the class ostium (cf. [Tab. 1]). Averaged across classes (papilla and ostium), the F1 value was 0.6673, the sensitivity 0.6519 and the specificity 0.9877 (cf. [Tab. 2]).
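Class-averaged figures like those above are often macro-averages, i.e. the unweighted mean of per-class metrics; published averages can also be weighted by class frequency, which these abstracts do not specify. A minimal sketch with placeholder values:

import numpy as np

def macro_average(per_class_scores: list) -> float:
    # Unweighted mean of per-class scores (macro-average)
    return float(np.mean(per_class_scores))

print(macro_average([0.79, 0.55]))  # hypothetical per-class F1 values -> 0.67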
Conclusion  In this feasibility study, the neural network identified the major duodenal papilla with high sensitivity and very high specificity. In detecting the ostium, the sensitivity was markedly lower. In the future, the neural network will be trained with more data, and it is planned to apply the algorithm to videos as well. In the long term, a suitable aid for ERCP could thus be established. KW - Artificial Intelligence Y1 - 2023 UR - https://www.thieme-connect.de/products/ejournals/abstract/10.1055/s-0043-1772000 U6 - https://doi.org/10.1055/s-0043-1772000 VL - 61 IS - 08 SP - e539 EP - e540 PB - Thieme CY - Stuttgart ER - TY - JOUR A1 - Hartmann, Robin A1 - Nieberle, Felix A1 - Palm, Christoph A1 - Brébant, Vanessa A1 - Prantl, Lukas A1 - Kuehle, Reinald A1 - Reichert, Torsten E. A1 - Taxis, Juergen A1 - Ettl, Tobias T1 - Utility of Smartphone-based Three-dimensional Surface Imaging for Digital Facial Anthropometry JF - JPRAS Open N2 - Background The utilization of three-dimensional (3D) surface imaging for facial anthropometry is a significant asset for patients undergoing maxillofacial surgery. Notably, there have been recent advancements in smartphone technology that enable 3D surface imaging. In this study, anthropometric assessments of the face were performed using a smartphone and a sophisticated 3D surface imaging system. Methods 30 healthy volunteers (15 females and 15 males) were included in the study. An iPhone 14 Pro (Apple Inc., USA) using the application 3D Scanner App (Laan Consulting Corp., USA) and the Vectra M5 (Canfield Scientific, USA) were employed to create 3D surface models. For each participant, 19 anthropometric measurements were conducted on the 3D surface models. Subsequently, the anthropometric measurements generated by the two approaches were compared. The statistical techniques employed included the paired t-test, paired Wilcoxon signed-rank test, Bland–Altman analysis, and calculation of the intraclass correlation coefficient (ICC). Results All measurements showed excellent agreement between smartphone-based and Vectra M5-based measurements (ICC between 0.85 and 0.97). Statistical analysis revealed no statistically significant differences in the central tendencies for 17 of the 19 linear measurements. Despite the excellent agreement found, Bland–Altman analysis revealed that the 95% limits of agreement between the two methods exceeded ±3 mm for the majority of measurements. Conclusion Digital facial anthropometry using smartphones can serve as a valuable supplementary tool for surgeons, enhancing their communication with patients. However, the data presented suggest that digital facial anthropometry using smartphones may not yet be suitable for certain diagnostic purposes that require high accuracy. KW - Three-dimensional surface imaging KW - Stereophotogrammetry KW - Smartphone-based surface imaging KW - Digital anthropometry KW - Facial anthropometry Y1 - 2024 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:898-opus4-70348 VL - 39 SP - 330 EP - 343 PB - Elsevier ER - TY - JOUR A1 - Knödler, Leonard A1 - Baecher, Helena A1 - Kauke-Navarro, Martin A1 - Prantl, Lukas A1 - Machens, Hans-Günther A1 - Scheuermann, Philipp A1 - Palm, Christoph A1 - Baumann, Raphael A1 - Kehrer, Andreas A1 - Panayi, Adriana C.
A1 - Knoedler, Samuel T1 - Towards a Reliable and Rapid Automated Grading System in Facial Palsy Patients: Facial Palsy Surgery Meets Computer Science JF - Journal of Clinical Medicine N2 - Background: Reliable, time- and cost-effective, and clinician-friendly diagnostic tools are cornerstones in facial palsy (FP) patient management. Different automated FP grading systems have been developed but revealed persisting downsides such as insufficient accuracy and cost-intensive hardware. We aimed to overcome these barriers and programmed an automated grading system for FP patients utilizing the House and Brackmann scale (HBS). Methods: Image datasets of 86 patients seen at the Department of Plastic, Hand, and Reconstructive Surgery at the University Hospital Regensburg, Germany, between June 2017 and May 2021, were used to train the neural network and evaluate its accuracy. Nine facial poses per patient were analyzed by the algorithm. Results: The algorithm showed an accuracy of 100%. Oversampling did not result in altered outcomes, while the direct form displayed superior accuracy levels when compared to the modular classification form (n = 86; 100% vs. 99%). The Early Fusion technique was linked to improved accuracy outcomes in comparison to the Late Fusion and sequential method (n = 86; 100% vs. 96% vs. 97%). Conclusions: Our automated FP grading system combines high-level accuracy with cost- and time-effectiveness. Our algorithm may accelerate the grading process in FP patients and facilitate the FP surgeon’s workflow. Y1 - 2022 U6 - https://doi.org/10.3390/jcm11174998 VL - 11 IS - 17 PB - MDPI CY - Basel ER - TY - JOUR A1 - Souza Jr., Luis Antonio de A1 - Palm, Christoph A1 - Mendel, Robert A1 - Hook, Christian A1 - Ebigbo, Alanna A1 - Probst, Andreas A1 - Messmann, Helmut A1 - Weber, Silke A. T. A1 - Papa, João Paulo T1 - A survey on Barrett's esophagus analysis using machine learning JF - Computers in Biology and Medicine N2 - This work presents a systematic review concerning recent studies and technologies of machine learning for Barrett's esophagus (BE) diagnosis and treatment. The use of artificial intelligence is a new and promising way to evaluate such a disease. We compile works published in well-established databases, such as Science Direct, IEEE Xplore, PubMed, Plos One, Multidisciplinary Digital Publishing Institute (MDPI), Association for Computing Machinery (ACM), Springer, and Hindawi Publishing Corporation. Each selected work has been analyzed to present its objective, methodology, and results. The BE progression to dysplasia or adenocarcinoma shows a complex pattern that must be detected during endoscopic surveillance. Therefore, it is valuable to assist its diagnosis and automatic identification using computer analysis. The evaluation of BE dysplasia can be performed through manual or automated segmentation using machine learning techniques. Finally, in this survey, we reviewed recent studies focused on the automatic detection of the neoplastic region for classification purposes using machine learning methods.
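Several records in this collection report segmentation overlap in terms of the Dice score and the Intersection over Union (IoU), including the segmentation studies surveyed above. As a quick reference, the following minimal numpy sketch computes both metrics for a pair of binary masks; the toy masks are invented for illustration and do not come from any of the cited studies.

```python
import numpy as np

def dice_and_iou(pred: np.ndarray, truth: np.ndarray) -> tuple:
    """Compute Dice score and IoU between two binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    total = pred.sum() + truth.sum()
    dice = 2.0 * inter / total if total else 1.0   # empty masks agree perfectly
    iou = inter / union if union else 1.0
    return float(dice), float(iou)

# Toy example: two overlapping rectangular "lesions" on a 100x100 image.
a = np.zeros((100, 100), dtype=bool); a[20:60, 20:60] = True
b = np.zeros((100, 100), dtype=bool); b[30:70, 30:70] = True
print(dice_and_iou(a, b))  # Dice = 0.5625, IoU ~ 0.3913
```

Dice and IoU are monotonically related (Dice = 2*IoU / (1 + IoU)), which is why individual papers typically report only one of the two.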
KW - Speiseröhrenkrankheit KW - Diagnose KW - Mustererkennung KW - Maschinelles Lernen KW - Literaturbericht KW - Barrett's esophagus KW - Machine learning KW - Adenocarcinoma KW - Image processing KW - Pattern recognition KW - Computer-aided diagnosis Y1 - 2018 U6 - https://doi.org/10.1016/j.compbiomed.2018.03.014 VL - 96 SP - 203 EP - 213 PB - Elsevier ER - TY - JOUR A1 - Ebigbo, Alanna A1 - Palm, Christoph A1 - Probst, Andreas A1 - Mendel, Robert A1 - Manzeneder, Johannes A1 - Prinz, Friederike A1 - Souza Jr., Luis Antonio de A1 - Papa, João Paulo A1 - Siersema, Peter A1 - Messmann, Helmut T1 - A technical review of artificial intelligence as applied to gastrointestinal endoscopy: clarifying the terminology JF - Endoscopy International Open N2 - The growing number of publications on the application of artificial intelligence (AI) in medicine underlines the enormous importance and potential of this emerging field of research. In gastrointestinal endoscopy, AI has been applied to all segments of the gastrointestinal tract, most importantly in the detection and characterization of colorectal polyps. However, AI research has also been published on the stomach and esophagus for both neoplastic and non-neoplastic disorders. The various technical as well as medical aspects of AI, however, remain confusing, especially for non-expert physicians. This physician-engineer co-authored review explains the basic technical aspects of AI and provides a comprehensive overview of recent publications on AI in gastrointestinal endoscopy. Finally, a basic insight is offered into understanding publications on AI in gastrointestinal endoscopy. KW - Diagnose KW - Maschinelles Lernen KW - Gastroenterologie KW - Künstliche Intelligenz KW - Barrett's esophagus KW - Deep learning Y1 - 2019 U6 - https://doi.org/10.1055/a-1010-5705 VL - 07 IS - 12 SP - 1616 EP - 1623 PB - Georg Thieme Verlag CY - Stuttgart ER - TY - CHAP A1 - Wöhl, Rebecca A1 - Huber, Michaela A1 - Loibl, Markus A1 - Riebschläger, Birgit A1 - Nerlich, Michael A1 - Palm, Christoph T1 - The Impact of Semi-Automated Segmentation and 3D Analysis on Testing New Osteosynthesis Material T2 - Bildverarbeitung für die Medizin 2017; Algorithmen - Systeme - Anwendungen; Proceedings des Workshops vom 12. bis 14. März 2017 in Heidelberg N2 - A new protocol for testing osteosynthesis material postoperatively, combining semi-automated segmentation and 3D analysis of surface meshes, is proposed. Through various steps of transformation and measuring, objective data can be collected. In this study the specifications of a locking plate used for mediocarpal arthrodesis of the wrist were examined. The results show that union of the lunate, triquetrum, hamate and capitate was achieved and that the plate is comparable to coexisting arthrodesis systems. Additionally, it was shown that the complications detected correlate with the clinical outcome. In summary, this protocol is considered beneficial and should be taken into account in further studies. KW - Osteosynthese KW - Implantatwerkstoff KW - Materialprüfung KW - Bildsegmentierung KW - Dreidimensionale Bildverarbeitung Y1 - 2017 U6 - https://doi.org/10.1007/978-3-662-54345-0_30 SP - 122 EP - 127 PB - Springer CY - Berlin ER - TY - JOUR A1 - Deserno, Thomas M. A1 - Handels, Heinz A1 - Maier-Hein, Klaus H.
A1 - Mersmann, Sven A1 - Palm, Christoph A1 - Tolxdorff, Thomas A1 - Wagenknecht, Gudrun A1 - Wittenberg, Thomas T1 - Viewpoints on Medical Image Processing BT - From Science to Application JF - Current Medical Imaging Reviews N2 - Medical image processing provides core innovation for medical imaging. This paper focuses on recent developments from science to application, analyzing the past fifteen years of the proceedings of the German annual meeting on medical image processing (BVM). Furthermore, some members of the program committee present their personal points of view: (i) multi-modality for imaging and diagnosis, (ii) analysis of diffusion-weighted imaging, (iii) model-based image analysis, (iv) registration of section images, (v) from images to information in digital endoscopy, and (vi) virtual reality and robotics. Medical imaging and medical image computing are seen as a field of rapid development, with clear trends toward integrated applications in diagnostics, treatment planning and treatment. KW - Medical imaging KW - Image processing KW - Image analysis KW - Visualization KW - Multi-modal imaging KW - Diffusion-weighted imaging KW - Model-based imaging KW - Digital endoscopy KW - Bildgebendes Verfahren KW - Bildverarbeitung KW - Medizin Y1 - 2013 U6 - https://doi.org/10.2174/1573405611309020002 VL - 9 IS - 2 SP - 79 EP - 88 ER - TY - JOUR A1 - Becker, Johanna Sabine A1 - Matusch, Andreas A1 - Becker, Julia Susanne A1 - Wu, Bei A1 - Palm, Christoph A1 - Becker, Albert Johann A1 - Salber, Dagmar T1 - Mass spectrometric imaging (MSI) of metals using advanced BrainMet techniques for biomedical research JF - International Journal of Mass Spectrometry N2 - Mass spectrometric imaging (MSI) is a young, innovative analytical technique that combines different fields of advanced mass spectrometry and biomedical research with the aim of providing maps of elements and molecules, complexes or fragments. In particular, essential metals such as zinc, copper, iron and manganese play a functional role in signaling, metabolism and homeostasis of the cell. Due to the high degree of spatial organization of metals in biological systems, their distribution analysis is of key interest in life sciences. We have developed analytical techniques termed BrainMet using laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) imaging to measure the distribution of trace metals in biological tissues for biomedical research and feasibility studies, including bioaccumulation and bioavailability studies, ecological risk assessment and toxicity studies in humans and other organisms. The analytical BrainMet techniques provide quantitative images of metal distributions in brain tissue slices which can be combined with other imaging modalities such as photomicrography of native or processed tissue (histochemistry, immunostaining) and autoradiography, or with in vivo techniques such as positron emission tomography or magnetic resonance tomography. Prospective and instrumental developments are discussed concerning metalloprotein microscopy using a laser microdissection (LMD) apparatus for specific sample introduction into an inductively coupled plasma mass spectrometer (LMD-ICP-MS) or the application of the near-field effect in LA-ICP-MS (NF-LA-ICP-MS). These nano-scale mass spectrometric techniques provide improved spatial resolution down to the single cell level.
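The BrainMet record above describes generating quantitative images of metal distributions from line-by-line laser ablation scans, and the IMAGENA record later in this list describes software for the same workflow. As a hedged illustration of the basic reconstruction step (not the BrainMet or IMAGENA software itself), the numpy sketch below folds a continuous stream of per-point ion intensities into a 2D map and applies a linear calibration; the scan geometry and calibration constants are made up for the example.

```python
import numpy as np

def reconstruct_element_map(intensities, n_lines, points_per_line,
                            slope=1.0, intercept=0.0):
    """Fold a continuous list of per-point ion intensities (acquired
    line by line) into a 2D map and apply a linear calibration that
    converts raw counts into concentration units (e.g., ug/g).
    Slope and intercept are hypothetical placeholders here; in practice
    they come from matrix-matched laboratory standards."""
    img = np.asarray(intensities, dtype=float)
    img = img[: n_lines * points_per_line].reshape(n_lines, points_per_line)
    return slope * img + intercept

# Toy example: 64 scan lines x 128 ablation points of simulated counts.
raw = np.random.default_rng(0).poisson(lam=200, size=64 * 128)
metal_map = reconstruct_element_map(raw, n_lines=64, points_per_line=128,
                                    slope=0.05, intercept=0.0)
print(metal_map.shape)  # (64, 128)
```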
KW - Bioimaging KW - Brain tissue KW - Laser ablation inductively coupled plasma mass spectrometry KW - Laser microdissection inductively coupled plasma mass spectrometry KW - Metals KW - Metallomics KW - Nano-LA-ICP-MS KW - Tumour KW - Massenspektrometrie KW - Bildgebendes Verfahren KW - Metalle KW - Metallproteide KW - Gehirn Y1 - 2011 U6 - https://doi.org/10.1016/j.ijms.2011.01.015 VL - 307 IS - 1-3 SP - 3 EP - 15 PB - Elsevier ER - TY - JOUR A1 - Becker, Johanna Sabine A1 - Matusch, Andreas A1 - Palm, Christoph A1 - Salber, Dagmar A1 - Morton, Kathryn A. A1 - Becker, Julia Susanne T1 - Bioimaging of metals in brain tissue by laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) and metallomics JF - Metallomics N2 - Laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) has been developed and established as an emerging technique in the generation of quantitative images of metal distributions in thin tissue sections of brain samples (such as human, rat and mouse brain), with applications in research related to neurodegenerative disorders. A new analytical protocol is described which includes sample preparation by cryo-cutting of thin tissue sections and matrix-matched laboratory standards, mass spectrometric measurements, data acquisition, and quantitative analysis. Specific examples of the bioimaging of metal distributions in normal rodent brains are provided. Differences from the normal state were assessed in a Parkinson’s disease and a stroke brain model. Furthermore, changes during normal aging were studied. Powerful analytical techniques are also required for the determination and characterization of metal-containing proteins within a large pool of proteins, e.g., after denaturing or non-denaturing electrophoretic separation of proteins in one-dimensional and two-dimensional gels. LA-ICP-MS can be employed to detect metalloproteins in protein bands or spots separated after gel electrophoresis. MALDI-MS can then be used to identify specific metal-containing proteins in these bands or spots. The combination of these techniques is described in the second section. KW - ICP-Massenspektrometrie KW - Metalle KW - Metallproteide KW - Elektrophorese KW - Gehirn Y1 - 2010 U6 - https://doi.org/10.1039/b916722f IS - 2 SP - 104 EP - 111 PB - Oxford Academic Press ER - TY - JOUR A1 - Osterholt, Tobias A1 - Salber, Dagmar A1 - Matusch, Andreas A1 - Becker, Johanna Sabine A1 - Palm, Christoph T1 - IMAGENA: Image Generation and Analysis BT - An Interactive Software Tool handling LA-ICP-MS Data JF - International Journal of Mass Spectrometry N2 - Metals are involved in many processes of life. They are needed for enzymatic reactions and are involved in healthy processes, but can also cause disease if metal homeostasis is disturbed. Therefore, interest in assessing the spatial distribution of metals is rising in biomedical science. Imaging metal (and non-metal) isotopes by laser ablation mass spectrometry with inductively coupled plasma (LA-ICP-MS) requires a special software solution to process raw data obtained by scanning a sample line-by-line. As no ready-to-use software was available, we developed an interactive software tool for Image Generation and Analysis (IMAGENA). Although optimised for LA-ICP-MS, IMAGENA can handle other raw data as well.
The general purpose was to reconstruct images from a continuous list of raw data points, to visualise these images, and to convert them into a commonly readable image file format that can be further analysed by standard image analysis software. The generation of the image starts with loading a text file that holds a data column for every measured isotope. General spatial-domain settings such as the data offset and the image dimensions are specified by the user, who receives direct feedback by means of a preview image. IMAGENA provides tools for calibration and for correcting signal drift in the y-direction. Images are visualised in greyscale as well as in pseudo-colours, with options for contrast enhancement. Image analysis is performed in terms of smoothed line plots in row and column direction. KW - LA-ICP-MS KW - ICP-Massenspektrometrie KW - Bilderzeugung KW - Graphische Benutzeroberfläche KW - Image generation KW - Image analysis KW - Graphical user interface Y1 - 2011 U6 - https://doi.org/10.1016/j.ijms.2011.03.010 VL - 307 IS - 1-3 SP - 232 EP - 239 ER - TY - JOUR A1 - Dammers, Jürgen A1 - Axer, Markus A1 - Gräßel, David A1 - Palm, Christoph A1 - Zilles, Karl A1 - Amunts, Katrin A1 - Pietrzyk, Uwe T1 - Signal enhancement in polarized light imaging by means of independent component analysis JF - NeuroImage N2 - Polarized light imaging (PLI) enables the evaluation of fiber orientations in histological sections of human postmortem brains, with ultra-high spatial resolution. PLI is based on the birefringent properties of the myelin sheath of nerve fibers. As a result, the polarization state of light propagating through a rotating polarimeter is changed in such a way that the detected signal at each measurement unit of a charge-coupled device (CCD) camera describes a sinusoidal signal. Vectors of the fiber orientation, defined by inclination and direction angles, can then directly be derived from the optical signals employing PLI analysis. However, noise, light scatter and filter inhomogeneities interfere with the original sinusoidal PLI signals. We here introduce a novel method using independent component analysis (ICA) to decompose the PLI images into statistically independent component maps. After decomposition, gray and white matter structures can clearly be distinguished from noise and other artifacts. The signal enhancement after artifact rejection is quantitatively evaluated in 134 histological whole brain sections. Thus, the primary sinusoidal signals from polarized light imaging can be effectively restored after noise and artifact rejection utilizing ICA. Our method therefore contributes to the analysis of nerve fiber orientation in the human brain at a micrometer scale.
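The ICA-based signal enhancement described above can be prototyped with standard tools. The sketch below is a simplified stand-in, not the published pipeline: it simulates per-pixel sinusoidal PLI profiles over a series of rotation angles and decomposes the image series into independent component maps with scikit-learn's FastICA; in practice, artifact components would then be identified and rejected before reconstructing the series.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)

# Synthetic PLI series: 18 polarimeter rotation angles, 32x32 pixels.
angles = np.linspace(0, np.pi, 18, endpoint=False)
h = w = 32
phase = rng.uniform(0, np.pi, size=(h, w))            # fiber direction per pixel
series = 1.0 + 0.5 * np.sin(2 * (angles[:, None, None] - phase))
series += 0.2 * rng.normal(size=series.shape)          # noise and artifacts

# Decompose: each rotation-angle image is one observation (row),
# each pixel is one variable (column).
X = series.reshape(len(angles), -1)
ica = FastICA(n_components=6, random_state=0)
mixing_over_angles = ica.fit_transform(X)              # (n_angles, 6)
component_maps = ica.components_.reshape(6, h, w)      # independent maps

# Artifact maps would be zeroed here and the cleaned series recovered
# via ica.inverse_transform on the retained sources.
print(component_maps.shape)  # (6, 32, 32)
```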
KW - Bildgebendes Verfahren KW - Polarisiertes Licht KW - Signalverarbeitung KW - Signaltrennung KW - Komponentenanalyse KW - Gehirn Y1 - 2010 U6 - https://doi.org/10.1016/j.neuroimage.2009.08.059 VL - 49 IS - 2 SP - 1241 EP - 1248 PB - Elsevier ER - TY - JOUR A1 - Palm, Christoph A1 - Axer, Markus A1 - Gräßel, David A1 - Dammers, Jürgen A1 - Lindemeyer, Johannes A1 - Zilles, Karl A1 - Pietrzyk, Uwe A1 - Amunts, Katrin T1 - Towards ultra-high resolution fibre tract mapping of the human brain BT - registration of polarised light images and reorientation of fibre vectors JF - Frontiers in Human Neuroscience N2 - Polarised light imaging (PLI) utilises the birefringence of the myelin sheaths in order to visualise the orientation of nerve fibres in microtome sections of adult human post-mortem brains at ultra-high spatial resolution. The preparation of post-mortem brains for PLI involves fixation, freezing and cutting into 100-μm-thick sections. Hence, geometrical distortions of histological sections are inevitable and have to be removed for 3D reconstruction and subsequent fibre tracking. We here present a processing pipeline for 3D reconstruction of these sections using PLI-derived multimodal images of post-mortem brains. Blockface images of the brains were obtained during cutting; they serve as reference data for alignment and elimination of distortion artefacts. In addition to the spatial image transformation, fibre orientation vectors were reoriented using the transformation fields, which consider both affine and subsequent non-linear registration. The application of this registration and reorientation approach results in a smooth fibre vector field, which reflects brain morphology. PLI combined with 3D reconstruction and fibre tracking is a powerful tool for human brain mapping. It can also serve as an independent method for evaluating in vivo fibre tractography. KW - Bildgebendes Verfahren KW - Dreidimensionale Bildverarbeitung KW - Polarisiertes Licht KW - Gehirnkarte Y1 - 2010 U6 - https://doi.org/10.3389/neuro.09.009.2010 VL - 4 ER - TY - JOUR A1 - Axer, Markus A1 - Amunts, Katrin A1 - Gräßel, David A1 - Palm, Christoph A1 - Dammers, Jürgen A1 - Axer, Hubertus A1 - Pietrzyk, Uwe A1 - Zilles, Karl T1 - Novel Approach to the Human Connectome BT - Ultra-High Resolution Mapping of Fiber Tracts in the Brain JF - NeuroImage N2 - Signal transmission between different brain regions requires connecting fiber tracts, the structural basis of the human connectome. In contrast to animal brains, where a multitude of tract tracing methods can be used, magnetic resonance (MR)-based diffusion imaging is presently the only promising approach to study fiber tracts between specific human brain regions. However, this procedure has various inherent restrictions caused by its relatively low spatial resolution. Here, we introduce 3D-polarized light imaging (3D-PLI) to map the three-dimensional course of fiber tracts in the human brain with a resolution at a submillimeter scale, based on a voxel size of 100 μm isotropic or less. 3D-PLI visualizes nerve fibers by utilizing the intrinsic birefringence of the myelin sheaths surrounding axons. This optical method enables the demonstration of 3D fiber orientations in serial microtome sections of entire human brains. Examples of the feasibility of this novel approach are given here.
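Both PLI records above derive fiber orientation vectors from the sinusoidal intensity profile measured while the polarimeter rotates. Under the simplified per-pixel model I(rho) = a0 + b*sin(2*(rho - phi)), where phi is the in-plane fiber direction (the published analysis additionally recovers the inclination angle, which is omitted here), phi can be estimated by a linear least-squares fit, as in this hedged sketch:

```python
import numpy as np

def fiber_direction(intensities, angles):
    """Least-squares fit of I(rho) = a0 + c_s*sin(2*rho) + c_c*cos(2*rho),
    returning the in-plane direction angle phi in [0, pi).
    With I = a0 + b*sin(2*(rho - phi)): c_s = b*cos(2*phi), c_c = -b*sin(2*phi),
    hence phi = 0.5 * atan2(-c_c, c_s)."""
    A = np.column_stack([np.ones_like(angles),
                         np.sin(2 * angles),
                         np.cos(2 * angles)])
    a0, c_s, c_c = np.linalg.lstsq(A, intensities, rcond=None)[0]
    return (0.5 * np.arctan2(-c_c, c_s)) % np.pi

# Self-check with a known direction of 50 degrees.
rho = np.linspace(0, np.pi, 18, endpoint=False)
I = 1.0 + 0.4 * np.sin(2 * (rho - np.deg2rad(50)))
print(np.rad2deg(fiber_direction(I, rho)))  # ~50.0
```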
KW - Connectome KW - Human brain KW - Method KW - Polarized light imaging KW - Tractography KW - Systems biology KW - Bildgebendes Verfahren KW - Dreidimensionale Bildverarbeitung KW - Polarisiertes Licht KW - Gehirnkarte Y1 - 2011 U6 - https://doi.org/10.1016/j.neuroimage.2010.08.075 VL - 54 IS - 2 SP - 1091 EP - 1101 ER - TY - GEN A1 - Gräßel, David A1 - Axer, Markus A1 - Palm, Christoph A1 - Dammers, Jürgen A1 - Amunts, Katrin A1 - Pietrzyk, Uwe A1 - Zilles, Karl T1 - Visualization of Fiber Tracts in the Postmortem Human Brain by Means of Polarized Light T2 - NeuroImage KW - Gehirn KW - Bildgebendes Verfahren KW - Polarisiertes Licht KW - Pathologische Anatomie Y1 - 2009 U6 - https://doi.org/10.1016/S1053-8119(09)71415-6 VL - 47 IS - Suppl. 1 SP - 142 ER - TY - JOUR A1 - Souza Jr., Luis Antonio de A1 - Mendel, Robert A1 - Strasser, Sophia A1 - Ebigbo, Alanna A1 - Probst, Andreas A1 - Messmann, Helmut A1 - Papa, João Paulo A1 - Palm, Christoph T1 - Convolutional Neural Networks for the evaluation of cancer in Barrett’s esophagus: Explainable AI to lighten up the black-box JF - Computers in Biology and Medicine N2 - Even though artificial intelligence and machine learning have demonstrated remarkable performances in medical image computing, their accountability and transparency must be ensured in such evaluations. The reliability of machine learning predictions must be explained and interpreted, especially if diagnosis support is addressed. For this task, the black-box nature of deep learning techniques must be lightened up to transfer their promising results into clinical practice. Hence, we aim to investigate the use of explainable artificial intelligence techniques to quantitatively highlight discriminative regions during the classification of early cancerous tissues in patients diagnosed with Barrett’s esophagus. Four Convolutional Neural Network models (AlexNet, SqueezeNet, ResNet50, and VGG16) were analyzed using five different interpretation techniques (saliency, guided backpropagation, integrated gradients, input × gradients, and DeepLIFT) to compare their agreement with experts’ previous annotations of cancerous tissue. We could show that saliency attributes match best with the experts’ manual delineations. Moreover, there is a moderate to high correlation between the sensitivity of a model and the human-and-computer agreement. The results also showed that the higher a model’s sensitivity, the stronger the correlation between human and computational segmentation agreement. We observed a relevant relation between computational learning and experts’ insights, demonstrating how human knowledge may influence correct computational learning. KW - Deep Learning KW - Künstliche Intelligenz KW - Computerunterstützte Medizin KW - Barrett's esophagus KW - Adenocarcinoma KW - Machine learning KW - Explainable artificial intelligence KW - Computer-aided diagnosis Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:898-opus4-20126 SN - 0010-4825 VL - 135 SP - 1 EP - 14 PB - Elsevier ER - TY - JOUR A1 - Palm, Christoph T1 - Color Texture Classification by Integrative Co-Occurrence Matrices JF - Pattern Recognition N2 - Integrative Co-occurrence matrices are introduced as novel features for color texture classification. The extended Co-occurrence notation allows the comparison between integrative and parallel color texture concepts.
The information gain of the new matrices is shown quantitatively using the Kolmogorov distance and through extensive classification experiments on two datasets. Applying them to the RGB and LUV color spaces, combined color and intensity textures are studied, and the existence of intensity-independent pure color patterns is demonstrated. The results are compared with two baselines: gray-scale texture analysis and color histogram analysis. The novel features improve the classification results by up to 20% and 32% over the first and second baseline, respectively. KW - Color texture KW - Co-occurrence matrix KW - Integrative features KW - Kolmogorov distance KW - Image classification Y1 - 2004 U6 - https://doi.org/10.1016/j.patcog.2003.09.010 VL - 37 IS - 5 SP - 965 EP - 976 ER - TY - JOUR A1 - Brown, Peter A1 - Consortium, RELISH A1 - Zhou, Yaoqi A1 - Palm, Christoph T1 - Large expert-curated database for benchmarking document similarity detection in biomedical literature search JF - Database N2 - Document recommendation systems for locating relevant literature have mostly relied on methods developed a decade ago. This is largely due to the lack of a large offline gold-standard benchmark of relevant documents that cover a variety of research fields, such that newly developed literature search techniques can be compared, improved and translated into practice. To overcome this bottleneck, we have established the RElevant LIterature SearcH consortium, consisting of more than 1500 scientists from 84 countries, who have collectively annotated the relevance of over 180 000 PubMed-listed articles with regard to their respective seed (input) article/s. The majority of annotations were contributed by highly experienced, original authors of the seed articles. The collected data cover 76% of all unique PubMed Medical Subject Headings descriptors. No systematic biases were observed across different experience levels, research fields or time spent on annotations. More importantly, annotations of the same document pairs contributed by different scientists were highly concordant. We further show that the three representative baseline methods used to generate recommended articles for evaluation (Okapi Best Matching 25, Term Frequency–Inverse Document Frequency and PubMed Related Articles) had similar overall performances. Additionally, we found that these methods each tend to produce distinct collections of recommended articles, suggesting that a hybrid method may be required to completely capture all relevant articles. The established database server located at https://relishdb.ict.griffith.edu.au is freely available for the downloading of annotation data and the blind testing of new methods. We expect that this benchmark will be useful for stimulating the development of new powerful techniques for title and title/abstract-based search engines for relevant articles in biomedical research.
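The RELISH benchmark above evaluates baseline recommenders such as Okapi BM25 and TF-IDF. A minimal scikit-learn sketch of the TF-IDF baseline idea, ranking candidate documents by cosine similarity to a seed article, looks as follows; the three toy "abstracts" are invented placeholders, not RELISH data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical stand-ins for PubMed abstracts; the seed article comes first.
docs = [
    "deep learning segmentation of Barrett esophagus in endoscopy",
    "convolutional networks for polyp detection in colonoscopy",
    "mass spectrometric imaging of metals in brain tissue",
]

tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(docs)

# Rank candidate documents by cosine similarity to the seed (row 0).
scores = cosine_similarity(X[0], X).ravel()
ranking = scores.argsort()[::-1][1:]  # exclude the seed itself
print(ranking, scores[ranking])
```

A BM25 baseline would replace the TF-IDF weighting with length-normalized term saturation, but the ranking-by-similarity structure stays the same.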
KW - Information Retrieval KW - Indexierung KW - Literaturdatenbank KW - Dokument KW - Ähnlichkeitssuche KW - Suchmaschine Y1 - 2019 U6 - https://doi.org/10.1093/database/baz085 VL - 2019 SP - 1 EP - 66 PB - Oxford University Press ER - TY - GEN A1 - Ebigbo, Alanna A1 - Mendel, Robert A1 - Probst, Andreas A1 - Manzeneder, Johannes A1 - Souza Jr., Luis Antonio de A1 - Papa, João Paulo A1 - Palm, Christoph A1 - Messmann, Helmut T1 - Artificial Intelligence in Early Barrett's Cancer: The Segmentation Task T2 - Endoscopy N2 - Aims: The delineation of outer margins of early Barrett's cancer can be challenging even for experienced endoscopists. Artificial intelligence (AI) could assist endoscopists faced with this task. To date, there is very limited experience in this domain. In this study, we demonstrate the measure of overlap (Dice coefficient = D) between highly experienced Barrett endoscopists and an AI system in the delineation of cancer margins (segmentation task). Methods: An AI system with a deep convolutional neural network (CNN) was trained and tested on high-definition endoscopic images of early Barrett's cancer (n = 33) and normal Barrett's mucosa (n = 41). The reference standard for the segmentation task were the manual delineations of tumor margins by three highly experienced Barrett endoscopists. Training of the AI system included patch generation, patch augmentation and adjustment of the CNN weights. The segmentation results were then obtained from patch classification and thresholding of the class probabilities, and were evaluated using the Dice coefficient (D). Results: The Dice coefficient (D), which can range between 0 (no overlap) and 1 (complete overlap), was computed only for images correctly classified by the AI system as cancerous. At a threshold of t = 0.5, a mean value of D = 0.72 was computed. Conclusions: AI with CNN performed reasonably well in the segmentation of the tumor region in Barrett's cancer, at least when compared with expert Barrett's endoscopists. AI holds a lot of promise as a tool for better visualization of tumor margins but may need further improvement and enhancement, especially in real-time settings. KW - Speiseröhrenkrankheit KW - Maschinelles Lernen KW - Barrett's esophagus KW - Deep Learning KW - Segmentation Y1 - 2019 U6 - https://doi.org/10.1055/s-0039-1681187 VL - 51 IS - 04 SP - 6 PB - Georg Thieme Verlag CY - Stuttgart ER - TY - CHAP A1 - Palm, Christoph A1 - Scholl, Ingrid A1 - Lehmann, Thomas M. A1 - Spitzer, Klaus ED - Greiser, E. ED - Wischnewsky, M. T1 - Nutzung eines Farbkonstanz-Algorithmus zur Entfernung von Glanzlichtern in laryngoskopischen Bildern T2 - Methoden der Medizinischen Informatik, Biometrie und Epidemiologie in der modernen Informationsgesellschaft N2 - 1 Introduction Functional and organic disorders of the larynx impair a person's ability to express themselves. For diagnosis and follow-up, the vocal folds in the larynx are recorded by means of video laryngoscopy. For optimal color measurement, a 3-chip CCD camera that allows independent acquisition of the three color channels is attached to the magnifying endoscope. The hitherto subjective assessment depends on the examiner's experience and permits only a coarse classification of the clinical pictures. For objectification, quantitative parameters for color, texture and vibration are therefore being developed.
Besides the influence of the varying illuminant color on the color impression, secretion covering the vocal folds is a problem for color and texture analysis. It can lead to extended specular highlights and thus render large areas of the vocal folds unusable for color and texture analysis. This contribution presents a color constancy algorithm that provides quantitative tissue color values independently of the light source and enables the detection and elimination of specular highlights. 2 Methods The aim of the color constancy algorithm is the separation of illuminant color and tissue color. Using the dichromatic reflection model [1], the surface reflection can be identified with the color of the light source and the body reflection with the tissue color. The color impression results from a linear combination of the two color components. Their weighting depends on the acquisition geometry, in particular on the angle between the surface normal and the position vector of the light source. In a two-stage procedure, the illuminant color is estimated first and the tissue color is then determined. From this, the two color components can be separated pixel by pixel by computing the weighting factors. KW - Farbkonstanz KW - Glanzlichtelimination KW - medizinische Bildverarbeitung KW - dichromatisches Reflexionsmodell Y1 - 1998 SN - 9783820813357 SP - 300 EP - 303 PB - MMV Medien und Medizin CY - München ER - TY - CHAP A1 - Chang, Ching-Sheng A1 - Lin, Jin-Fa A1 - Lee, Ming-Ching A1 - Palm, Christoph ED - Tolxdorff, Thomas ED - Deserno, Thomas M. ED - Handels, Heinz ED - Maier, Andreas ED - Maier-Hein, Klaus H. ED - Palm, Christoph T1 - Semantic Lung Segmentation Using Convolutional Neural Networks T2 - Bildverarbeitung für die Medizin 2020. Algorithmen - Systeme - Anwendungen. Proceedings des Workshops vom 15. bis 17. März 2020 in Berlin N2 - Chest X-Ray (CXR) images as part of a non-invasive diagnosis method are commonly used in today’s medical workflow. In traditional methods, physicians usually use their experience to interpret CXR images; however, there is a large interobserver variance. Computer vision may be used as a standard for assisted diagnosis. In this study, we applied an encoder-decoder neural network architecture for automatic lung region detection. We compared a three-class approach (left lung, right lung, background) and a two-class approach (lung, background). The differentiation of left and right lungs as a direct result of semantic segmentation based on neural networks, rather than as post-processing of a lung-background segmentation, is done here for the first time. Our evaluation was done on the NIH Chest X-ray dataset, from which 1736 images were extracted and manually annotated. We achieved 94.9% mIoU and 92% mIoU as segmentation quality measures for the two-class model and the three-class model, respectively. This result is very promising for the segmentation of lung regions, particularly with the simultaneous classification of left and right lung in mind. KW - Neuronales Netz KW - Segmentierung KW - Brustkorb KW - Deep Learning KW - Encoder-Decoder Network KW - Chest X-Ray Y1 - 2020 SN - 978-3-658-29266-9 U6 - https://doi.org/10.1007/978-3-658-29267-6_17 SP - 75 EP - 80 PB - Springer Vieweg CY - Wiesbaden ER - TY - CHAP A1 - Middel, Luise A1 - Palm, Christoph A1 - Erdt, Marius T1 - Synthesis of Medical Images Using GANs T2 - Uncertainty for safe utilization of machine learning in medical imaging and clinical image-based procedures.
First International Workshop, UNSURE 2019, and 8th International Workshop, CLIP 2019, held in conjunction with MICCAI 2019, Shenzhen, China, October 17, 2019 N2 - The success of artificial intelligence in medicine depends on the availability of large amounts of high-quality training data. Sharing of medical image data, however, is often restricted by laws such as doctor-patient confidentiality. Although there are publicly available medical datasets, their quality and quantity are often low. Moreover, datasets are often imbalanced and only represent a fraction of the images generated in hospitals or clinics, and can thus usually only be used as training data for specific problems. The introduction of generative adversarial networks (GANs) provides a means to generate artificial images by training two convolutional networks. This paper proposes a method which uses GANs trained on medical images in order to generate a large number of artificial images that could be used to train other artificial intelligence algorithms. This work is a first step towards alleviating data privacy concerns and being able to publicly share data that still contains a substantial amount of the information in the original private data. The method has been evaluated on several public datasets, with quantitative and qualitative tests showing promising results. KW - Neuronale Netze KW - Deep Learning KW - Generative adversarial networks KW - Machine Learning KW - Artificial Intelligence KW - Data privacy KW - Bilderzeugung KW - Datenschutz Y1 - 2019 SN - 978-3-030-32688-3 U6 - https://doi.org/10.1007/978-3-030-32689-0_13 SN - 0302-9743 SP - 125 EP - 134 PB - Springer Nature CY - Cham ER - TY - CHAP A1 - Weiherer, Maximilian A1 - Zorn, Martin A1 - Wittenberg, Thomas A1 - Palm, Christoph ED - Tolxdorff, Thomas ED - Deserno, Thomas M. ED - Handels, Heinz ED - Maier, Andreas ED - Maier-Hein, Klaus H. ED - Palm, Christoph T1 - Retrospective Color Shading Correction for Endoscopic Images T2 - Bildverarbeitung für die Medizin 2020. Algorithmen - Systeme - Anwendungen. Proceedings des Workshops vom 15. bis 17. März 2020 in Berlin N2 - In this paper, we address the problem of retrospective color shading correction. An extension of the established gray-level shading correction algorithm based on signal envelope (SE) estimation to color images is developed using principal color components. Compared to the probably most general shading correction algorithm based on entropy minimization, SE estimation does not need any computationally expensive optimization and thus can be implemented more efficiently. We tested our new shading correction scheme on artificial as well as real endoscopic images and observed promising results. Additionally, an in-depth analysis of the stop criterion used in the SE estimation algorithm is provided, leading to the conclusion that a fixed, user-defined threshold is generally not feasible. Thus, we present new ideas on how to develop a non-parametric version of the SE estimation algorithm using entropy.
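The shading correction record above builds on signal envelope estimation. As a simpler point of comparison (explicitly not the SE algorithm of the paper), retrospective shading is often modeled as a smooth multiplicative field per color channel, estimated with a heavy low-pass filter and divided out, as in this hedged sketch:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def correct_shading(image: np.ndarray, sigma: float = 50.0) -> np.ndarray:
    """Baseline retrospective shading correction: estimate a smooth
    multiplicative shading field per color channel with a large Gaussian
    kernel and divide it out. This is a generic low-pass surrogate for
    illustration, not the signal-envelope method of the cited paper."""
    img = image.astype(float)
    out = np.empty_like(img)
    for c in range(img.shape[2]):
        field = gaussian_filter(img[..., c], sigma=sigma)
        field = np.maximum(field, 1e-6)          # avoid division by zero
        out[..., c] = img[..., c] / field * field.mean()
    return np.clip(out, 0, 255).astype(np.uint8)

# Toy endoscopic-like image with synthetic vignetting.
h, w = 240, 320
y, x = np.mgrid[0:h, 0:w]
vignette = 1.0 - 0.6 * (((x - w / 2) / w) ** 2 + ((y - h / 2) / h) ** 2)
flat = np.full((h, w, 3), 180.0)
corrected = correct_shading(flat * vignette[..., None])
```

The kernel width sigma trades off how much true image structure leaks into the estimated shading field, which is exactly the kind of parameter choice the paper's stop-criterion analysis tries to avoid.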
KW - Endoskopie KW - Bildgebendes Verfahren KW - Farbenraum KW - Graustufe Y1 - 2020 SN - 978-3-658-29266-9 U6 - https://doi.org/10.1007/978-3-658-29267-6 SP - 14 EP - 19 PB - Springer Vieweg CY - Wiesbaden ER - TY - JOUR A1 - Ebigbo, Alanna A1 - Mendel, Robert A1 - Probst, Andreas A1 - Manzeneder, Johannes A1 - Prinz, Friederike A1 - Souza Jr., Luis Antonio de A1 - Papa, João Paulo A1 - Palm, Christoph A1 - Messmann, Helmut T1 - Real-time use of artificial intelligence in the evaluation of cancer in Barrett’s oesophagus JF - Gut N2 - Based on previous work by our group with manual annotation of visible Barrett oesophagus (BE) cancer images, a real-time deep learning artificial intelligence (AI) system was developed. While an expert endoscopist conducts the endoscopic assessment of BE, our AI system captures random images from the real-time camera livestream and provides a global prediction (classification), as well as a dense prediction (segmentation) differentiating accurately between normal BE and early oesophageal adenocarcinoma (EAC). The AI system showed an accuracy of 89.9% on 14 cases with neoplastic BE. KW - Speiseröhrenkrankheit KW - Diagnose KW - Maschinelles Lernen KW - Barrett's esophagus KW - Deep learning KW - real-time Y1 - 2020 U6 - https://doi.org/10.1136/gutjnl-2019-319460 VL - 69 IS - 4 SP - 615 EP - 616 PB - BMJ CY - London ER - TY - JOUR A1 - Arribas, Julia A1 - Antonelli, Giulio A1 - Frazzoni, Leonardo A1 - Fuccio, Lorenzo A1 - Ebigbo, Alanna A1 - van der Sommen, Fons A1 - Ghatwary, Noha A1 - Palm, Christoph A1 - Coimbra, Miguel A1 - Renna, Francesco A1 - Bergman, Jacques J.G.H.M. A1 - Sharma, Prateek A1 - Messmann, Helmut A1 - Hassan, Cesare A1 - Dinis-Ribeiro, Mario J. T1 - Standalone performance of artificial intelligence for upper GI neoplasia: a meta-analysis JF - Gut N2 - Objective: Artificial intelligence (AI) may reduce underdiagnosed or overlooked upper GI (UGI) neoplastic and preneoplastic conditions, due to subtle appearance and low disease prevalence. Only disease-specific AI performances have been reported, generating uncertainty about its clinical value. Design: We searched PubMed, Embase and Scopus until July 2020, for studies on the diagnostic performance of AI in detection and characterisation of UGI lesions. Primary outcomes were pooled diagnostic accuracy, sensitivity and specificity of AI. Secondary outcomes were pooled positive (PPV) and negative (NPV) predictive values. We calculated pooled proportion rates (%), designed summary receiver operating characteristic curves with respective areas under the curves (AUCs) and performed metaregression and sensitivity analysis. Results: Overall, 19 studies on detection of oesophageal squamous cell neoplasia (ESCN) or Barrett's esophagus-related neoplasia (BERN) or gastric adenocarcinoma (GCA) were included with 218, 445, 453 patients and 7976, 2340, 13 562 images, respectively. AI-sensitivity/specificity/PPV/NPV/positive likelihood ratio/negative likelihood ratio for UGI neoplasia detection were 90% (CI 85% to 94%)/89% (CI 85% to 92%)/87% (CI 83% to 91%)/91% (CI 87% to 94%)/8.2 (CI 5.7 to 11.7)/0.111 (CI 0.071 to 0.175), respectively, with an overall AUC of 0.95 (CI 0.93 to 0.97). No difference in AI performance across ESCN, BERN and GCA was found, the AUC being 0.94 (CI 0.52 to 0.99), 0.96 (CI 0.95 to 0.98) and 0.93 (CI 0.83 to 0.99), respectively. Overall, study quality was low, with high risk of selection bias. No significant publication bias was found.
Conclusion: We found a high overall AI accuracy for the diagnosis of any neoplastic lesion of the UGI tract that was independent of the underlying condition. This may be expected to substantially reduce the miss rate of precancerous lesions and early cancer when implemented in clinical practice. KW - Artificial Intelligence Y1 - 2021 U6 - https://doi.org/10.1136/gutjnl-2020-321922 VL - 70 IS - 8 SP - 1458 EP - 1468 PB - BMJ CY - London ER - TY - JOUR A1 - Ebigbo, Alanna A1 - Mendel, Robert A1 - Rückert, Tobias A1 - Schuster, Laurin A1 - Probst, Andreas A1 - Manzeneder, Johannes A1 - Prinz, Friederike A1 - Mende, Matthias A1 - Steinbrück, Ingo A1 - Faiss, Siegbert A1 - Rauber, David A1 - Souza Jr., Luis Antonio de A1 - Papa, João Paulo A1 - Deprez, Pierre A1 - Oyama, Tsuneo A1 - Takahashi, Akiko A1 - Seewald, Stefan A1 - Sharma, Prateek A1 - Byrne, Michael F. A1 - Palm, Christoph A1 - Messmann, Helmut T1 - Endoscopic prediction of submucosal invasion in Barrett’s cancer with the use of Artificial Intelligence: A pilot Study JF - Endoscopy N2 - Background and aims: The accurate differentiation between T1a and T1b Barrett’s cancer has both therapeutic and prognostic implications but is challenging even for experienced physicians. We trained an Artificial Intelligence (AI) system on the basis of deep artificial neural networks (deep learning) to differentiate between T1a and T1b Barrett’s cancer on white-light images. Methods: Endoscopic images from three tertiary care centres in Germany were collected retrospectively. A deep learning system was trained and tested using the principles of cross-validation. A total of 230 white-light endoscopic images (108 T1a and 122 T1b) was evaluated with the AI system. For comparison, the images were also classified by experts specialized in endoscopic diagnosis and treatment of Barrett’s cancer. Results: The sensitivity, specificity, F1 score and accuracy of the AI system in the differentiation between T1a and T1b cancer lesions were 0.77, 0.64, 0.73 and 0.71, respectively. There was no statistically significant difference between the performance of the AI system and that of human experts, with sensitivity, specificity, F1 score and accuracy of 0.63, 0.78, 0.67 and 0.70, respectively. Conclusion: This pilot study demonstrates the first multicenter application of an AI-based system in the prediction of submucosal invasion in endoscopic images of Barrett’s cancer. AI scored on par with international experts in the field, but more work is necessary to improve the system and apply it to video sequences and real-life settings. Nevertheless, the correct prediction of submucosal invasion in Barrett’s cancer remains challenging for both experts and AI. KW - Maschinelles Lernen KW - Neuronales Netz KW - Speiseröhrenkrebs KW - Diagnose KW - Artificial Intelligence KW - Machine learning KW - Adenocarcinoma KW - Barrett’s cancer KW - submucosal invasion Y1 - 2021 U6 - https://doi.org/10.1055/a-1311-8570 VL - 53 IS - 09 SP - 878 EP - 883 PB - Thieme CY - Stuttgart ER - TY - JOUR A1 - Souza Jr., Luis Antonio de A1 - Passos, Leandro A. A1 - Mendel, Robert A1 - Ebigbo, Alanna A1 - Probst, Andreas A1 - Messmann, Helmut A1 - Palm, Christoph A1 - Papa, João Paulo T1 - Assisting Barrett's esophagus identification using endoscopic data augmentation based on Generative Adversarial Networks JF - Computers in Biology and Medicine N2 - Barrett's esophagus has seen a swift rise in the number of cases in recent years.
Although traditional diagnostic methods play a vital role in early-stage treatment, they are generally time- and resource-consuming. In this context, computer-aided approaches for automatic diagnosis have emerged in the literature, since early detection is intrinsically related to remission probabilities. However, they still suffer from drawbacks because of the lack of available data for machine learning purposes, thus implying reduced recognition rates. This work introduces Generative Adversarial Networks to generate high-quality endoscopic images, thereby identifying Barrett's esophagus and adenocarcinoma more precisely. Further, Convolutional Neural Networks are used for feature extraction and classification purposes. The proposed approach is validated over two datasets of endoscopic images, with the experiments conducted over the full and patch-split images. The application of Deep Convolutional Generative Adversarial Networks for the data augmentation step and LeNet-5 and AlexNet for the classification step allowed us to validate the proposed methodology over an extensive set of datasets (based on original and augmented sets), reaching 90% accuracy for the patch-based approach and 85% for the image-based approach. Both results are based on augmented datasets and are statistically different from the ones obtained in the original datasets of the same kind. Moreover, the impact of data augmentation was evaluated in the context of image description and classification, and the results obtained using synthetic images outperformed the ones over the original datasets, as well as other recent approaches from the literature. Such results highlight the importance of proper data for accurate classification in computer-assisted Barrett's esophagus and adenocarcinoma detection. KW - Maschinelles Lernen KW - Barrett's esophagus KW - Machine learning KW - Adenocarcinoma KW - Generative adversarial networks KW - Neuronales Netz KW - Adenocarcinom KW - Speiseröhrenkrebs KW - Diagnose Y1 - 2020 U6 - https://doi.org/10.1016/j.compbiomed.2020.104029 VL - 126 IS - November PB - Elsevier ER - TY - JOUR A1 - Mendel, Robert A1 - Rauber, David A1 - Souza Jr., Luis Antonio de A1 - Papa, João Paulo A1 - Palm, Christoph T1 - Error-Correcting Mean-Teacher: Corrections instead of consistency-targets applied to semi-supervised medical image segmentation JF - Computers in Biology and Medicine N2 - Semantic segmentation is an essential task in medical imaging research. Many powerful deep-learning-based approaches can be employed for this problem, but they are dependent on the availability of an expansive labeled dataset. In this work, we augment such supervised segmentation models to be suitable for learning from unlabeled data. Our semi-supervised approach, termed Error-Correcting Mean-Teacher, uses an exponential moving average model like the original Mean Teacher but introduces our new paradigm of error correction. The original segmentation network is augmented to handle this secondary correction task. Both tasks build upon the core feature extraction layers of the model. For the correction task, features detected in the input image are fused with features detected in the predicted segmentation and further processed with task-specific decoder layers. The combination of image and segmentation features allows the model to correct present mistakes in the given input pair. The correction task is trained jointly on the labeled data.
On unlabeled data, the exponential moving average of the original network corrects the student’s prediction. The combined outputs of the student’s prediction and the teacher’s correction form the basis for the semi-supervised update. We evaluate our method with the 2017 and 2018 Robotic Scene Segmentation data, the ISIC 2017 and the BraTS 2020 Challenges, a proprietary Endoscopic Submucosal Dissection dataset, Cityscapes, and Pascal VOC 2012. Additionally, we analyze the impact of the individual components and examine the behavior when the amount of labeled data varies, with experiments performed on two distinct segmentation architectures. Our method shows improvements in terms of the mean Intersection over Union over the supervised baseline and competing methods. Code is available at https://github.com/CloneRob/ECMT. KW - Semi-supervised Segmentation KW - Mean-Teacher KW - Pseudo-labels KW - Medical Imaging Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:898-opus4-57790 SN - 0010-4825 N1 - Corresponding author at OTH Regensburg: Robert Mendel VL - 154 IS - March PB - Elsevier ER - TY - JOUR A1 - Maier, Andreas A1 - Deserno, Thomas M. A1 - Handels, Heinz A1 - Maier-Hein, Klaus H. A1 - Palm, Christoph A1 - Tolxdorff, Thomas T1 - IJCARS: BVM 2021 special issue JF - International Journal of Computer Assisted Radiology and Surgery N2 - The German workshop on medical image computing (BVM) has been held in different locations in Germany for more than 20 years. In terms of content, BVM has focused on the computer-aided analysis of medical image data, with a wide range of applications, e.g. in imaging, diagnostics, operation planning, computer-aided intervention and visualization. During this time, there have been remarkable methodological developments and upheavals, on which the BVM community has worked intensively. The area of machine learning deserves particular emphasis: it has led to significant improvements, especially for classification and segmentation tasks, but increasingly also in image formation and registration. As a result, work in connection with deep learning now dominates the BVM. These developments have also contributed to the establishment of medical image processing at the interface between computer science and medicine as one of the key technologies for the digitization of the health system. In addition to the presentation of current research results, a central aspect of the BVM is the promotion of young scientists from the diverse BVM community, covering not only Germany but also Austria, Switzerland, the Netherlands and other European neighbors. The conference serves primarily doctoral students and postdocs, but also students with excellent bachelor’s and master’s theses, as a platform to present their work, to enter into professional discourse with the community, and to establish networks with specialist colleagues. Despite the many conferences and congresses that are also relevant for medical image processing, the BVM has lost none of its importance and attractiveness and has retained its permanent place in the annual conference rhythm. Building on this foundation, there are some innovations and changes this year. The BVM 2021 was organized for the first time at the Ostbayerische Technische Hochschule Regensburg (OTH Regensburg, a technical university of applied sciences). After Aachen, Berlin, Erlangen, Freiburg, Hamburg, Heidelberg, Leipzig, Lübeck, and Munich, Regensburg is not just a new venue.
OTH Regensburg is the first representative of the universities of applied sciences (HAW) to organize the conference, in contrast to universities, university hospitals, or research centers such as Fraunhofer or Helmholtz. This also reflects the further development of the research landscape in Germany, where HAWs increasingly contribute to applied research in addition to their focus on teaching. This development is also reflected in the contributions submitted to the BVM in recent years. At BVM 2021, which was held in a virtual format for the first time due to the Corona pandemic, an attractive and high-quality program was offered. Fortunately, the number of submissions increased significantly. Out of 97 submissions, 26 presentations, 51 posters and 5 software demonstrations were accepted via an anonymized reviewing process with three reviews each. The three best works, selected by a separate committee, were awarded BVM prizes. Based on these high-quality submissions, we are able to present another special issue in the International Journal of Computer Assisted Radiology and Surgery (IJCARS). Out of the 97 submissions, the ones with the highest scores were invited to submit an extended version of their paper to be presented in IJCARS. As a result, we are now able to present this special issue with seven excellent articles. Many submissions focus on machine learning in a medical context. KW - Medical Image Computing KW - Bildgebendes Verfahren KW - Medizin Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:898-opus4-21666 VL - 16 SP - 2067 EP - 2068 PB - Springer ER -