TY - CHAP A1 - Metzler, V. A1 - Aach, T. A1 - Palm, Christoph A1 - Lehmann, Thomas M. T1 - Texture Classification of Graylevel Images by Multiscale Cross-Co-Occurrence Matrices T2 - Proceedings 15th International Conference on Pattern Recognition (ICPR-2000) N2 - Local gray-level dependencies of natural images can be modelled by means of co-occurrence matrices containing the joint probabilities of gray-level pairs. Texture, however, is a resolution-dependent phenomenon, and hence classification depends on the chosen scale. Since there is no optimal scale for all textures, we employ a multiscale approach that acquires textural features at several scales. Thus, linear and nonlinear scale-spaces are analyzed by multiscale co-occurrence matrices that describe the statistical behavior of a texture in scale-space. Classification is then performed on the basis of texture features taken from the individual scale with the highest discriminatory power. By considering cross-scale occurrences of gray-level pairs, the impact of filters on the features is described and used for the classification of natural textures. This novel method was found to significantly improve the classification rates of the common co-occurrence matrix approach on standard textures. Y1 - 2000 U6 - https://doi.org/10.1109/ICPR.2000.906133 SP - 549 EP - 552 ER -
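The cross-co-occurrence statistics above are not part of standard image-processing libraries, but the basic ingredient, co-occurrence features computed over a Gaussian scale-space, can be sketched as follows. This is a minimal illustration assuming scikit-image; the function name, the scale set, the quantization to 32 levels, and the choice of Haralick properties are assumptions, not the authors' implementation.

```python
# Minimal sketch: co-occurrence (GLCM) features at several Gaussian scales.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from skimage.filters import gaussian

def multiscale_glcm_features(img, sigmas=(0.0, 1.0, 2.0, 4.0), levels=32):
    """Haralick-style features for one grayscale uint8 texture at several scales."""
    feats = []
    for sigma in sigmas:
        smoothed = gaussian(img, sigma=sigma) if sigma > 0 else img / 255.0
        q = np.clip((smoothed * levels).astype(np.uint8), 0, levels - 1)
        # Joint probabilities of gray-level pairs at distance 1, four angles
        glcm = graycomatrix(q, distances=[1],
                            angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                            levels=levels, symmetric=True, normed=True)
        for prop in ("contrast", "homogeneity", "energy", "correlation"):
            feats.append(graycoprops(glcm, prop).mean())
    return np.asarray(feats)  # one feature vector over all scales
```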
TY - GEN A1 - Meinikheim, Michael A1 - Mendel, Robert A1 - Probst, Andreas A1 - Scheppach, Markus W. A1 - Messmann, Helmut A1 - Palm, Christoph A1 - Ebigbo, Alanna T1 - Optical Flow als Methode zur Qualitätssicherung KI-unterstützter Untersuchungen von Barrett-Ösophagus und Barrett-Ösophagus assoziierten Neoplasien T2 - Zeitschrift für Gastroenterologie N2 - Introduction: Excessive movement in the image can reduce the performance of clinical decision support systems (CDSS) based on artificial intelligence (AI). Optical flow (OF) is a method for localizing and quantifying motion between consecutive images. Aim: The aim is to improve human-computer interaction (HCI) and to offer endoscopists who use our AI system "Barrett-Ampel" for support in the assessment of Barrett's esophagus (BE) real-time feedback on the current data quality. Methods: For this purpose, unaltered videos in white light (WL), narrow band imaging (NBI), and texture and color enhancement imaging (TXI) from eight endoscopic examinations of histologically confirmed BE and Barrett's esophagus-associated neoplasia (BERN) were analyzed by our AI algorithm. The OF measures used to assess image quality were the mean magnitude and the entropy of the histogram of angles. Frames were extracted automatically when the predefined thresholds of 3.0 for the mean magnitude and 9.0 for the entropy of the angle histogram were exceeded. Experts first watched the videos without AI support and rated whether confounding factors negatively affected the confidence with which a diagnosis could be made in the given case. They then reviewed the extracted frames. Results: Uniform movement in one direction, for example while advancing the endoscope, was reflected in an increased magnitude with insignificantly changed entropy. Chaotic movement, for example during irrigation, was associated with increased entropy. Overall, an unsteady endoscopic view, fluid, and excessive esophageal motility were associated with increased OF and correlated with the experts' opinion of video quality. The OF and the experts' subjective perception of the usability of the image sequences were directly proportional. When the predefined OF thresholds were exceeded, the associated image quality was insufficient for a definitive interpretation, even for experts, in 94% of cases. Conclusion: OF has the potential to provide endoscopists with real-time feedback on the quality of the data input, thereby not only improving HCI but also enabling the optimal performance of AI algorithms. KW - Optical Flow Y1 - 2022 U6 - https://doi.org/10.1055/s-0042-1754997 VL - 60 IS - 08 PB - Georg Thieme Verlag CY - Stuttgart ER -
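The two quality measures named in the abstract, mean flow magnitude (threshold 3.0) and entropy of the angle histogram (threshold 9.0), map directly onto a few OpenCV and NumPy calls. A minimal sketch follows; only the two thresholds come from the abstract, while the Farneback parameters and the 1024-bin histogram are assumptions.

```python
# Sketch: per-frame optical-flow quality gating for an endoscopic video stream.
import cv2
import numpy as np

def flow_quality(prev_gray, curr_gray, mag_thresh=3.0, ent_thresh=9.0):
    """Return (mean magnitude, angle-histogram entropy, flag) for a frame pair."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])  # ang in radians
    mean_mag = float(mag.mean())
    hist, _ = np.histogram(ang, bins=1024, range=(0.0, 2.0 * np.pi))
    p = hist / max(hist.sum(), 1)
    entropy = float(-(p[p > 0] * np.log2(p[p > 0])).sum())
    # Flag the frame for extraction when either threshold is exceeded
    return mean_mag, entropy, (mean_mag > mag_thresh or entropy > ent_thresh)
```

Uniform advancement of the endoscope raises the mean magnitude while leaving the entropy largely unchanged; chaotic motion such as irrigation spreads the angle histogram and raises the entropy, matching the behavior reported above.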
TY - GEN A1 - Meinikheim, Michael A1 - Mendel, Robert A1 - Scheppach, Markus W. A1 - Probst, Andreas A1 - Prinz, Friederike A1 - Schwamberger, Tanja A1 - Schlottmann, Jakob A1 - Gölder, Stefan Karl A1 - Walter, Benjamin A1 - Steinbrück, Ingo A1 - Palm, Christoph A1 - Messmann, Helmut A1 - Ebigbo, Alanna T1 - Einsatz von künstlicher Intelligenz (KI) als Entscheidungsunterstützungssystem für nicht-Experten bei der Beurteilung von Barrett-Ösophagus assoziierten Neoplasien (BERN) T2 - Zeitschrift für Gastroenterologie N2 - Introduction: The reliable detection and characterization of Barrett's esophagus-associated neoplasia (BERN) is challenging even for experienced endoscopists. Aim: The aim of this study was to evaluate the add-on effect of an artificial intelligence (AI) system (Barrett-Ampel) as a decision support system for endoscopists without expertise in the assessment of BERN. Materials and methods: Twelve videos in white light (WL), narrow-band imaging (NBI), and texture and color enhanced imaging (TXI) of histologically confirmed Barrett's metaplasia or BERN were evaluated by experts and by examiners without Barrett's expertise. Participants were asked to identify any BERN appearing in the videos and, where applicable, to mark the optimal biopsy site. Our AI system was subjected to the same test; it segmented BERN in real time and differentiated it by color from the surrounding epithelium. The videos were then shown to the participants with additional AI support, and based on this new information they were asked to re-evaluate their initial assessment. Results: The Barrett-Ampel identified all BERN regardless of the imaging mode used (WL, NBI, TXI). Two inflammatory lesions were misinterpreted (accuracy = 75%). While experts achieved comparable results (accuracy = 70.8%), endoscopists without expertise in the assessment of Barrett's metaplasia reached an accuracy of only 58.3%. When supported by our AI system, however, the non-experts reached an accuracy of 75%. Conclusion: Our AI system has the potential to serve as a decision support system for differentiating between Barrett's metaplasia and BERN and thus to assist endoscopists without the corresponding expertise. A limitation of this study is the low number of included videos; randomized controlled clinical trials are needed to confirm these results. KW - Barrett-Ösophagus KW - Künstliche Intelligenz Y1 - 2022 U6 - https://doi.org/10.1055/s-0042-1745653 VL - 60 IS - 4 SP - 251 PB - Thieme CY - Stuttgart ER -

TY - GEN A1 - Meinikheim, Michael A1 - Mendel, Robert A1 - Scheppach, Markus W. A1 - Probst, Andreas A1 - Prinz, Friederike A1 - Schwamberger, Tanja A1 - Schlottmann, Jakob A1 - Gölder, Stefan Karl A1 - Walter, Benjamin A1 - Steinbrück, Ingo A1 - Palm, Christoph A1 - Messmann, Helmut A1 - Ebigbo, Alanna T1 - INFLUENCE OF AN ARTIFICIAL INTELLIGENCE (AI) BASED DECISION SUPPORT SYSTEM (DSS) ON THE DIAGNOSTIC PERFORMANCE OF NON-EXPERTS IN BARRETT'S ESOPHAGUS RELATED NEOPLASIA (BERN) T2 - Endoscopy N2 - Aims Barrett's esophagus related neoplasia (BERN) is difficult to detect and characterize during endoscopy, even for expert endoscopists. We aimed to assess the add-on effect of an Artificial Intelligence (AI) algorithm (Barrett-Ampel) as a decision support system (DSS) for non-expert endoscopists in the evaluation of Barrett's esophagus (BE) and BERN. Methods Twelve videos with multimodal imaging (white light (WL), narrow-band imaging (NBI), and texture and color enhanced imaging (TXI)) of histologically confirmed BE and BERN were assessed by expert and non-expert endoscopists. For each video, endoscopists were asked to identify the area of BERN and decide on the biopsy spot. Videos were assessed by the AI algorithm, and regions of BERN were highlighted in real time by a transparent overlay. Finally, endoscopists were shown the AI videos and asked to either confirm or change their initial decision based on the AI support. Results Barrett-Ampel correctly identified all areas of BERN, irrespective of the imaging modality (WL, NBI, TXI), but misinterpreted two inflammatory lesions (accuracy = 75%). Expert endoscopists had a similar performance (accuracy = 70.8%), while non-experts had an accuracy of 58.3%. When AI was implemented as a DSS, non-expert endoscopists improved their diagnostic accuracy to 75%. Conclusions AI may have the potential to support non-expert endoscopists in the assessment of videos of BE and BERN. Limitations of this study include the low number of videos used. Randomized clinical trials in a real-life setting should be performed to confirm these results. KW - Artificial Intelligence KW - Barrett's Esophagus KW - Speiseröhrenkrankheit KW - Künstliche Intelligenz KW - Diagnose Y1 - 2022 U6 - https://doi.org/10.1055/s-00000012 VL - 54 IS - S01 SP - S39 PB - Thieme ER -
TY - JOUR A1 - Scheppach, Markus W. A1 - Mendel, Robert A1 - Probst, Andreas A1 - Meinikheim, Michael A1 - Palm, Christoph A1 - Messmann, Helmut A1 - Ebigbo, Alanna T1 - ARTIFICIAL INTELLIGENCE (AI) – ASSISTED VESSEL AND TISSUE RECOGNITION IN THIRD-SPACE ENDOSCOPY JF - Endoscopy N2 - Aims Third-space endoscopy procedures such as endoscopic submucosal dissection (ESD) and peroral endoscopic myotomy (POEM) are complex interventions with an elevated risk of operator-dependent adverse events, such as intra-procedural bleeding and perforation. We aimed to design an artificial intelligence clinical decision support solution (AI-CDSS, "Smart ESD") for the detection and delineation of vessels, tissue structures, and instruments during third-space endoscopy procedures. Methods Twelve full-length third-space endoscopy videos were extracted from the Augsburg University Hospital database. 1686 frames were annotated for the following categories: submucosal layer, blood vessels, electrosurgical knife, and endoscopic instrument. A DeepLabv3+ neural network with a 101-layer ResNet backbone was trained and validated internally. Finally, the ability of the AI system to detect visible vessels during ESD and POEM was determined on 24 separate video clips with durations of 7 to 46 seconds, showing 33 predefined vessels. These video clips were also assessed by an expert in third-space endoscopy. Results Smart ESD showed a vessel detection rate (VDR) of 93.94%, while an average of 1.87 false-positive signals per minute was recorded. The VDR of the expert endoscopist was 90.1%, with no false-positive findings. On the internal validation data set using still images, the AI system demonstrated an intersection over union (IoU), mean Dice score, and pixel accuracy of 63.47%, 76.18%, and 86.61%, respectively. Conclusions This is the first AI-CDSS aiming to mitigate operator-dependent limitations during third-space endoscopy. Further clinical trials are underway to better understand the role of AI in such procedures. KW - Artificial Intelligence KW - Third-Space Endoscopy KW - Smart ESD Y1 - 2022 U6 - https://doi.org/10.1055/s-0042-1745037 VL - 54 IS - S01 SP - S175 PB - Thieme ER -
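The three validation metrics reported above (IoU, Dice score, pixel accuracy) are standard overlap measures. A minimal sketch for binary NumPy masks follows; the study's multi-class averaging may differ.

```python
# Sketch: IoU, Dice score, and pixel accuracy for a binary prediction/ground truth.
import numpy as np

def segmentation_metrics(pred, gt):
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    iou = inter / union if union else 1.0          # intersection over union
    denom = pred.sum() + gt.sum()
    dice = 2 * inter / denom if denom else 1.0     # Dice similarity coefficient
    pixel_acc = (pred == gt).mean()                # fraction of matching pixels
    return iou, dice, pixel_acc
```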
TY - CHAP A1 - Weber Nunes, Danilo A1 - Rauber, David A1 - Palm, Christoph ED - Palm, Christoph ED - Breininger, Katharina ED - Deserno, Thomas M. ED - Handels, Heinz ED - Maier, Andreas ED - Maier-Hein, Klaus H. ED - Tolxdorff, Thomas T1 - Self-supervised 3D Vision Transformer Pre-training for Robust Brain Tumor Classification T2 - Bildverarbeitung für die Medizin 2025: Proceedings, German Conference on Medical Image Computing, Regensburg March 09-11, 2025 N2 - Brain tumors pose significant challenges in neurology, making precise classification crucial for prognosis and treatment planning. This work investigates the effectiveness of a self-supervised learning approach, masked autoencoding (MAE), to pre-train a vision transformer (ViT) model for brain tumor classification. Our method uses non-domain-specific data, leveraging the ADNI and OASIS-3 MRI datasets, which primarily focus on degenerative diseases, for pre-training. The model is subsequently fine-tuned and evaluated on the BraTS glioma and meningioma datasets, representing a novel use of these datasets for tumor classification. The pre-trained MAE ViT model achieves an average F1 score of 0.91 in a 5-fold cross-validation setting, outperforming the nnU-Net encoder trained from scratch, particularly under limited data conditions. These findings highlight the potential of self-supervised MAE in enhancing brain tumor classification accuracy, even with restricted labeled data. Y1 - 2025 U6 - https://doi.org/10.1007/978-3-658-47422-5_69 SP - 298 EP - 303 PB - Springer Vieweg CY - Wiesbaden ER -
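As a rough illustration of the masked-autoencoding idea used above: a fraction of the patch tokens is hidden, only the visible tokens are encoded, and the masked patches are reconstructed from the encoded context plus learnable mask tokens. The following PyTorch sketch is a toy version; all sizes, the 75% mask ratio, and the class name TinyMAE are assumptions, not the paper's configuration.

```python
# Toy sketch of MAE pre-training on pre-extracted (e.g., 3D) patch vectors.
import torch
import torch.nn as nn

def gather_tokens(t, idx):
    # Select tokens along the sequence dimension by index (B, k) -> (B, k, D)
    return torch.gather(t, 1, idx[..., None].expand(-1, -1, t.size(-1)))

class TinyMAE(nn.Module):
    def __init__(self, n_patches=216, patch_dim=512, dim=192, mask_ratio=0.75):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.embed = nn.Linear(patch_dim, dim)
        self.pos = nn.Parameter(torch.zeros(1, n_patches, dim))
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True), 4)
        self.decoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True), 1)
        self.head = nn.Linear(dim, patch_dim)

    def forward(self, patches):                        # (B, N, patch_dim)
        B, N, _ = patches.shape
        keep = int(N * (1 - self.mask_ratio))
        idx = torch.rand(B, N, device=patches.device).argsort(1)
        vis, masked = idx[:, :keep], idx[:, keep:]
        x = self.embed(patches) + self.pos
        enc = self.encoder(gather_tokens(x, vis))      # encode visible patches only
        # Mask tokens carry the positional embedding of the missing patches
        mask_tok = self.mask_token + gather_tokens(self.pos.expand(B, -1, -1), masked)
        dec = self.decoder(torch.cat([enc, mask_tok], 1))
        recon = self.head(dec[:, keep:])               # predictions at masked slots
        target = gather_tokens(patches, masked)
        return nn.functional.mse_loss(recon, target)   # loss on masked patches only

loss = TinyMAE()(torch.randn(2, 216, 512))             # e.g., 6x6x6 patches of a volume
```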
TY - CHAP A1 - Weiherer, Maximilian A1 - von Riedheim, Antonia A1 - Brébant, Vanessa A1 - Egger, Bernhard A1 - Palm, Christoph ED - Palm, Christoph ED - Breininger, Katharina ED - Deserno, Thomas M. ED - Handels, Heinz ED - Maier, Andreas ED - Maier-Hein, Klaus H. ED - Tolxdorff, Thomas T1 - iRBSM: A Deep Implicit 3D Breast Shape Model T2 - Bildverarbeitung für die Medizin 2025: Proceedings, German Conference on Medical Image Computing, Regensburg March 09-11, 2025 N2 - We present the first deep implicit 3D shape model of the female breast, building upon and improving the recently proposed Regensburg Breast Shape Model (RBSM). Compared to its PCA-based predecessor, our model employs implicit neural representations; hence, it can be trained on raw 3D breast scans and eliminates the need for computationally demanding non-rigid registration, a task that is particularly difficult for featureless breast shapes. The resulting model, dubbed iRBSM, captures detailed surface geometry, including fine structures such as nipples and belly buttons, is highly expressive, and outperforms the RBSM on different surface reconstruction tasks. Finally, leveraging the iRBSM, we present a prototype application to reconstruct 3D breast shapes from just a single image. Model and code are publicly available at https://rbsm.re-mic.de/implicit. Y1 - 2025 U6 - https://doi.org/10.1007/978-3-658-47422-5_11 SP - 38 EP - 43 PB - Springer Vieweg CY - Wiesbaden ER -

TY - CHAP A1 - Gutbrod, Max A1 - Geisler, Benedikt A1 - Rauber, David A1 - Palm, Christoph ED - Maier, Andreas ED - Deserno, Thomas M. ED - Handels, Heinz ED - Maier-Hein, Klaus H. ED - Palm, Christoph ED - Tolxdorff, Thomas T1 - Data Augmentation for Images of Chronic Foot Wounds T2 - Bildverarbeitung für die Medizin 2024: Proceedings, German Workshop on Medical Image Computing, March 10-12, 2024, Erlangen N2 - Training data for neural networks is often scarce in the medical domain, which frequently results in models that struggle to generalize and consequently show poor performance on unseen datasets. Generally, adding augmentation methods to the training pipeline considerably enhances a model's performance. Using the dataset of the Foot Ulcer Segmentation Challenge, we analyze two additional augmentation methods in the domain of chronic foot wounds: local warping of wound edges, along with projection and blurring of shapes inside wounds. Our experiments show that improvements in the Dice similarity coefficient and Normalized Surface Distance metrics depend on a sensible selection of those augmentation methods. Y1 - 2024 U6 - https://doi.org/10.1007/978-3-658-44037-4_71 SP - 261 EP - 266 PB - Springer CY - Wiesbaden ER -
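A conceptual sketch of the first augmentation, a local warp restricted to wound edges, assuming OpenCV, NumPy, and a binary wound mask. The function name and all parameter values are illustrative, and the paper's exact warping scheme may differ; in practice, the segmentation mask would be warped with the same displacement field.

```python
# Sketch: smooth random warp whose strength is confined to the wound boundary.
import cv2
import numpy as np

def warp_wound_edges(image, mask, strength=8.0, sigma=15.0, edge_width=7, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    h, w = mask.shape
    # Band around the wound boundary where the warp is allowed to act
    kernel = np.ones((edge_width, edge_width), np.uint8)
    edge_band = cv2.morphologyEx(mask.astype(np.uint8), cv2.MORPH_GRADIENT, kernel)
    band = cv2.GaussianBlur(edge_band.astype(np.float32), (0, 0), sigma / 3)
    # Smooth random displacement field, attenuated to zero away from the edges
    dx = cv2.GaussianBlur(rng.standard_normal((h, w)).astype(np.float32),
                          (0, 0), sigma) * strength * band
    dy = cv2.GaussianBlur(rng.standard_normal((h, w)).astype(np.float32),
                          (0, 0), sigma) * strength * band
    gx, gy = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    return cv2.remap(image, gx + dx, gy + dy, cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_REFLECT)
```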
TY - GEN A1 - Schroeder, Josef A. A1 - Semmelmann, Matthias A1 - Siegmund, Heiko A1 - Grafe, Claudia A1 - Evert, Matthias A1 - Palm, Christoph T1 - Improved interactive computer-assisted approach for evaluation of ultrastructural cilia abnormalities T2 - Ultrastructural Pathology KW - Zilie KW - Ultrastruktur KW - Anomalie KW - Bildverarbeitung KW - Computerunterstütztes Verfahren Y1 - 2017 U6 - https://doi.org/10.1080/01913123.2016.1270978 VL - 41 IS - 1 SP - 112 EP - 113 ER -

TY - CHAP A1 - Rückert, Tobias A1 - Rieder, Maximilian A1 - Feussner, Hubertus A1 - Wilhelm, Dirk A1 - Rückert, Daniel A1 - Palm, Christoph ED - Maier, Andreas ED - Deserno, Thomas M. ED - Handels, Heinz ED - Maier-Hein, Klaus H. ED - Palm, Christoph ED - Tolxdorff, Thomas T1 - Smoke Classification in Laparoscopic Cholecystectomy Videos Incorporating Spatio-temporal Information T2 - Bildverarbeitung für die Medizin 2024: Proceedings, German Workshop on Medical Image Computing, March 10-12, 2024, Erlangen N2 - Heavy smoke development represents an important challenge for operating physicians during laparoscopic procedures and can potentially affect the success of an intervention due to reduced visibility and orientation. Reliable and accurate recognition of smoke is therefore a prerequisite for the use of downstream systems such as automated smoke evacuation systems. Current approaches distinguish between frames with and without smoke but often ignore the temporal context inherent in endoscopic video data. In this work, we therefore present a method that utilizes the pixel-wise displacement between randomly sampled images and their preceding frames, determined using an optical flow algorithm, and provides the transformed magnitude of the displacement as an additional input to the network. Further, we incorporate the temporal context at evaluation time by applying an exponential moving average to the estimated class probabilities of the model output to obtain more stable and robust results over time. We evaluate our method on two convolution-based architectures and one state-of-the-art transformer architecture and show improvements in the classification results over a baseline approach, regardless of the network used. Y1 - 2024 U6 - https://doi.org/10.1007/978-3-658-44037-4_78 SP - 298 EP - 303 PB - Springer CY - Wiesbaden ER -
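The evaluation-time smoothing described above, an exponential moving average over the per-frame class probabilities, can be sketched as follows; the smoothing factor is an assumption, and the optical-flow input channel is omitted for brevity.

```python
# Sketch: EMA smoothing of per-frame class probabilities over a video stream.
import numpy as np

def smooth_probabilities(prob_stream, alpha=0.9):
    """Yield EMA-smoothed class probabilities for a stream of frame predictions."""
    ema = None
    for p in prob_stream:              # p: per-class probabilities of one frame
        p = np.asarray(p, dtype=float)
        ema = p if ema is None else alpha * ema + (1 - alpha) * p
        yield ema / ema.sum()          # renormalize for numerical safety
```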
TY - INPR A1 - Mendel, Robert A1 - Rückert, Tobias A1 - Wilhelm, Dirk A1 - Rückert, Daniel A1 - Palm, Christoph T1 - Motion-Corrected Moving Average: Including Post-Hoc Temporal Information for Improved Video Segmentation N2 - Real-time computational speed and a high degree of precision are requirements for computer-assisted interventions. Applying a segmentation network to a medical video processing task can introduce significant inter-frame prediction noise. Existing approaches can reduce inconsistencies by including temporal information but often impose requirements on the architecture or the dataset. This paper proposes a method to include temporal information in any segmentation model and, thus, a technique to improve video segmentation performance without alterations during training or additional labeling. With Motion-Corrected Moving Average, we refine the exponential moving average between the current and previous predictions. Using optical flow to estimate the movement between consecutive frames, we can shift the prior term in the moving-average calculation to align with the geometry of the current frame. The optical flow calculation does not require the output of the model and can therefore be performed in parallel, leading to no significant runtime penalty for our approach. We evaluate our approach on two publicly available segmentation datasets and two proprietary endoscopic datasets and show improvements over a baseline approach. KW - Deep Learning KW - Video KW - Segmentation Y1 - 2024 U6 - https://doi.org/10.48550/arXiv.2403.03120 ER -
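A minimal sketch of the idea, assuming OpenCV and NumPy: the running average is warped into the geometry of the current frame via backward optical flow before blending. The smoothing factor and Farneback parameters are assumptions, not the authors' exact implementation.

```python
# Sketch: motion-corrected exponential moving average of dense predictions.
import cv2
import numpy as np

def motion_corrected_moving_average(prev_avg, curr_pred, prev_gray, curr_gray,
                                    alpha=0.8):
    """Warp the running average into the current frame's geometry, then blend."""
    # Flow from current to previous frame: for each current pixel it points to
    # the location that pixel came from, which is what backward warping needs.
    flow = cv2.calcOpticalFlowFarneback(curr_gray, prev_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = curr_gray.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    aligned_prev = cv2.remap(prev_avg.astype(np.float32), map_x, map_y,
                             cv2.INTER_LINEAR, borderMode=cv2.BORDER_REPLICATE)
    return alpha * aligned_prev + (1 - alpha) * curr_pred
```

Because the flow only needs the raw frames, not the model output, it can run in parallel with the segmentation network, which is the reason the abstract reports no significant runtime penalty.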
TY - GEN A1 - Scheppach, Markus W. A1 - Mendel, Robert A1 - Probst, Andreas A1 - Meinikheim, Michael A1 - Palm, Christoph A1 - Messmann, Helmut A1 - Ebigbo, Alanna T1 - Artificial Intelligence (AI) – assisted vessel and tissue recognition during third space endoscopy (Smart ESD) T2 - Zeitschrift für Gastroenterologie N2 - Clinical setting Third space procedures such as endoscopic submucosal dissection (ESD) and peroral endoscopic myotomy (POEM) are complex minimally invasive techniques with an elevated risk of operator-dependent adverse events such as bleeding and perforation. This risk arises from accidental dissection into the muscle layer or through submucosal blood vessels, as the submucosal cutting plane within the expanding resection site is not always apparent. Deep learning algorithms have shown considerable potential for the detection and characterization of gastrointestinal lesions. So-called AI clinical decision support solutions (AI-CDSS) are commercially available for polyp detection during colonoscopy. Until now, these computer programs have concentrated on diagnostics, whereas an AI-CDSS for interventional endoscopy has not yet been introduced. We aimed to develop an AI-CDSS ("Smart ESD") for real-time intra-procedural detection and delineation of blood vessels, tissue structures, and endoscopic instruments during third-space endoscopic procedures. Characteristics of Smart ESD An AI-CDSS was developed that delineates blood vessels, tissue structures, and endoscopic instruments during third-space endoscopy in real time. The output can be displayed as an overlay over the endoscopic image with different modes of visualization, such as a color-coded semi-transparent area overlay or border tracing (demonstration video). In this way, the optimal dissection layer can be visualized, which lies just above or directly at the muscle layer, depending on the applied technique (ESD or POEM). Furthermore, relevant blood vessels (thickness > 1 mm) are delineated. Spatial proximity between the electrosurgical knife and a blood vessel triggers a warning signal. With this guidance, inadvertent dissection through blood vessels can be averted. Technical specifications A DeepLabv3+ neural network architecture with KSAC and a 101-layer ResNeSt backbone was used for the development of Smart ESD. It was trained and validated with 2565 annotated still images from 27 full-length third-space endoscopic videos. The annotation classes were blood vessel, submucosal layer, muscle layer, electrosurgical knife, and endoscopic instrument shaft. A test on a separate data set yielded an intersection over union (IoU) of 68%, a Dice score of 80%, and a pixel accuracy of 87%, demonstrating a high overlap between expert and AI segmentation. Further experiments on standardized video clips showed a mean vessel detection rate (VDR) of 85%, with values of 92%, 70%, and 95% for POEM, rectal ESD, and esophageal ESD, respectively. False-positive measurements occurred 0.75 times per minute. Seven out of nine vessels which caused intraprocedural bleeding were caught by the algorithm, as were both vessels which required hemostasis via hemostatic forceps. Future perspectives Smart ESD performed well for vessel and tissue detection and delineation on still images as well as on video clips. During a live demonstration in the endoscopy suite, the clinical applicability of the innovation was examined. The lag time for processing the live endoscopic image was too short to be visually detectable for the interventionist. Even though the algorithm could not be applied during actual dissection by the interventionist, Smart ESD appeared readily deployable during visual assessment by ESD experts. Therefore, we plan to conduct a clinical trial in order to obtain CE certification of the algorithm. This new technology may improve procedural safety and speed, as well as the training of modern minimally invasive endoscopic resection techniques. KW - Artificial Intelligence KW - Medical Image Computing KW - Endoscopy KW - Bildgebendes Verfahren KW - Medizin KW - Künstliche Intelligenz KW - Endoskopie Y1 - 2022 U6 - https://doi.org/10.1055/s-0042-1755110 VL - 60 IS - 08 PB - Georg Thieme Verlag CY - Stuttgart ER -
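The proximity warning described above can be approximated with a distance transform over the predicted masks. A sketch assuming OpenCV and binary masks from the segmentation model, with a placeholder pixel threshold:

```python
# Sketch: warn when the segmented knife comes close to a delineated vessel.
import cv2
import numpy as np

def vessel_knife_warning(vessel_mask, knife_mask, min_dist_px=20):
    """Return True when any knife pixel is within min_dist_px of a vessel pixel."""
    # distanceTransform measures the distance to the nearest zero pixel, so
    # invert the vessel mask: vessel pixels become 0, background becomes 1.
    inv = (vessel_mask == 0).astype(np.uint8)
    dist_to_vessel = cv2.distanceTransform(inv, cv2.DIST_L2, 3)
    knife_pixels = knife_mask.astype(bool)
    if not knife_pixels.any():
        return False
    return float(dist_to_vessel[knife_pixels].min()) < min_dist_px
```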
TY - GEN A1 - Meinikheim, Michael A1 - Mendel, Robert A1 - Probst, Andreas A1 - Scheppach, Markus W. A1 - Messmann, Helmut A1 - Palm, Christoph A1 - Ebigbo, Alanna T1 - Barrett-Ampel T2 - Zeitschrift für Gastroenterologie N2 - Background To this day, adenocarcinoma of the esophagus is associated with a dismal prognosis (1). Although endoscopists are confronted with Barrett's esophagus as a precancerous condition, differentiating between Barrett's esophagus without dysplasia and associated neoplasia can be difficult, especially for non-experts. Existing biopsy protocols (e.g., the Seattle protocol) are often unreliable (2). Early diagnosis of adenocarcinoma, however, is of fundamental importance for the patient's prognosis. Approach Against this background, we developed an artificial intelligence (AI)-based clinical decision support system (CDSS) in cooperation with the Regensburg Medical Image Computing (ReMIC) research lab at OTH Regensburg. Based on a DeepLabv3+ neural network architecture, the CDSS uses pattern recognition to differentiate Barrett's esophagus without dysplasia from Barrett's esophagus with dysplasia or neoplasia ("classification"). Averaged output probabilities are compared with a user-defined threshold. For predictions exceeding the threshold, we compute the contour and the area of the region. As soon as the predicted lesion exceeds a certain size in the input, it and its outline are highlighted. A color-coded visualization thus allows a delineation between dysplasia or neoplasia and normal Barrett's epithelium ("segmentation"). In a study on images in white light (WL) and narrow band imaging (NBI), we demonstrated a sensitivity of more than 90% and a specificity of more than 80% (3). In a next step, our AI algorithm differentiated Barrett's metaplasia from associated neoplasia on randomly captured images in real time with an accuracy of 89.9% (4). Subsequently, we extended our system so that the algorithm is now also able to analyze examination videos in WL, NBI, and texture and color enhancement imaging (TXI) in real time (5). We are currently conducting a study on unaltered examination videos in WL, NBI, and TXI in a randomized controlled design. Outlook To enable the earliest possible referral of patients with neoplasia arising from Barrett's metaplasia to high-volume centers, our AI algorithm is intended, above all, to support endoscopists without extensive experience in the assessment of Barrett's esophagus in early cancer detection. KW - Barrett-Ösophagus KW - Adenokarzinom KW - Künstliche Intelligenz KW - Speiseröhrenkrebs KW - Diagnose Y1 - 2022 U6 - https://doi.org/10.1055/s-0042-1755109 VL - 60 IS - 08 PB - Georg Thieme Verlag CY - Stuttgart ER -
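The thresholding, contour, and area logic described in the abstract maps naturally onto standard OpenCV calls. A sketch follows; the threshold, minimum area, BGR color, and function name are placeholders, not the authors' implementation.

```python
# Sketch: highlight predicted lesions above a user-defined probability threshold.
import cv2
import numpy as np

def highlight_lesions(prob_map, frame, threshold=0.5, min_area_px=500):
    """Trace the outline of sufficiently large high-probability regions."""
    binary = (prob_map > threshold).astype(np.uint8)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    out = frame.copy()                                 # frame: BGR color image
    for c in contours:
        if cv2.contourArea(c) >= min_area_px:          # suppress tiny regions
            cv2.drawContours(out, [c], -1, (0, 0, 255), 2)  # trace the outline
    return out
```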
TY - GEN A1 - Scheppach, Markus W. A1 - Mendel, Robert A1 - Probst, Andreas A1 - Meinikheim, Michael A1 - Palm, Christoph A1 - Messmann, Helmut A1 - Ebigbo, Alanna T1 - Intraprozedurale Strukturerkennung bei Third-Space Endoskopie mithilfe eines Deep-Learning Algorithmus T2 - Zeitschrift für Gastroenterologie N2 - Introduction Third-space interventions such as endoscopic submucosal dissection (ESD) and peroral endoscopic myotomy (POEM) are technically demanding and associated with an increased risk of intraprocedural complications such as bleeding or perforation. Modern computer programs for supporting diagnostic decisions, built with artificial intelligence (AI), are already being used successfully in endoscopy. The aim of the present work was to detect and segment relevant anatomical structures using a deep learning algorithm in order to increase the safety and applicability of ESD and POEM. Methods Twelve full-length video recordings of third-space endoscopies were extracted from the database of the University Hospital Augsburg. 1686 individual frames were annotated and segmented for the categories submucosa, blood vessel, dissection knife, and endoscopic instrument. With this dataset, a DeepLabv3+ neural network based on a 101-layer ResNet was trained and validated internally using the parameters intersection over union (IoU), Dice score, and pixel accuracy. The algorithm's ability to detect vessels was evaluated on 24 video clips with durations of 7 to 46 seconds containing 33 predefined vessels. This test was also used to determine the vessel detection rate of an expert in third-space endoscopy. Results The algorithm showed a vessel detection rate of 93.94%, with a mean rate of 1.87 false-positive signals per minute. The expert's vessel detection rate was 90.1%, without false-positive findings. In the internal validation on individual frames, an IoU of 63.47%, a mean Dice score of 76.18%, and a pixel accuracy of 86.61% were determined. Conclusion This is the first AI algorithm developed for use in therapeutic endoscopy. Preliminary results indicate vessel detection during the examination comparable to that of experts. Further studies are needed to assess the algorithm's performance in comparison with experts in more detail and to determine its potential clinical benefit. KW - Deep Learning KW - Third-Space Endoscopy Y1 - 2022 U6 - https://doi.org/10.1055/s-0042-1745652 VL - 60 IS - 04 PB - Thieme CY - Stuttgart ER -

TY - JOUR A1 - Souza Jr., Luis Antonio de A1 - Pacheco, André G.C. A1 - Passos, Leandro A. A1 - Santana, Marcos Cleison S. A1 - Mendel, Robert A1 - Ebigbo, Alanna A1 - Probst, Andreas A1 - Messmann, Helmut A1 - Palm, Christoph A1 - Papa, João Paulo T1 - DeepCraftFuse: visual and deeply-learnable features work better together for esophageal cancer detection in patients with Barrett's esophagus JF - Neural Computing and Applications N2 - Limitations in computer-assisted diagnosis include the lack of labeled data and the inability to model the relation between what experts see and what computers learn. Even though artificial intelligence and machine learning have demonstrated remarkable performance in medical image computing, their accountability and transparency must be improved to transfer this success into clinical practice. The reliability of machine learning decisions must be explained and interpreted, especially when they support medical diagnosis. While deep learning techniques are broad, so that unseen information might help them learn patterns of interest, human insights that describe objects of interest aid decision-making. This paper proposes a novel approach, DeepCraftFuse, to address the challenge of combining information provided by deep networks with visual-based features to significantly enhance the correct identification of cancerous tissues in patients affected by Barrett's esophagus (BE). We demonstrate that DeepCraftFuse outperforms state-of-the-art techniques on private and public datasets, reaching results of around 95% when distinguishing patients with BE who are positive or negative for esophageal cancer. KW - Deep Learning KW - Speiseröhrenkrebs KW - Adenocarcinom KW - Endobrachyösophagus KW - Diagnose KW - Maschinelles Lernen KW - Machine learning KW - Adenocarcinoma KW - Object detector KW - Barrett's esophagus Y1 - 2024 U6 - https://doi.org/10.1007/s00521-024-09615-z VL - 36 SP - 10445 EP - 10459 PB - Springer CY - London ER -
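A highly simplified sketch of the underlying fusion idea, concatenating deep and visual-based (handcrafted) feature vectors before a classifier, assuming scikit-learn; DeepCraftFuse itself is considerably more elaborate than this, and the function name is illustrative.

```python
# Sketch: train a classifier on concatenated deep and handcrafted features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def fused_classifier(deep_feats, visual_feats, labels):
    """deep_feats: (n, d1) network embeddings; visual_feats: (n, d2) descriptors."""
    X = np.concatenate([deep_feats, visual_feats], axis=1)  # simple fusion
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    return clf.fit(X, labels)
```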
TY - JOUR A1 - Souza, Luis A. A1 - Pacheco, André G.C. A1 - de Souza, Alberto F. A1 - Oliveira-Santos, Thiago A1 - Badue, Claudine A1 - Palm, Christoph A1 - Papa, João Paulo T1 - TransConv: a lightweight architecture based on transformers and convolutional neural networks for adenocarcinoma and Barrett's esophagus identification JF - Neural Computing and Applications N2 - Barrett's esophagus, also known as BE, is commonly associated with repeated exposure to stomach acid. If not treated properly, it may evolve into esophageal adenocarcinoma, also known as esophageal cancer. This paper proposes TransConv, a hybrid architecture that benefits from features learned by pre-trained vision transformers (ViTs) and convolutional neural networks (CNNs), followed by a shallow neural network composed of three normalizations, ReLU activations, and fully connected layers, and a SoftMax head to distinguish between BE and esophageal cancer. TransConv is designed to be lightweight to train: the weights of the ViT and CNN backbones are kept frozen during training, so only the fully connected layers on top of both backbones are learned, avoiding the burden of updating the backbone weights while still exploiting their final feature descriptions in the lightweight convolutional model. We report promising results with low computational training costs on two datasets, one public and one private. TransConv delivered balanced accuracies of around 85% and 86% on the two evaluated datasets, respectively, in a design that required only 50 epochs of model training, far fewer than in state-of-the-art studies in the same domain. Y1 - 2025 U6 - https://doi.org/10.1007/s00521-025-11299-y IS - 37 SP - 15535 EP - 15546 PB - Springer ER -

TY - INPR A1 - Rückert, Tobias A1 - Rückert, Daniel A1 - Palm, Christoph T1 - Methods and datasets for segmentation of minimally invasive surgical instruments in endoscopic images and videos: A review of the state of the art N2 - In the field of computer- and robot-assisted minimally invasive surgery, enormous progress has been made in recent years based on the recognition of surgical instruments in endoscopic images. Especially the determination of the position and type of the instruments is of great interest here. Current work involves both spatial and temporal information, with the idea that the prediction of the movement of surgical tools over time may improve the quality of the final segmentations. The provision of publicly available datasets has recently encouraged the development of new methods, mainly based on deep learning. In this review, we identify datasets used for method development and evaluation, as well as quantify their frequency of use in the literature. We further present an overview of the current state of research regarding the segmentation and tracking of minimally invasive surgical instruments in endoscopic images. The paper focuses on methods that work purely visually, without markers of any kind attached to the instruments, taking into account both single-frame segmentation approaches and those involving temporal information. A discussion of the reviewed literature is provided, highlighting existing shortcomings and emphasizing the available potential for future developments. The publications considered were identified through the platforms Google Scholar, Web of Science, and PubMed. The search terms used were "instrument segmentation", "instrument tracking", "surgical tool segmentation", and "surgical tool tracking", resulting in 408 articles published between 2015 and 2022, of which 109 were included using systematic selection criteria. Y1 - 2023 U6 - https://doi.org/10.48550/arXiv.2304.13014 ER -

TY - CHAP A1 - Gutbrod, Max A1 - Rauber, David A1 - Weber Nunes, Danilo A1 - Palm, Christoph T1 - OpenMIBOOD: Open Medical Imaging Benchmarks for Out-Of-Distribution Detection T2 - 2025 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 10.-17. June 2025, Nashville N2 - The growing reliance on Artificial Intelligence (AI) in critical domains such as healthcare demands robust mechanisms to ensure the trustworthiness of these systems, especially when faced with unexpected or anomalous inputs. This paper introduces the Open Medical Imaging Benchmarks for Out-Of-Distribution Detection (OpenMIBOOD), a comprehensive framework for evaluating out-of-distribution (OOD) detection methods specifically in medical imaging contexts. OpenMIBOOD includes three benchmarks from diverse medical domains, encompassing 14 datasets divided into covariate-shifted in-distribution, near-OOD, and far-OOD categories. We evaluate 24 post-hoc methods across these benchmarks, providing a standardized reference to advance the development and fair comparison of OOD detection methods. Results reveal that findings from broad-scale OOD benchmarks in natural image domains do not translate to medical applications, underscoring the critical need for such benchmarks in the medical field. By mitigating the risk of exposing AI models to inputs outside their training distribution, OpenMIBOOD aims to support the advancement of reliable and trustworthy AI systems in healthcare. The repository is available at https://github.com/remic-othr/OpenMIBOOD. KW - Benchmark testing KW - Reliability KW - Trustworthiness KW - out-of-distribution Y1 - 2025 UR - https://openaccess.thecvf.com/content/CVPR2025/html/Gutbrod_OpenMIBOOD_Open_Medical_Imaging_Benchmarks_for_Out-Of-Distribution_Detection_CVPR_2025_paper.html SN - 979-8-3315-4364-8 U6 - https://doi.org/10.1109/CVPR52734.2025.02410 N1 - The preprint version is also listed in this repository at: https://opus4.kobv.de/opus4-oth-regensburg/8059 SP - 25874 EP - 25886 PB - IEEE ER -
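Post-hoc OOD detectors of the kind benchmarked above typically reduce to a scalar score computed from a trained classifier's logits, with no retraining required. Two common examples, maximum softmax probability and the energy score, sketched with NumPy (score conventions vary across papers; here, higher means "more in-distribution"):

```python
# Sketch: two standard post-hoc OOD scores over raw classifier logits (n, C).
import numpy as np

def max_softmax_probability(logits):
    z = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return p.max(axis=1)                             # confidence of the top class

def energy_score(logits, temperature=1.0):
    z = logits / temperature
    m = z.max(axis=1)
    lse = m + np.log(np.exp(z - m[:, None]).sum(axis=1))  # stable logsumexp
    return temperature * lse                         # negative free energy
```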
TY - INPR A1 - Rückert, Tobias A1 - Rauber, David A1 - Maerkl, Raphaela A1 - Klausmann, Leonard A1 - Yildiran, Suemeyye R. A1 - Gutbrod, Max A1 - Nunes, Danilo Weber A1 - Moreno, Alvaro Fernandez A1 - Luengo, Imanol A1 - Stoyanov, Danail A1 - Toussaint, Nicolas A1 - Cho, Enki A1 - Kim, Hyeon Bae A1 - Choo, Oh Sung A1 - Kim, Ka Young A1 - Kim, Seong Tae A1 - Arantes, Gonçalo A1 - Song, Kehan A1 - Zhu, Jianjun A1 - Xiong, Junchen A1 - Lin, Tingyi A1 - Kikuchi, Shunsuke A1 - Matsuzaki, Hiroki A1 - Kouno, Atsushi A1 - Manesco, João Renato Ribeiro A1 - Papa, João Paulo A1 - Choi, Tae-Min A1 - Jeong, Tae Kyeong A1 - Park, Juyoun A1 - Alabi, Oluwatosin A1 - Wei, Meng A1 - Vercauteren, Tom A1 - Wu, Runzhi A1 - Xu, Mengya A1 - Wang, An A1 - Bai, Long A1 - Ren, Hongliang A1 - Yamlahi, Amine A1 - Hennighausen, Jakob A1 - Maier-Hein, Lena A1 - Kondo, Satoshi A1 - Kasai, Satoshi A1 - Hirasawa, Kousuke A1 - Yang, Shu A1 - Wang, Yihui A1 - Chen, Hao A1 - Rodríguez, Santiago A1 - Aparicio, Nicolás A1 - Manrique, Leonardo A1 - Lyons, Juan Camilo A1 - Hosie, Olivia A1 - Ayobi, Nicolás A1 - Arbeláez, Pablo A1 - Li, Yiping A1 - Khalil, Yasmina Al A1 - Nasirihaghighi, Sahar A1 - Speidel, Stefanie A1 - Rückert, Daniel A1 - Feussner, Hubertus A1 - Wilhelm, Dirk A1 - Palm, Christoph T1 - Comparative validation of surgical phase recognition, instrument keypoint estimation, and instrument instance segmentation in endoscopy: Results of the PhaKIR 2024 challenge N2 - Reliable recognition and localization of surgical instruments in endoscopic video recordings are foundational for a wide range of applications in computer- and robot-assisted minimally invasive surgery (RAMIS), including surgical training, skill assessment, and autonomous assistance. However, robust performance under real-world conditions remains a significant challenge. Incorporating surgical context, such as the current procedural phase, has emerged as a promising strategy to improve robustness and interpretability. To address these challenges, we organized the Surgical Procedure Phase, Keypoint, and Instrument Recognition (PhaKIR) sub-challenge as part of the Endoscopic Vision (EndoVis) challenge at MICCAI 2024. We introduced a novel, multi-center dataset comprising thirteen full-length laparoscopic cholecystectomy videos collected from three distinct medical institutions, with unified annotations for three interrelated tasks: surgical phase recognition, instrument keypoint estimation, and instrument instance segmentation. Unlike existing datasets, ours enables joint investigation of instrument localization and procedural context within the same data while supporting the integration of temporal information across entire procedures. We report results and findings in accordance with the BIAS guidelines for biomedical image analysis challenges. The PhaKIR sub-challenge advances the field by providing a unique benchmark for developing temporally aware, context-driven methods in RAMIS and offers a high-quality resource to support future research in surgical scene understanding. Y1 - 2025 N1 - The peer-reviewed article is also listed in this repository at: https://opus4.kobv.de/opus4-oth-regensburg/frontdoor/index/index/start/0/rows/10/sortfield/score/sortorder/desc/searchtype/simple/query/10.1016%2Fj.media.2026.103945/docId/8846 ER -

TY - GEN A1 - Rückert, Tobias A1 - Rückert, Daniel A1 - Palm, Christoph T1 - Corrigendum to "Methods and datasets for segmentation of minimally invasive surgical instruments in endoscopic images and videos: A review of the state of the art" [Comput. Biol. Med. 169 (2024) 107929] T2 - Computers in Biology and Medicine N2 - The authors regret that the SAR-RARP50 dataset is missing from the description of publicly available datasets presented in Chapter 4. Y1 - 2024 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:898-opus4-70337 N1 - Article available at: https://opus4.kobv.de/opus4-oth-regensburg/frontdoor/index/index/docId/6983 PB - Elsevier ER -