TY - GEN
A1 - Scheppach, Markus W.
A1 - Weber Nunes, Danilo
A1 - Arizi, X.
A1 - Rauber, David
A1 - Probst, Andreas
A1 - Nagl, Sandra
A1 - Römmele, Christoph
A1 - Palm, Christoph
A1 - Messmann, Helmut
A1 - Ebigbo, Alanna
T1 - Single frame workflow recognition during endoscopic submucosal dissection (ESD) using artificial intelligence (AI)
T2 - Endoscopy
N2 - Aims: Precise surgical phase recognition and evaluation may improve our understanding of complex endoscopic procedures, and quality control measurements and endoscopy training could benefit from objective descriptions of surgical phase distributions. We therefore aimed to develop an artificial intelligence algorithm for frame-by-frame operational phase recognition during endoscopic submucosal dissection (ESD). Methods: Full-length ESD videos from 31 patients, comprising 6,297,782 single frames, were collected retrospectively. The videos were annotated frame by frame for the operational macro-phases diagnostics, marking, injection, dissection, and bleeding. Additional subphases were the application of electrical current, visible injection of fluid into the submucosal space, and scope manipulation, yielding 11 phases in total. 4,975,699 frames (21 patients) were used to train a video Swin Transformer that incorporated temporal information through uniform frame sampling. Hyperparameter tuning was performed on a further 897,325 frames (6 patients), while 424,758 frames (4 patients) were used for validation. Results: The overall F1 scores on the test dataset were 0.96 for the macro-phases and 0.90 for all 11 phases. The recall values for diagnostics, marking, injection, dissection, and bleeding were 1.00, 1.00, 0.95, 0.96, and 0.93, respectively. Conclusions: The algorithm classified operational phases during ESD with high accuracy. A precise evaluation of phase distribution may allow the development of objective metrics for quality control and training.
Y1 - 2025
U6 - https://doi.org/10.1055/s-0045-1806324
VL - 57
IS - S 02
SP - S511
PB - Thieme
CY - Stuttgart
ER -
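Note: The record above describes temporal context via "uniform frame sampling" for a video Swin Transformer, but the exact sampling scheme is not specified. The following is a minimal sketch of one common interpretation, picking a fixed number of frames at evenly spaced offsets around a query frame; the function and parameter names (sample_uniform_clip, clip_len, stride) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sample_uniform_clip(num_frames: int, center: int,
                        clip_len: int = 8, stride: int = 30) -> np.ndarray:
    """Return `clip_len` frame indices spaced `stride` frames apart,
    centered on `center` and clamped to the valid range [0, num_frames - 1].

    This mirrors one common way of giving a single-frame classifier
    temporal context; the scheme used in the paper may differ.
    """
    offsets = (np.arange(clip_len) - clip_len // 2) * stride
    return np.clip(center + offsets, 0, num_frames - 1)

# Example: temporal context for frame 1000 of a long procedure video
print(sample_uniform_clip(num_frames=180000, center=1000))
```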
TY - GEN
A1 - Zellmer, Stephan
A1 - Rauber, David
A1 - Probst, Andreas
A1 - Weber, Tobias
A1 - Braun, Georg
A1 - Nagl, Sandra
A1 - Römmele, Christoph
A1 - Schnoy, Elisabeth
A1 - Birzle, Lisa
A1 - Aehling, Niklas
A1 - Schulz, Dominik Andreas Helmut Otto
A1 - Palm, Christoph
A1 - Messmann, Helmut
A1 - Ebigbo, Alanna
T1 - Künstliche Intelligenz als Hilfsmittel zur Detektion der Papilla duodeni major und des papillären Ostiums während der ERCP
T2 - Zeitschrift für Gastroenterologie
N2 - Introduction: Endoscopic retrograde cholangiopancreatography (ERCP) is the gold standard in the endoscopic therapy of diseases of the pancreatobiliary tract. However, it is technically demanding, difficult to learn, and associated with a comparatively high complication rate. This feasibility study therefore examined whether a deep learning algorithm can reliably detect the papilla and the ostium, and whether such an algorithm could be a suitable assistance tool for endoscopists, particularly during training. Materials and methods: A total of 1,534 ERCP images from 134 patients were analyzed, with both the papilla duodeni major and the ostium segmented. A neural network was then trained using a deep learning algorithm and tested by five-fold cross-validation. Results: On the 1,534 labeled images, the papilla class achieved an F1 score of 0.7996, a sensitivity of 0.8488, and a specificity of 0.9822. The ostium class achieved an F1 score of 0.5198, a sensitivity of 0.5945, and a specificity of 0.9974. Across both classes, the F1 score was 0.6593, the sensitivity 0.7216, and the specificity 0.9898. Conclusion: In this feasibility study, the neural network identified the papilla duodeni major with high sensitivity and very high specificity, whereas the ostium was detected with markedly lower sensitivity. The training dataset is to be extended with videos and clinical data to improve the network's performance; in the long term, this could establish a suitable assistance system for ERCP, particularly during training.
Y1 - 2025
U6 - https://doi.org/10.1055/s-0045-1806882
VL - 63
IS - 5
SP - e295
PB - Thieme
CY - Stuttgart
ER -
TY - JOUR
A1 - Scheppach, Markus W.
A1 - Mendel, Robert
A1 - Muzalyova, Anna
A1 - Rauber, David
A1 - Probst, Andreas
A1 - Nagl, Sandra
A1 - Römmele, Christoph
A1 - Yip, Hon Chi
A1 - Lau, Louis Ho Shing
A1 - Gölder, Stefan Karl
A1 - Schmidt, Arthur
A1 - Kouladouros, Konstantinos
A1 - Abdelhafez, Mohamed
A1 - Walter, Benjamin M.
A1 - Meinikheim, Michael
A1 - Chiu, Philip Wai Yan
A1 - Palm, Christoph
A1 - Messmann, Helmut
A1 - Ebigbo, Alanna
T1 - Artificial intelligence improves submucosal vessel detection during third space endoscopy
JF - Endoscopy
N2 - Background and study aims: While artificial intelligence (AI) shows high potential for decision support in diagnostic gastrointestinal endoscopy, its role in therapeutic endoscopy remains unclear. Third space endoscopic procedures carry the risk of intraprocedural bleeding. We therefore aimed to develop an AI algorithm for intraprocedural blood vessel detection. Patients and methods: Using a test dataset of 101 standardized video clips containing 200 predefined submucosal blood vessels, 19 endoscopists were evaluated for vessel detection rate (VDR) and vessel detection time (VDT) with and without the support of an AI algorithm. Test subjects were grouped according to experience in ESD. Results: With AI support, the endoscopists' VDR increased from 56.4% [CI 54.1–58.6] to 72.4% [CI 70.3–74.4], and their VDT dropped from 6.7 sec [CI 6.2–7.1] to 5.2 sec [CI 4.8–5.7]. False-positive (FP) detections appeared in 4.5% of frames and were marked for a significantly shorter time than true positives (6.0 sec [CI 5.28–6.70] for true positives vs. 0.7 sec [CI 0.55–0.87] for FP). Conclusions: AI improved the vessel detection rate and vessel detection time of endoscopists during third space endoscopy. While these data need to be corroborated by clinical trials, AI may prove an invaluable tool for improving endoscopic interventions.
KW - Artificial Intelligence
KW - Third Space Endoscopy
Y1 - 2025
U6 - https://doi.org/10.1055/a-2534-1164
PB - Thieme
CY - Stuttgart
ER -
TY - JOUR
A1 - Maerkl, Raphaela
A1 - Rueckert, Tobias
A1 - Rauber, David
A1 - Gutbrod, Max
A1 - Weber Nunes, Danilo
A1 - Palm, Christoph
T1 - Enhancing generalization in zero-shot multi-label endoscopic instrument classification
JF - International Journal of Computer Assisted Radiology and Surgery
N2 - Purpose: Recognizing previously unseen classes is a significant challenge for neural networks owing to their limited generalization capabilities. The issue is especially acute in safety-critical domains such as medical applications, where accurate classification is essential for reliability and patient safety. Zero-shot learning methods address this challenge by utilizing additional semantic data, and their performance relies heavily on the quality of the generated embeddings. Methods: This work investigates the use of full descriptive sentences, generated by a Sentence-BERT model, as class representations, compared with simpler category-based word embeddings derived from a BERT model. Additionally, the impact of z-score normalization as a post-processing step on these embeddings is explored. The proposed approach is evaluated on a multi-label generalized zero-shot learning task focusing on the recognition of surgical instruments in endoscopic images from minimally invasive cholecystectomies. Results: The results demonstrate that combining sentence embeddings with z-score normalization significantly improves model performance. For unseen classes, the AUROC improves from 43.9% to 64.9% and the multi-label accuracy from 26.1% to 79.5%. Overall performance across both seen and unseen classes improves from 49.3% to 64.9% in AUROC and from 37.3% to 65.1% in multi-label accuracy, highlighting the effectiveness of the approach. Conclusion: These findings demonstrate that sentence embeddings and z-score normalization can substantially enhance the generalization performance of zero-shot learning models. However, as the study is based on a single dataset, future work should validate the method across diverse datasets and application domains to establish its robustness and broader applicability.
KW - Generalized zero-shot learning
KW - Sentence embeddings
KW - Z-score normalization
KW - Multi-label classification
KW - Surgical instruments
Y1 - 2025
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:898-opus4-85674
N1 - Corresponding author at OTH Regensburg: Raphaela Maerkl
VL - 20
SP - 1577
EP - 1587
PB - Springer Nature
ER -
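Note: The record above reports that z-score normalizing Sentence-BERT class embeddings improved generalized zero-shot performance. Below is a minimal sketch of that post-processing step, assuming the sentence-transformers package and the widely available all-MiniLM-L6-v2 checkpoint; the model choice, the example class sentences, and the per-dimension normalization axis are illustrative assumptions, not the authors' exact setup.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Illustrative class descriptions; the paper's actual sentences are not given here.
class_sentences = [
    "A grasper is a long, thin instrument used to hold and retract tissue.",
    "A clipper applies small metal clips to close the cystic duct and artery.",
    "A hook uses electric current to dissect tissue along the gallbladder bed.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed checkpoint
embeddings = model.encode(class_sentences)       # shape: (n_classes, dim)

# Z-score normalization across the class set, per embedding dimension:
# each dimension gets zero mean and unit variance over all classes.
mean = embeddings.mean(axis=0, keepdims=True)
std = embeddings.std(axis=0, keepdims=True) + 1e-8  # guard against zero variance
embeddings_z = (embeddings - mean) / std
```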
TY - GEN
A1 - Scheppach, Markus W.
A1 - Weber Nunes, Danilo
A1 - Rauber, David
A1 - Arizi, X.
A1 - Probst, Andreas
A1 - Nagl, Sandra
A1 - Römmele, Christoph
A1 - Ebigbo, Alanna
A1 - Palm, Christoph
A1 - Messmann, Helmut
T1 - Künstliche Intelligenz-basierte Erkennung von interventionellen Phasen bei der endoskopischen Submukosadissektion
T2 - Zeitschrift für Gastroenterologie
N2 - Introduction: Endoscopic submucosal dissection (ESD) is a complex endoscopic procedure requiring technical expertise. Objective methods for analyzing interventional workflows during ESD could be useful for quality assurance and training as well as for automated report generation. Aims: In this study, an AI algorithm for recognizing and classifying the interventional phases of ESD was developed to provide the technical basis for standardized performance assessment and automated reporting. Methods: Complete ESD video recordings from 49 patients were compiled retrospectively. The dataset comprised 6,390,151 single frames, all annotated for the following interventional phases: diagnostics, marking, injection, dissection, and hemostasis. 3,973,712 frames (28 patients) were used to train a video Swin Transformer; temporal information was incorporated through standardized extraction of frames at fixed time offsets from the analyzed frame. 2,416,439 separate frames (21 patients) were used for internal validation. Results: In the internal evaluation, the system achieved an overall F1 score of 0.88, with F1 scores of 0.99, 0.89, 0.89, 0.91, and 0.52 for diagnostics, marking, injection, dissection, and bleeding management, respectively. The sensitivities for the same phases were 1.00, 0.80, 0.94, 0.89, and 0.67; the specificities were 1.00, 1.00, 0.98, 0.88, and 0.93; and the positive predictive values were 0.98, 1.00, 0.85, 0.94, and 0.43. Conclusion: In this preliminary study, an AI algorithm showed high performance for single-frame recognition of procedural phases during ESD. The comparatively low performance for the bleeding phase was attributed to the rarity of bleeding episodes in the training dataset, which at this stage comprised only full-length videos. Future development of the algorithm will focus on reducing class imbalance through selective annotation protocols.
Y1 - 2025
U6 - https://doi.org/10.1055/s-0045-1811093
VL - 63
IS - 08
SP - e612
EP - e613
PB - Thieme
CY - Stuttgart
ER -
TY - GEN
A1 - Scheppach, Markus W.
A1 - Rauber, David
A1 - Zingler, C.
A1 - Weber Nunes, Danilo
A1 - Probst, Andreas
A1 - Römmele, Christoph
A1 - Nagl, Sandra
A1 - Ebigbo, Alanna
A1 - Palm, Christoph
A1 - Messmann, Helmut
T1 - Instrumentenerkennung während der endoskopischen Submukosadissektion mittels künstlicher Intelligenz
T2 - Zeitschrift für Gastroenterologie
N2 - Introduction: Endoscopic submucosal dissection (ESD) is a complex technique for resecting early gastrointestinal neoplasia, in which specific endoscopic instruments are used for the individual steps of the intervention. Precise, automatic detection and delineation of the instruments in use (injection needles, electrosurgical knives in different configurations, hemostatic forceps) could provide valuable information on the progress and procedural characteristics of ESD and enable automated, standardized reporting. Aims: The aim of this study was to develop an AI algorithm for the detection and delineation of endoscopic instruments during ESD. Methods: 17 ESD videos (9 rectal, 5 esophageal, 3 gastric) were compiled retrospectively. On 8,530 single frames from these videos, two study assistants annotated the following classes: hook knife tip, hook knife catheter, needle knife tip and catheter, injection needle tip and catheter, and hemostatic forceps tip and catheter. The annotated dataset was used to train a DeepLabV3+ deep learning algorithm with a ConvNeXt backbone to detect and delineate these classes. Evaluation was performed by five-fold internal cross-validation. Results: Pixel-wise validation yielded an overall F1 score of 0.80, a sensitivity of 0.81, and a specificity of 1.00. F1 scores of 1.00, 0.97, 0.80, 0.98, 0.85, 0.97, 0.80, 0.51, and 0.85 were measured for the classes hook knife catheter and tip, needle knife catheter and tip, injection needle catheter and tip, and hemostatic forceps catheter and tip. Conclusion: In this study, the most important endoscopic instruments used during ESD were detected with high accuracy. The lower performance for the hemostatic forceps catheter can be attributed to the underrepresentation of this class in the training data. Future studies will focus on extending the instrument classes and on balancing the training data.
Y1 - 2025
U6 - https://doi.org/10.1055/s-0045-1811092
VL - 63
IS - 8
PB - Thieme
ER -
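Note: The instrument-segmentation record above reports pixel-wise F1, sensitivity, and specificity per class. The sketch below shows how such pixel-level metrics can be computed from integer-labeled prediction and ground-truth masks; the class indices, mask shapes, and toy data are illustrative assumptions, and the paper's exact evaluation protocol (e.g., how the five folds are aggregated) is not specified in the record.

```python
import numpy as np

def pixel_metrics(pred: np.ndarray, gt: np.ndarray, cls: int) -> dict:
    """Pixel-wise F1, sensitivity, and specificity for one class index,
    treating that class as positive and all other labels as negative."""
    tp = np.sum((pred == cls) & (gt == cls))
    fp = np.sum((pred == cls) & (gt != cls))
    fn = np.sum((pred != cls) & (gt == cls))
    tn = np.sum((pred != cls) & (gt != cls))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0  # sensitivity
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    return {"f1": f1, "sensitivity": recall, "specificity": specificity}

# Toy example with 9 labels: 0 = background, 1..8 = instrument part classes (assumed).
rng = np.random.default_rng(0)
gt = rng.integers(0, 9, size=(512, 512))
pred = np.where(rng.random((512, 512)) < 0.9, gt, rng.integers(0, 9, size=(512, 512)))
print(pixel_metrics(pred, gt, cls=1))
```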
TY - CHAP
A1 - Klausmann, Leonard
A1 - Rueckert, Tobias
A1 - Rauber, David
A1 - Maerkl, Raphaela
A1 - Yildiran, Suemeyye R.
A1 - Gutbrod, Max
A1 - Palm, Christoph
T1 - DIY challenge blueprint: from organization to technical realization in biomedical image analysis
T2 - Medical Image Computing and Computer Assisted Intervention - MICCAI 2025 ; Proceedings Part XI
N2 - Biomedical image analysis challenges have become the de facto standard for publishing new datasets and benchmarking state-of-the-art algorithms. Most challenges use commercial cloud-based platforms, which can limit customization and bring disadvantages such as reduced data control and additional costs for extended functionality. Do-It-Yourself (DIY) approaches, in contrast, can emphasize reliability, compliance, and custom features, providing a solid basis for low-cost, custom designs in self-hosted systems. Our approach emphasizes cost efficiency, improved data sovereignty, and strong compliance with regulatory frameworks such as the GDPR. This paper presents a blueprint for DIY biomedical imaging challenges, designed to give institutions greater autonomy over their challenge infrastructure. The approach addresses both organizational and technical dimensions, including key user roles, data management strategies, and secure, efficient workflows. Key technical contributions include a modular, containerized infrastructure based on Docker, the integration of open-source identity management, and automated solution evaluation workflows. Practical deployment guidelines are provided to facilitate implementation and operational stability. The feasibility and adaptability of the proposed framework are demonstrated through the MICCAI 2024 PhaKIR challenge, in which multiple international teams submitted and validated their solutions through our self-hosted platform. This work can serve as a baseline for future self-hosted DIY implementations, and our results encourage further studies in the area of biomedical image analysis challenges.
KW - Biomedical challenges
KW - Image analysis
KW - Blueprint
KW - Do-It-Yourself
KW - Self-hosting
Y1 - 2025
SN - 978-3-032-05141-7
U6 - https://doi.org/10.1007/978-3-032-05141-7_9
SP - 85
EP - 95
PB - Springer
CY - Cham
ER -
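Note: The blueprint record above describes a Docker-based, self-hosted infrastructure with automated solution evaluation. As one hedged illustration of that idea (not the challenge's actual code, which is not included in the record), the sketch below runs a participant image against a read-only test-data mount using the docker Python SDK; the image tag, mount paths, and resource limits are all assumptions.

```python
import docker  # pip install docker

def evaluate_submission(image_tag: str, data_dir: str, out_dir: str) -> str:
    """Run a participant container with the hidden test set mounted read-only.

    Network access is disabled and memory is capped so submissions cannot
    exfiltrate data or exhaust the host, concerns a self-hosted (DIY)
    platform has to address itself.
    """
    client = docker.from_env()
    logs = client.containers.run(
        image_tag,
        volumes={
            data_dir: {"bind": "/input", "mode": "ro"},   # hidden test data, read-only
            out_dir: {"bind": "/output", "mode": "rw"},   # predictions written here
        },
        network_disabled=True,  # no data exfiltration
        mem_limit="8g",         # assumed resource cap
        remove=True,
    )
    return logs.decode() if isinstance(logs, bytes) else str(logs)

# Hypothetical usage:
# print(evaluate_submission("team42/phakir-solution:latest", "/srv/testdata", "/srv/results"))
```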
TY - INPR
A1 - Gutbrod, Max
A1 - Rauber, David
A1 - Weber Nunes, Danilo
A1 - Palm, Christoph
T1 - OpenMIBOOD: Open Medical Imaging Benchmarks for Out-Of-Distribution Detection
N2 - The growing reliance on Artificial Intelligence (AI) in critical domains such as healthcare demands robust mechanisms to ensure the trustworthiness of these systems, especially when faced with unexpected or anomalous inputs. This paper introduces the Open Medical Imaging Benchmarks for Out-Of-Distribution Detection (OpenMIBOOD), a comprehensive framework for evaluating out-of-distribution (OOD) detection methods specifically in medical imaging contexts. OpenMIBOOD includes three benchmarks from diverse medical domains, encompassing 14 datasets divided into covariate-shifted in-distribution, near-OOD, and far-OOD categories. We evaluate 24 post-hoc methods across these benchmarks, providing a standardized reference to advance the development and fair comparison of OOD detection methods. Results reveal that findings from broad-scale OOD benchmarks in natural image domains do not translate to medical applications, underscoring the critical need for such benchmarks in the medical field. By mitigating the risk of exposing AI models to inputs outside their training distribution, OpenMIBOOD aims to support the advancement of reliable and trustworthy AI systems in healthcare. The repository is available at https://github.com/remic-othr/OpenMIBOOD.
Y1 - 2025
U6 - https://doi.org/10.48550/arXiv.2503.16247
N1 - The peer-reviewed version of this article is also listed in this repository at: https://opus4.kobv.de/opus4-oth-regensburg/8467
ER -
TY - CHAP
A1 - Weber Nunes, Danilo
A1 - Rauber, David
A1 - Palm, Christoph
ED - Palm, Christoph
ED - Breininger, Katharina
ED - Deserno, Thomas M.
ED - Handels, Heinz
ED - Maier, Andreas
ED - Maier-Hein, Klaus H.
ED - Tolxdorff, Thomas
T1 - Self-supervised 3D Vision Transformer Pre-training for Robust Brain Tumor Classification
T2 - Bildverarbeitung für die Medizin 2025: Proceedings, German Conference on Medical Image Computing, Regensburg March 09-11, 2025
N2 - Brain tumors pose significant challenges in neurology, making precise classification crucial for prognosis and treatment planning. This work investigates the effectiveness of a self-supervised learning approach, masked autoencoding (MAE), for pre-training a vision transformer (ViT) model for brain tumor classification. Our method uses non-domain-specific data for pre-training, leveraging the ADNI and OASIS-3 MRI datasets, which primarily focus on degenerative diseases. The model is subsequently fine-tuned and evaluated on the BraTS glioma and meningioma datasets, representing a novel use of these datasets for tumor classification. The pre-trained MAE ViT model achieves an average F1 score of 0.91 in a 5-fold cross-validation setting, outperforming the nnU-Net encoder trained from scratch, particularly under limited data conditions. These findings highlight the potential of self-supervised MAE to enhance brain tumor classification accuracy, even with restricted labeled data.
Y1 - 2025
U6 - https://doi.org/10.1007/978-3-658-47422-5_69
SP - 298
EP - 303
PB - Springer Vieweg
CY - Wiesbaden
ER -
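Note: The record above pre-trains a vision transformer with masked autoencoding (MAE). The core data-side operation is random patch masking: keep a small visible subset of patch tokens and train the model to reconstruct the rest. A minimal 2D sketch of that masking step follows (the paper works on 3D MRI volumes; the patch count, mask ratio, and tensor shapes here are illustrative assumptions).

```python
import torch

def random_masking(tokens: torch.Tensor, mask_ratio: float = 0.75):
    """Randomly drop a fraction of patch tokens, MAE-style.

    tokens: (batch, num_patches, dim). Returns the visible tokens,
    the binary mask (1 = masked), and indices to restore patch order.
    """
    b, n, d = tokens.shape
    n_keep = int(n * (1 - mask_ratio))
    noise = torch.rand(b, n)                      # one score per patch
    ids_shuffle = noise.argsort(dim=1)            # low score = keep
    ids_restore = ids_shuffle.argsort(dim=1)
    ids_keep = ids_shuffle[:, :n_keep]
    visible = torch.gather(tokens, 1, ids_keep.unsqueeze(-1).expand(-1, -1, d))
    mask = torch.ones(b, n)
    mask[:, :n_keep] = 0
    mask = torch.gather(mask, 1, ids_restore)     # back to original patch order
    return visible, mask, ids_restore

tokens = torch.randn(2, 196, 768)  # e.g., 14x14 patches of a 224x224 image
visible, mask, _ = random_masking(tokens)
print(visible.shape, mask.sum(dim=1))  # (2, 49, 768); 147 patches masked per sample
```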
TY - CHAP
A1 - Gutbrod, Max
A1 - Rauber, David
A1 - Weber Nunes, Danilo
A1 - Palm, Christoph
T1 - OpenMIBOOD: Open Medical Imaging Benchmarks for Out-Of-Distribution Detection
T2 - 2025 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 10-17, 2025, Nashville
N2 - The growing reliance on Artificial Intelligence (AI) in critical domains such as healthcare demands robust mechanisms to ensure the trustworthiness of these systems, especially when faced with unexpected or anomalous inputs. This paper introduces the Open Medical Imaging Benchmarks for Out-Of-Distribution Detection (OpenMIBOOD), a comprehensive framework for evaluating out-of-distribution (OOD) detection methods specifically in medical imaging contexts. OpenMIBOOD includes three benchmarks from diverse medical domains, encompassing 14 datasets divided into covariate-shifted in-distribution, near-OOD, and far-OOD categories. We evaluate 24 post-hoc methods across these benchmarks, providing a standardized reference to advance the development and fair comparison of OOD detection methods. Results reveal that findings from broad-scale OOD benchmarks in natural image domains do not translate to medical applications, underscoring the critical need for such benchmarks in the medical field. By mitigating the risk of exposing AI models to inputs outside their training distribution, OpenMIBOOD aims to support the advancement of reliable and trustworthy AI systems in healthcare. The repository is available at https://github.com/remic-othr/OpenMIBOOD.
KW - Benchmark testing
KW - Reliability
KW - Trustworthiness
KW - out-of-distribution
Y1 - 2025
UR - https://openaccess.thecvf.com/content/CVPR2025/html/Gutbrod_OpenMIBOOD_Open_Medical_Imaging_Benchmarks_for_Out-Of-Distribution_Detection_CVPR_2025_paper.html
SN - 979-8-3315-4364-8
U6 - https://doi.org/10.1109/CVPR52734.2025.02410
N1 - The preprint version is also listed in this repository at: https://opus4.kobv.de/opus4-oth-regensburg/8059
SP - 25874
EP - 25886
PB - IEEE
ER -
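Note: The record above benchmarks 24 post-hoc OOD detection methods, which share one property: they score a frozen classifier's outputs without retraining. As a hedged illustration (the 24 specific methods are not listed in the record), the sketch below computes two classic post-hoc scores from raw logits, maximum softmax probability (Hendrycks & Gimpel) and the energy score (Liu et al.), where higher values indicate more in-distribution-like inputs.

```python
import torch
import torch.nn.functional as F

def msp_score(logits: torch.Tensor) -> torch.Tensor:
    """Maximum softmax probability: higher = more in-distribution."""
    return F.softmax(logits, dim=-1).max(dim=-1).values

def energy_score(logits: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
    """Negative free energy: higher = more in-distribution."""
    return temperature * torch.logsumexp(logits / temperature, dim=-1)

# Toy logits for 4 samples over 10 classes; in practice these come from a
# frozen classifier applied to in-distribution and (near/far) OOD inputs.
logits = torch.randn(4, 10)
print(msp_score(logits), energy_score(logits))
```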
TY - INPR
A1 - Rückert, Tobias
A1 - Rauber, David
A1 - Maerkl, Raphaela
A1 - Klausmann, Leonard
A1 - Yildiran, Suemeyye R.
A1 - Gutbrod, Max
A1 - Nunes, Danilo Weber
A1 - Moreno, Alvaro Fernandez
A1 - Luengo, Imanol
A1 - Stoyanov, Danail
A1 - Toussaint, Nicolas
A1 - Cho, Enki
A1 - Kim, Hyeon Bae
A1 - Choo, Oh Sung
A1 - Kim, Ka Young
A1 - Kim, Seong Tae
A1 - Arantes, Gonçalo
A1 - Song, Kehan
A1 - Zhu, Jianjun
A1 - Xiong, Junchen
A1 - Lin, Tingyi
A1 - Kikuchi, Shunsuke
A1 - Matsuzaki, Hiroki
A1 - Kouno, Atsushi
A1 - Manesco, João Renato Ribeiro
A1 - Papa, João Paulo
A1 - Choi, Tae-Min
A1 - Jeong, Tae Kyeong
A1 - Park, Juyoun
A1 - Alabi, Oluwatosin
A1 - Wei, Meng
A1 - Vercauteren, Tom
A1 - Wu, Runzhi
A1 - Xu, Mengya
A1 - Wang, An
A1 - Bai, Long
A1 - Ren, Hongliang
A1 - Yamlahi, Amine
A1 - Hennighausen, Jakob
A1 - Maier-Hein, Lena
A1 - Kondo, Satoshi
A1 - Kasai, Satoshi
A1 - Hirasawa, Kousuke
A1 - Yang, Shu
A1 - Wang, Yihui
A1 - Chen, Hao
A1 - Rodríguez, Santiago
A1 - Aparicio, Nicolás
A1 - Manrique, Leonardo
A1 - Lyons, Juan Camilo
A1 - Hosie, Olivia
A1 - Ayobi, Nicolás
A1 - Arbeláez, Pablo
A1 - Li, Yiping
A1 - Khalil, Yasmina Al
A1 - Nasirihaghighi, Sahar
A1 - Speidel, Stefanie
A1 - Rückert, Daniel
A1 - Feussner, Hubertus
A1 - Wilhelm, Dirk
A1 - Palm, Christoph
T1 - Comparative validation of surgical phase recognition, instrument keypoint estimation, and instrument instance segmentation in endoscopy: Results of the PhaKIR 2024 challenge
N2 - Reliable recognition and localization of surgical instruments in endoscopic video recordings are foundational for a wide range of applications in computer- and robot-assisted minimally invasive surgery (RAMIS), including surgical training, skill assessment, and autonomous assistance. However, robust performance under real-world conditions remains a significant challenge. Incorporating surgical context, such as the current procedural phase, has emerged as a promising strategy to improve robustness and interpretability. To address these challenges, we organized the Surgical Procedure Phase, Keypoint, and Instrument Recognition (PhaKIR) sub-challenge as part of the Endoscopic Vision (EndoVis) challenge at MICCAI 2024. We introduced a novel, multi-center dataset comprising thirteen full-length laparoscopic cholecystectomy videos collected from three distinct medical institutions, with unified annotations for three interrelated tasks: surgical phase recognition, instrument keypoint estimation, and instrument instance segmentation. Unlike existing datasets, ours enables joint investigation of instrument localization and procedural context within the same data while supporting the integration of temporal information across entire procedures. We report results and findings in accordance with the BIAS guidelines for biomedical image analysis challenges. The PhaKIR sub-challenge advances the field by providing a unique benchmark for developing temporally aware, context-driven methods in RAMIS and offers a high-quality resource to support future research in surgical scene understanding.
Y1 - 2025
N1 - The peer-reviewed version of this article is also listed in this repository at: https://opus4.kobv.de/opus4-oth-regensburg/frontdoor/index/index/start/0/rows/10/sortfield/score/sortorder/desc/searchtype/simple/query/10.1016%2Fj.media.2026.103945/docId/8846
ER -
TY - GEN
A1 - Rueckert, Tobias
A1 - Rauber, David
A1 - Klausmann, Leonard
A1 - Gutbrod, Max
A1 - Rueckert, Daniel
A1 - Feussner, Hubertus
A1 - Wilhelm, Dirk
A1 - Palm, Christoph
T1 - PhaKIR Dataset - Surgical Procedure Phase, Keypoint, and Instrument Recognition [Data set]
N2 - Note: A script for extracting the individual frames from the video files, while preserving the challenge-compliant directory structure and frame-to-mask naming conventions, is available on GitHub: https://github.com/remic-othr/PhaKIR_Dataset. The dataset is described in the following publications: Rueckert, Tobias et al.: Comparative validation of surgical phase recognition, instrument keypoint estimation, and instrument instance segmentation in endoscopy: Results of the PhaKIR 2024 challenge. arXiv preprint, https://arxiv.org/abs/2507.16559, 2025. Rueckert, Tobias et al.: Video Dataset for Surgical Phase, Keypoint, and Instrument Recognition in Laparoscopic Surgery (PhaKIR). arXiv preprint, https://arxiv.org/abs/2511.06549, 2025. The dataset was used as the training dataset in the PhaKIR challenge (https://phakir.re-mic.de/), part of EndoVis 2024 at MICCAI 2024, and consists of eight real-world videos of human cholecystectomies ranging from 23 to 60 minutes in duration. The procedures were performed by experienced physicians, and the videos were recorded in three hospitals. Beyond existing datasets, the annotations provide pixel-wise instance segmentation masks of surgical instruments for a total of 19 categories and coordinates of relevant instrument keypoints (instrument tip(s), shaft-tip transition, shaft), both at an interval of one frame per second, as well as per-frame labels for eight different intervention phase categories. The dataset thus comprehensively covers both instrument localization and the context of the operation. Furthermore, the provision of the complete video sequences makes it possible to exploit temporal information for the respective tasks and thereby further improve the resulting methods and outcomes.
Y1 - 2025
U6 - https://doi.org/10.5281/zenodo.15740620
ER -
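Note: The dataset record above provides segmentation masks and keypoints at one frame per second and points to an official extraction script on GitHub. Independently of that script, a hedged OpenCV sketch of sampling a surgical video at 1 fps might look as follows; the output naming scheme is an assumption and does not reproduce the challenge-compliant layout the official script guarantees.

```python
import cv2
from pathlib import Path

def extract_one_fps(video_path: str, out_dir: str) -> int:
    """Save one frame per second of video as PNG; returns the number saved."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0  # fall back if metadata is missing
    step = round(fps)
    saved, idx = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:  # one frame per second of footage
            cv2.imwrite(str(out / f"frame_{saved:06d}.png"), frame)
            saved += 1
        idx += 1
    cap.release()
    return saved

# Hypothetical usage: extract_one_fps("Video_01.mp4", "frames/Video_01")
```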
TY - GEN
A1 - Gutbrod, Max
A1 - Rauber, David
A1 - Weber Nunes, Danilo
A1 - Palm, Christoph
T1 - A cleaned subset of the first five CATARACTS test videos [Data set]
N2 - This dataset is a subset of the original CATARACTS test dataset and is used by the OpenMIBOOD framework to evaluate a specific out-of-distribution setting. When using this dataset, it is mandatory to cite the corresponding publication (OpenMIBOOD, 10.1109/CVPR52734.2025.02410) and to follow the acknowledgement and citation requirements of the original CATARACTS dataset. The original CATARACTS dataset consists of 50 videos of cataract surgeries, split into 25 training and 25 test videos. This subset contains the frames of the first 5 test videos; in addition, black frames at the beginning of each video were removed.
Y1 - 2025
U6 - https://doi.org/10.5281/zenodo.14924735
N1 - Related works: derived from dataset DOI 10.21227/ac97-8m18; software repository: https://github.com/remic-othr/OpenMIBOOD
ER -
TY - GEN
A1 - Gutbrod, Max
A1 - Rauber, David
A1 - Weber Nunes, Danilo
A1 - Palm, Christoph
T1 - Cropped single instrument frames subset from Cholec80 [Data set]
N2 - This dataset is a subset of the original Cholec80 dataset and is used by the OpenMIBOOD framework to evaluate a specific out-of-distribution setting. When using this dataset, it is mandatory to cite the corresponding publication (OpenMIBOOD) and to follow the acknowledgement and citation requirements of the original Cholec80 dataset. The original Cholec80 dataset consists of 80 cholecystectomy surgery videos recorded at 25 fps, performed by 13 surgeons. It includes phase annotations (25 fps) and tool presence labels (1 fps), with phase definitions provided by a senior surgeon. A tool is considered present if at least half of its tip is visible. The dataset categorizes tools into seven types: Grasper, Bipolar, Hook, Scissors, Clipper, Irrigator, and Specimen bag; multiple tools may be present in a frame. Additionally, 76 of the 80 videos exhibit a strong black vignette. For this subset, frames were extracted based on the tool presence labels, selecting only those containing a Grasper, Bipolar, Hook, or Clipper and ensuring that only a single tool appears per frame. To enhance visual consistency, the black vignette was removed by extracting an inner rectangular region, where applicable.
KW - Tool Presence Detection
KW - Cholecystectomy
KW - Laparoscopic
KW - Deep Learning
KW - Out-Of-Distribution Detection
Y1 - 2025
U6 - https://doi.org/10.5281/zenodo.14921670
N1 - Related works: derived from journal article DOI 10.1109/TMI.2016.2593957; software repository: https://github.com/remic-othr/OpenMIBOOD
ER -
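Note: The Cholec80 subset record above mentions removing the strong black vignette by extracting an inner rectangular region. One hedged way to find such a region automatically, thresholding away near-black borders and cropping to the bounding box of the bright content, is sketched below; the threshold and margin values are assumptions, not the values used to build the dataset.

```python
import cv2
import numpy as np

def crop_vignette(frame: np.ndarray, thresh: int = 10, margin: int = 4) -> np.ndarray:
    """Crop a frame to the bounding box of its non-black content.

    Pixels darker than `thresh` (in grayscale) are treated as vignette;
    `margin` extra pixels are trimmed to cut the soft vignette edge.
    """
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    ys, xs = np.where(gray > thresh)
    if ys.size == 0:            # fully black frame: nothing to crop
        return frame
    y0, y1 = ys.min() + margin, ys.max() - margin
    x0, x1 = xs.min() + margin, xs.max() - margin
    return frame[y0 : y1 + 1, x0 : x1 + 1]

# Hypothetical usage on a single extracted frame:
# frame = cv2.imread("cholec80_frame.png")
# inner = crop_vignette(frame)
```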