TY - CHAP
A1 - Thelen, Simon
A1 - Volbert, Klaus
A1 - Nunes, Danilo Weber
T1 - A Survey on Algorithmic Problems in Wireless Systems
T2 - Proceedings of the 12th International Conference on Sensor Networks (SENSORNETS), Vol 1, Feb 23, 2023 - Feb 24, 2023, Lisbon, Portugal
N2 - Considering the ongoing growth of Wireless Sensor Networks (WSNs) and the challenges they pose due to their hardware limitations as well as the intrinsic complexity of their interactions, specialized algorithms have the potential to help solve these challenges. We present a survey on recent developments regarding algorithmic problems with applications in wireless systems and WSNs in particular. Focusing on the intersection between WSNs and algorithms, we give an overview of recent results in this area, concerning topics such as routing, interference minimization, latency reduction and localization, among others. Progress on solving these problems could benefit the industry as a whole by increasing network throughput, reducing latency or making systems more energy-efficient. We summarize and structure these recent developments and list interesting open problems to be investigated in future work.
KW - Algorithms
KW - WSNs
KW - Survey
KW - Network Construction
KW - Routing
KW - Interference
KW - Localization
KW - Charging
KW - Latency
Y1 - 2023
SN - 978-989-758-635-4
U6 - https://doi.org/10.5220/0011791200003399
SP - 101
EP - 111
PB - SCITEPRESS - Science and Technology Publications
ER -

TY - CHAP
A1 - Nunes, Danilo Weber
A1 - Volbert, Klaus
T1 - A Wireless Low-power System for Digital Identification of Examinees (Including Covid-19 Checks)
T2 - Proceedings of the 11th International Conference on Sensor Networks (SENSORNETS), Feb 7, 2022 - Feb 8, 2022
N2 - Indoor localization has been, for the past decade, a subject under intense development. There is, however, no currently available solution that covers all possible scenarios. Received Signal Strength Indicator (RSSI) based methods, although the most widely researched, still suffer from problems due to environment noise. In this paper, we present a system using Bluetooth Low Energy (BLE) beacons attached to the desks to localize students in exam rooms and, at the same time, automatically register them for the given exam. By using Kalman Filters (KFs) and discretizing the localization task, the presented solution is capable of achieving 100% accuracy within a distance of 45 cm from the center of the desk. As the pandemic gets more controlled, with our lives slowly transitioning back to normal, there are still sanitary measures being applied. One example is the necessity to show a certificate of vaccination or previous disease. Those certificates need to be manually checked for everyone entering the university's building, which requires time and staff. With that in mind, an automatic check of Covid certificates is also built into our system.
KW - Indoor Navigation
KW - Indoor Localisation
KW - Low-power Devices
KW - Internet of Things
KW - RSSI
KW - BLE Beacons
Y1 - 2022
SN - 978-989-758-551-7
U6 - https://doi.org/10.5220/0010912800003118
SP - 51
EP - 59
PB - SCITEPRESS - Science and Technology Publications
ER -

TY - CHAP
A1 - Thelen, Simon
A1 - Eder, Friedrich
A1 - Melzer, Matthias
A1 - Nunes, Danilo Weber
A1 - Stadler, Michael
A1 - Rechenauer, Christian
A1 - Obergrießer, Mathias
A1 - Jubeh, Ruben
A1 - Volbert, Klaus
A1 - Dünnweber, Jan
T1 - A Slim Digital Twin For A Smart City And Its Residents
T2 - SOICT '23: Proceedings of the 12th International Symposium on Information and Communication Technology, 2023, Ho Chi Minh City, Vietnam
N2 - In the engineering domain, representing real-world objects using a body of data, called a digital twin, which is frequently updated by "live" measurements, has shown various advantages over traditional modelling and simulation techniques. Consequently, urban planners have a strong interest in digital twin technology, since it provides them with a laboratory for experimenting with data before making far-reaching decisions. Realizing these decisions involves the work of professionals in the architecture, engineering and construction (AEC) domain who nowadays collaborate via the methodology of building information modeling (BIM). At the same time, the citizen plays an integral role in the data acquisition phase, while also being a beneficiary of the improved resource management strategies. In this paper, we present a prototype for a "digital energy twin" platform we designed in cooperation with the city of Regensburg. We show how our extensible platform design can satisfy the various requirements of multiple user groups through a series of data processing solutions and visualizations, indicating valuable design and implementation guidelines for future projects. In particular, we focus on two example use cases concerning building electricity monitoring and BIM. By implementing a flexible data processing architecture we can involve citizens in the data acquisition process, meeting the demands of modern users regarding maximum transparency in the handling of their data.
KW - smart city
KW - AI
KW - digital twin
KW - artificial intelligence
KW - urban planning
KW - BIM
KW - portal system
Y1 - 2023
SN - 979-8-4007-0891-6
U6 - https://doi.org/10.1145/3628797.3628936
SP - 8
EP - 15
PB - ACM
ER -

TY - GEN
A1 - Scheppach, Markus W.
A1 - Nunes, Danilo Weber
A1 - Arizi, X.
A1 - Rauber, David
A1 - Probst, Andreas
A1 - Nagl, Sandra
A1 - Römmele, Christoph
A1 - Meinikheim, Michael
A1 - Palm, Christoph
A1 - Messmann, Helmut
A1 - Ebigbo, Alanna
T1 - Procedural phase recognition in endoscopic submucosal dissection (ESD) using artificial intelligence (AI)
T2 - Endoscopy
N2 - Aims: Recent evidence suggests the possibility of intraprocedural phase recognition in surgical operations as well as endoscopic interventions such as peroral endoscopic myotomy and endoscopic submucosal dissection (ESD) by AI algorithms. The intricate measurement of intraprocedural phase distribution may deepen the understanding of the procedure. Furthermore, real-time quality assessment as well as automation of reporting may become possible. Therefore, we aimed to develop an AI algorithm for intraprocedural phase recognition during ESD. Methods: A training dataset of 364,385 single images from 9 full-length ESD videos was compiled. Each frame was classified into one procedural phase.
Phases included scope manipulation, marking, injection, application of electrical current and bleeding. Allocation of each frame was only possible to one category. This training dataset was used to train a Video Swin transformer to recognize the phases. Temporal information was included via logarithmic frame sampling. Validation was performed using two separate ESD videos with 29,801 single frames. Results: The validation yielded sensitivities of 97.81%, 97.83%, 95.53%, 85.01% and 87.55% for scope manipulation, marking, injection, electric application and bleeding, respectively. Specificities of 77.78%, 90.91%, 95.91%, 93.65% and 84.76% were measured for the same parameters. Conclusions: The developed algorithm was able to classify full-length ESD videos on a frame-by-frame basis into the predefined classes with high sensitivities and specificities. Future research will aim at the development of quality metrics based on single-operator phase distribution.
Y1 - 2024
U6 - https://doi.org/10.1055/s-0044-1783804
VL - 56
IS - S 02
SP - S439
PB - Thieme
CY - Stuttgart
ER -

TY - GEN
A1 - Scheppach, Markus W.
A1 - Weber Nunes, Danilo
A1 - Arizi, X.
A1 - Rauber, David
A1 - Probst, Andreas
A1 - Nagl, Sandra
A1 - Römmele, Christoph
A1 - Palm, Christoph
A1 - Messmann, Helmut
A1 - Ebigbo, Alanna
T1 - Single frame workflow recognition during endoscopic submucosal dissection (ESD) using artificial intelligence (AI)
T2 - Endoscopy
N2 - Aims: Precise surgical phase recognition and evaluation may improve our understanding of complex endoscopic procedures. Furthermore, quality control measurements and endoscopy training could benefit from objective descriptions of surgical phase distributions. Therefore, we aimed to develop an artificial intelligence algorithm for frame-by-frame operational phase recognition during endoscopic submucosal dissection (ESD). Methods: Full-length ESD videos from 31 patients comprising 6,297,782 single images were collected retrospectively. Videos were annotated on a frame-by-frame basis for the operational macro-phases diagnostics, marking, injection, dissection and bleeding. Further subphases were the application of electrical current, visible injection of fluid into the submucosal space and scope manipulation, leading to 11 phases in total. 4,975,699 frames (21 patients) were used for training of a Video Swin transformer using uniform frame sampling for temporal information. Hyperparameter tuning was performed with 897,325 further frames (6 patients), while 424,758 frames (4 patients) were used for validation. Results: The overall F1 scores on the test dataset for the macro-phases and all 11 phases were 0.96 and 0.90, respectively. The recall values for diagnostics, marking, injection, dissection and bleeding were 1.00, 1.00, 0.95, 0.96 and 0.93, respectively. Conclusions: The algorithm classified operational phases during ESD with high accuracy. A precise evaluation of phase distribution may allow for the development of objective quality metrics for quality control and training.
Y1 - 2025
U6 - https://doi.org/10.1055/s-0045-1806324
VL - 57
IS - S 02
SP - S511
PB - Thieme
CY - Stuttgart
ER -

TY - CHAP
A1 - Nunes, Danilo Weber
A1 - Hammer, Michael
A1 - Hammer, Simone
A1 - Uller, Wibke
A1 - Palm, Christoph
T1 - Classification of Vascular Malformations Based on T2 STIR Magnetic Resonance Imaging
T2 - Bildverarbeitung für die Medizin 2022: Proceedings, German Workshop on Medical Image Computing, Heidelberg, June 26-28, 2022
N2 - Vascular malformations (VMs) are a rare condition.
Categorizing them into high-flow and low-flow VMs is a challenging task for radiologists. In this work, a very heterogeneous set of MRI images with only rough annotations is used for classification with a convolutional neural network. The main focus is to describe the challenging dataset and strategies to deal with such data in terms of preprocessing, annotation usage and choice of the network architecture. We achieved a classification result of 89.47% F1-score with a 3D ResNet 18.
KW - Deep Learning
KW - Magnetic Resonance Imaging
KW - Vascular Malformations
Y1 - 2022
U6 - https://doi.org/10.1007/978-3-658-36932-3_57
SP - 267
EP - 272
PB - Springer Vieweg
CY - Wiesbaden
ER -

TY - JOUR
A1 - Maerkl, Raphaela
A1 - Rueckert, Tobias
A1 - Rauber, David
A1 - Gutbrod, Max
A1 - Weber Nunes, Danilo
A1 - Palm, Christoph
T1 - Enhancing generalization in zero-shot multi-label endoscopic instrument classification
JF - International Journal of Computer Assisted Radiology and Surgery
N2 - Purpose: Recognizing previously unseen classes with neural networks is a significant challenge due to their limited generalization capabilities. This issue is particularly critical in safety-critical domains such as medical applications, where accurate classification is essential for reliability and patient safety. Zero-shot learning methods address this challenge by utilizing additional semantic data, with their performance relying heavily on the quality of the generated embeddings. Methods: This work investigates the use of full descriptive sentences, generated by a Sentence-BERT model, as class representations, compared to simpler category-based word embeddings derived from a BERT model. Additionally, the impact of z-score normalization as a post-processing step on these embeddings is explored. The proposed approach is evaluated on a multi-label generalized zero-shot learning task, focusing on the recognition of surgical instruments in endoscopic images from minimally invasive cholecystectomies. Results: The results demonstrate that combining sentence embeddings and z-score normalization significantly improves model performance. For unseen classes, the AUROC improves from 43.9% to 64.9%, and the multi-label accuracy from 26.1% to 79.5%. Overall performance measured across both seen and unseen classes improves from 49.3% to 64.9% in AUROC and from 37.3% to 65.1% in multi-label accuracy, highlighting the effectiveness of our approach. Conclusion: These findings demonstrate that sentence embeddings and z-score normalization can substantially enhance the generalization performance of zero-shot learning models. However, as the study is based on a single dataset, future work should validate the method across diverse datasets and application domains to establish its robustness and broader applicability.
KW - Generalized zero-shot learning
KW - Sentence embeddings
KW - Z-score normalization
KW - Multi-label classification
KW - Surgical instruments
Y1 - 2025
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:898-opus4-85674
N1 - Corresponding author at OTH Regensburg: Raphaela Maerkl
VL - 20
SP - 1577
EP - 1587
PB - Springer Nature
ER -

TY - GEN
A1 - Scheppach, Markus W.
A1 - Weber Nunes, Danilo
A1 - Rauber, David
A1 - Arizi, X.
A1 - Probst, Andreas
A1 - Nagl, Sandra
A1 - Römmele, Christoph
A1 - Ebigbo, Alanna
A1 - Palm, Christoph
A1 - Messmann, Helmut
T1 - Künstliche Intelligenz-basierte Erkennung von interventionellen Phasen bei der endoskopischen Submukosadissektion
T2 - Zeitschrift für Gastroenterologie
N2 - Introduction: Endoscopic submucosal dissection (ESD) is a complex endoscopic procedure that requires technical expertise. Objective methods for analyzing interventional workflows during ESD could be useful for quality assurance and training, as well as for automated reporting. Aims: In this study, an AI algorithm for recognizing and classifying the interventional phases of ESD was developed in order to create the technical basis for standardized performance assessment and automated reporting. Methods: Full-length ESD video recordings of 49 patients were compiled retrospectively. The dataset comprised 6,390,151 single frames, all of which were annotated for the following interventional phases: diagnostics, marking, injection, dissection and hemostasis. 3,973,712 frames (28 patients) were used to train a Video Swin transformer; temporal information was incorporated through standardized frame extraction at fixed time intervals relative to the analyzed frame. 2,416,439 separate frames (21 patients) were used for internal validation. Results: In the internal evaluation, the system achieved an overall F1 score of 0.88. F1 scores of 0.99, 0.89, 0.89, 0.91 and 0.52 were measured for diagnostics, marking, injection, dissection and bleeding management, respectively. The sensitivities for the same parameters were 1.00, 0.80, 0.94, 0.89 and 0.67; the specificities were 1.00, 1.00, 0.98, 0.88 and 0.93. Positive predictive values of 0.98, 1.00, 0.85, 0.94 and 0.43 were measured. Conclusion: In this preliminary study, an AI algorithm showed high performance for frame-by-frame recognition of procedural phases during ESD. The comparatively low performance for the bleeding phase was attributed to the rare occurrence of bleeding episodes in the training dataset, which at this point comprised only full-length videos. Future development of the algorithm will focus on reducing class imbalance through selective annotation protocols.
Y1 - 2025
U6 - https://doi.org/10.1055/s-0045-1811093
VL - 63
IS - 08
SP - e612
EP - e613
PB - Thieme
CY - Stuttgart
ER -

TY - GEN
A1 - Scheppach, Markus W.
A1 - Rauber, David
A1 - Zingler, C.
A1 - Weber Nunes, Danilo
A1 - Probst, Andreas
A1 - Römmele, Christoph
A1 - Nagl, Sandra
A1 - Ebigbo, Alanna
A1 - Palm, Christoph
A1 - Messmann, Helmut
T1 - Instrumentenerkennung während der endoskopischen Submukosadissektion mittels künstlicher Intelligenz
T2 - Zeitschrift für Gastroenterologie
N2 - Introduction: Endoscopic submucosal dissection (ESD) is a complex technique for the resection of early gastrointestinal neoplasia, in which specific endoscopic instruments are used for the different steps of the intervention. Precise, automatic recognition and delineation of the instruments used (injection needles, electrosurgical knives in different configurations, hemostatic forceps) could provide valuable information on the progress and procedural characteristics of an ESD and enable automated, standardized reporting.
Aims: The aim of this study was to develop an AI algorithm for the recognition and delineation of endoscopic instruments during ESD. Methods: 17 ESD videos (9 rectal, 5 esophageal, 3 gastric) were compiled retrospectively. On 8530 single frames from these videos, two study team members annotated the following classes: hook knife tip, hook knife catheter, needle knife tip and catheter, injection needle tip and catheter, and hemostatic forceps tip and catheter. The annotated dataset was used to train a DeepLabV3+ deep learning algorithm with a ConvNeXt backbone for the recognition and delineation of these classes. Evaluation was performed by 5-fold internal cross-validation. Results: Per-pixel validation yielded an overall F1 score of 0.80, a sensitivity of 0.81 and a specificity of 1.00. F1 scores of 1.00, 0.97, 0.80, 0.98, 0.85, 0.97, 0.80, 0.51 and 0.85 were measured for the classes hook knife catheter and tip, needle knife catheter and tip, injection needle catheter and tip, and hemostatic forceps catheter and tip. Conclusion: In this study, the most important endoscopic instruments used during ESD were recognized with high accuracy. The lower performance for the hemostatic forceps catheter can be attributed to the underrepresentation of these classes in the training data. Future studies will focus on extending the instrument classes and on balancing the training data.
Y1 - 2025
U6 - https://doi.org/10.1055/s-0045-1811092
VL - 63
IS - 8
PB - Thieme
ER -

TY - INPR
A1 - Gutbrod, Max
A1 - Rauber, David
A1 - Weber Nunes, Danilo
A1 - Palm, Christoph
T1 - OpenMIBOOD: Open Medical Imaging Benchmarks for Out-Of-Distribution Detection
N2 - The growing reliance on Artificial Intelligence (AI) in critical domains such as healthcare demands robust mechanisms to ensure the trustworthiness of these systems, especially when faced with unexpected or anomalous inputs. This paper introduces the Open Medical Imaging Benchmarks for Out-Of-Distribution Detection (OpenMIBOOD), a comprehensive framework for evaluating out-of-distribution (OOD) detection methods specifically in medical imaging contexts. OpenMIBOOD includes three benchmarks from diverse medical domains, encompassing 14 datasets divided into covariate-shifted in-distribution, near-OOD, and far-OOD categories. We evaluate 24 post-hoc methods across these benchmarks, providing a standardized reference to advance the development and fair comparison of OOD detection methods. Results reveal that findings from broad-scale OOD benchmarks in natural image domains do not translate to medical applications, underscoring the critical need for such benchmarks in the medical field. By mitigating the risk of exposing AI models to inputs outside their training distribution, OpenMIBOOD aims to support the advancement of reliable and trustworthy AI systems in healthcare. The repository is available at https://github.com/remic-othr/OpenMIBOOD.
Y1 - 2025
U6 - https://doi.org/10.48550/arXiv.2503.16247
N1 - The peer-reviewed version of this article is also indexed in this repository at: https://opus4.kobv.de/opus4-oth-regensburg/8467
ER -

TY - CHAP
A1 - Weber Nunes, Danilo
A1 - Rauber, David
A1 - Palm, Christoph
ED - Palm, Christoph
ED - Breininger, Katharina
ED - Deserno, Thomas M.
ED - Handels, Heinz
ED - Maier, Andreas
ED - Maier-Hein, Klaus H.
ED - Tolxdorff, Thomas
T1 - Self-supervised 3D Vision Transformer Pre-training for Robust Brain Tumor Classification
T2 - Bildverarbeitung für die Medizin 2025: Proceedings, German Conference on Medical Image Computing, Regensburg, March 09-11, 2025
N2 - Brain tumors pose significant challenges in neurology, making precise classification crucial for prognosis and treatment planning. This work investigates the effectiveness of a self-supervised learning approach, masked autoencoding (MAE), to pre-train a vision transformer (ViT) model for brain tumor classification. Our method uses non-domain-specific data, leveraging the ADNI and OASIS-3 MRI datasets, which primarily focus on degenerative diseases, for pre-training. The model is subsequently fine-tuned and evaluated on the BraTS glioma and meningioma datasets, representing a novel use of these datasets for tumor classification. The pre-trained MAE ViT model achieves an average F1 score of 0.91 in a 5-fold cross-validation setting, outperforming the nnU-Net encoder trained from scratch, particularly under limited data conditions. These findings highlight the potential of self-supervised MAE in enhancing brain tumor classification accuracy, even with restricted labeled data.
Y1 - 2025
U6 - https://doi.org/10.1007/978-3-658-47422-5_69
SP - 298
EP - 303
PB - Springer Vieweg
CY - Wiesbaden
ER -

TY - CHAP
A1 - Gutbrod, Max
A1 - Rauber, David
A1 - Weber Nunes, Danilo
A1 - Palm, Christoph
T1 - OpenMIBOOD: Open Medical Imaging Benchmarks for Out-Of-Distribution Detection
T2 - 2025 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 10-17, 2025, Nashville
N2 - The growing reliance on Artificial Intelligence (AI) in critical domains such as healthcare demands robust mechanisms to ensure the trustworthiness of these systems, especially when faced with unexpected or anomalous inputs. This paper introduces the Open Medical Imaging Benchmarks for Out-Of-Distribution Detection (OpenMIBOOD), a comprehensive framework for evaluating out-of-distribution (OOD) detection methods specifically in medical imaging contexts. OpenMIBOOD includes three benchmarks from diverse medical domains, encompassing 14 datasets divided into covariate-shifted in-distribution, near-OOD, and far-OOD categories. We evaluate 24 post-hoc methods across these benchmarks, providing a standardized reference to advance the development and fair comparison of OOD detection methods. Results reveal that findings from broad-scale OOD benchmarks in natural image domains do not translate to medical applications, underscoring the critical need for such benchmarks in the medical field. By mitigating the risk of exposing AI models to inputs outside their training distribution, OpenMIBOOD aims to support the advancement of reliable and trustworthy AI systems in healthcare. The repository is available at https://github.com/remic-othr/OpenMIBOOD.
KW - Benchmark testing
KW - Reliability
KW - Trustworthiness
KW - out-of-distribution
Y1 - 2025
UR - https://openaccess.thecvf.com/content/CVPR2025/html/Gutbrod_OpenMIBOOD_Open_Medical_Imaging_Benchmarks_for_Out-Of-Distribution_Detection_CVPR_2025_paper.html
SN - 979-8-3315-4364-8
U6 - https://doi.org/10.1109/CVPR52734.2025.02410
N1 - The preprint version is also indexed in this repository at: https://opus4.kobv.de/opus4-oth-regensburg/8059
SP - 25874
EP - 25886
PB - IEEE
ER -

TY - INPR
A1 - Rückert, Tobias
A1 - Rauber, David
A1 - Maerkl, Raphaela
A1 - Klausmann, Leonard
A1 - Yildiran, Suemeyye R.
A1 - Gutbrod, Max
A1 - Nunes, Danilo Weber
A1 - Moreno, Alvaro Fernandez
A1 - Luengo, Imanol
A1 - Stoyanov, Danail
A1 - Toussaint, Nicolas
A1 - Cho, Enki
A1 - Kim, Hyeon Bae
A1 - Choo, Oh Sung
A1 - Kim, Ka Young
A1 - Kim, Seong Tae
A1 - Arantes, Gonçalo
A1 - Song, Kehan
A1 - Zhu, Jianjun
A1 - Xiong, Junchen
A1 - Lin, Tingyi
A1 - Kikuchi, Shunsuke
A1 - Matsuzaki, Hiroki
A1 - Kouno, Atsushi
A1 - Manesco, João Renato Ribeiro
A1 - Papa, João Paulo
A1 - Choi, Tae-Min
A1 - Jeong, Tae Kyeong
A1 - Park, Juyoun
A1 - Alabi, Oluwatosin
A1 - Wei, Meng
A1 - Vercauteren, Tom
A1 - Wu, Runzhi
A1 - Xu, Mengya
A1 - Wang, An
A1 - Bai, Long
A1 - Ren, Hongliang
A1 - Yamlahi, Amine
A1 - Hennighausen, Jakob
A1 - Maier-Hein, Lena
A1 - Kondo, Satoshi
A1 - Kasai, Satoshi
A1 - Hirasawa, Kousuke
A1 - Yang, Shu
A1 - Wang, Yihui
A1 - Chen, Hao
A1 - Rodríguez, Santiago
A1 - Aparicio, Nicolás
A1 - Manrique, Leonardo
A1 - Lyons, Juan Camilo
A1 - Hosie, Olivia
A1 - Ayobi, Nicolás
A1 - Arbeláez, Pablo
A1 - Li, Yiping
A1 - Al Khalil, Yasmina
A1 - Nasirihaghighi, Sahar
A1 - Speidel, Stefanie
A1 - Rückert, Daniel
A1 - Feussner, Hubertus
A1 - Wilhelm, Dirk
A1 - Palm, Christoph
T1 - Comparative validation of surgical phase recognition, instrument keypoint estimation, and instrument instance segmentation in endoscopy: Results of the PhaKIR 2024 challenge
N2 - Reliable recognition and localization of surgical instruments in endoscopic video recordings are foundational for a wide range of applications in computer- and robot-assisted minimally invasive surgery (RAMIS), including surgical training, skill assessment, and autonomous assistance. However, robust performance under real-world conditions remains a significant challenge. Incorporating surgical context – such as the current procedural phase – has emerged as a promising strategy to improve robustness and interpretability. To address these challenges, we organized the Surgical Procedure Phase, Keypoint, and Instrument Recognition (PhaKIR) sub-challenge as part of the Endoscopic Vision (EndoVis) challenge at MICCAI 2024. We introduced a novel, multi-center dataset comprising thirteen full-length laparoscopic cholecystectomy videos collected from three distinct medical institutions, with unified annotations for three interrelated tasks: surgical phase recognition, instrument keypoint estimation, and instrument instance segmentation. Unlike existing datasets, ours enables joint investigation of instrument localization and procedural context within the same data while supporting the integration of temporal information across entire procedures. We report results and findings in accordance with the BIAS guidelines for biomedical image analysis challenges. The PhaKIR sub-challenge advances the field by providing a unique benchmark for developing temporally aware, context-driven methods in RAMIS and offers a high-quality resource to support future research in surgical scene understanding.
Y1 - 2025
N1 - The peer-reviewed version of this article is also indexed in this repository at: https://opus4.kobv.de/opus4-oth-regensburg/frontdoor/index/index/start/0/rows/10/sortfield/score/sortorder/desc/searchtype/simple/query/10.1016%2Fj.media.2026.103945/docId/8846
ER -

TY - JOUR
A1 - Hammer, Simone
A1 - Nunes, Danilo Weber
A1 - Hammer, Michael
A1 - Zeman, Florian
A1 - Akers, Michael
A1 - Götz, Andrea
A1 - Balla, Annika
A1 - Doppler, Michael Christian
A1 - Fellner, Claudia
A1 - Platz Batista da Silva, Natascha
A1 - Thurn, Sylvia
A1 - Verloh, Niklas
A1 - Stroszczynski, Christian
A1 - Wohlgemuth, Walter Alexander
A1 - Palm, Christoph
A1 - Uller, Wibke
T1 - Deep learning-based differentiation of peripheral high-flow and low-flow vascular malformations in T2-weighted short tau inversion recovery MRI
JF - Clinical Hemorheology and Microcirculation
N2 - Background: Differentiation of high-flow from low-flow vascular malformations (VMs) is crucial for the therapeutic management of this orphan disease. Objective: A convolutional neural network (CNN) was evaluated for the differentiation of peripheral VMs on T2-weighted short tau inversion recovery (STIR) MRI. Methods: 527 MRIs (386 low-flow and 141 high-flow VMs) were randomly divided into training, validation and test sets for this single-center study. 1) Results of the CNN's diagnostic performance were compared with those of two expert and four junior radiologists. 2) The influence of the CNN's prediction on the radiologists' performance and diagnostic certainty was evaluated. 3) The junior radiologists' performance after self-training was compared with that of the CNN. Results: Compared with the expert radiologists, the CNN achieved similar accuracy (92% vs. 97%, p = 0.11), sensitivity (80% vs. 93%, p = 0.16) and specificity (97% vs. 100%, p = 0.50). In comparison to the junior radiologists, the CNN had a higher specificity and accuracy (97% vs. 80%, p < 0.001; 92% vs. 77%, p < 0.001). CNN assistance had no significant influence on their diagnostic performance and certainty. After self-training, the junior radiologists' specificity and accuracy improved and were comparable to those of the CNN. Conclusions: The diagnostic performance of the CNN for differentiating high-flow from low-flow VMs was comparable to that of expert radiologists. The CNN did not significantly improve the simulated daily practice of junior radiologists; self-training was more effective.
KW - magnetic resonance imaging
KW - deep learning
KW - Vascular malformation
Y1 - 2024
U6 - https://doi.org/10.3233/CH-232071
SP - 1
EP - 15
PB - IOS Press
ET - Pre-press
ER -

TY - GEN
A1 - Scheppach, Markus W.
A1 - Nunes, Danilo Weber
A1 - Arizi, X.
A1 - Rauber, David
A1 - Probst, Andreas
A1 - Nagl, Sandra
A1 - Römmele, Christoph
A1 - Palm, Christoph
A1 - Messmann, Helmut
A1 - Ebigbo, Alanna
T1 - Intraoperative Phasenerkennung bei endoskopischer Submukosadissektion mit Hilfe von künstlicher Intelligenz
T2 - Zeitschrift für Gastroenterologie
N2 - Introduction: Artificial intelligence (AI) is used in gastrointestinal endoscopy for the detection and characterization of colon polyps. Its role in therapeutic interventions has not yet been investigated in depth. Intraprocedural phase recognition during endoscopic submucosal dissection (ESD) could enable the collection of quality indicators.
Furthermore, this technology could lead to a deeper understanding of the characteristics of the procedure and pave the way for further applications in automated documentation or standardized training. Aims: The aim of this study was to develop an AI algorithm for intraprocedural phase recognition during endoscopic submucosal dissection. Methods: 2,071,546 single frames from 27 full-length ESD videos were annotated for the macro classes diagnostics, marking, needle injection, dissection and bleeding, as well as the subclasses scope manipulation, injection and application of electrical current. A Video Swin transformer was trained with uniform frame sampling on a training dataset (898,440 frames, 17 ESDs) and internally validated (769,523 frames, 6 ESDs). In addition to the internal validation, the algorithm was evaluated on a separate test dataset (403,583 frames, 4 ESDs). Results: The F1 score of the algorithm across all classes was 83% in the internal validation and 90% in the separate test. In the separate test, true positive (TP) rates of 100%, 100%, 96%, 97% and 93% were determined for diagnostics, marking, needle injection, dissection and bleeding. For scope manipulation, injection and application of electricity, the TP rates were 92%, 98% and 91%. Conclusion: The developed algorithm classified full-length ESD videos frame by frame with high accuracy. Future research could develop intraoperative quality indicators based on this information and enable automated documentation.
Y1 - 2024
U6 - https://doi.org/10.1055/s-0044-1790084
VL - 62
IS - 09
SP - e828
PB - Georg Thieme Verlag KG
ER -

TY - JOUR
A1 - Rueckert, Tobias
A1 - Rauber, David
A1 - Maerkl, Raphaela
A1 - Klausmann, Leonard
A1 - Yildiran, Suemeyye R.
A1 - Gutbrod, Max
A1 - Nunes, Danilo Weber
A1 - Moreno, Alvaro Fernandez
A1 - Luengo, Imanol
A1 - Stoyanov, Danail
A1 - Toussaint, Nicolas
A1 - Cho, Enki
A1 - Kim, Hyeon Bae
A1 - Choo, Oh Sung
A1 - Kim, Ka Young
A1 - Kim, Seong Tae
A1 - Arantes, Gonçalo
A1 - Song, Kehan
A1 - Zhu, Jianjun
A1 - Xiong, Junchen
A1 - Lin, Tingyi
A1 - Kikuchi, Shunsuke
A1 - Matsuzaki, Hiroki
A1 - Kouno, Atsushi
A1 - Manesco, João Renato Ribeiro
A1 - Papa, João Paulo
A1 - Choi, Tae-Min
A1 - Jeong, Tae Kyeong
A1 - Park, Juyoun
A1 - Alabi, Oluwatosin
A1 - Wei, Meng
A1 - Vercauteren, Tom
A1 - Wu, Runzhi
A1 - Xu, Mengya
A1 - Wang, An
A1 - Bai, Long
A1 - Ren, Hongliang
A1 - Yamlahi, Amine
A1 - Hennighausen, Jakob
A1 - Maier-Hein, Lena
A1 - Kondo, Satoshi
A1 - Kasai, Satoshi
A1 - Hirasawa, Kousuke
A1 - Yang, Shu
A1 - Wang, Yihui
A1 - Chen, Hao
A1 - Rodríguez, Santiago
A1 - Aparicio, Nicolás
A1 - Manrique, Leonardo
A1 - Palm, Christoph
A1 - Wilhelm, Dirk
A1 - Feussner, Hubertus
A1 - Rueckert, Daniel
A1 - Speidel, Stefanie
A1 - Nasirihaghighi, Sahar
A1 - Al Khalil, Yasmina
A1 - Li, Yiping
A1 - Arbeláez, Pablo
A1 - Ayobi, Nicolás
A1 - Hosie, Olivia
A1 - Lyons, Juan Camilo
T1 - Comparative validation of surgical phase recognition, instrument keypoint estimation, and instrument instance segmentation in endoscopy: Results of the PhaKIR 2024 challenge
JF - Medical Image Analysis
N2 - Reliable recognition and localization of surgical instruments in endoscopic video recordings are foundational for a wide range of applications in computer- and robot-assisted minimally invasive surgery (RAMIS), including surgical training, skill assessment, and autonomous assistance. However, robust performance under real-world conditions remains a significant challenge. Incorporating surgical context – such as the current procedural phase – has emerged as a promising strategy to improve robustness and interpretability. To address these challenges, we organized the Surgical Procedure Phase, Keypoint, and Instrument Recognition (PhaKIR) sub-challenge as part of the Endoscopic Vision (EndoVis) challenge at MICCAI 2024. We introduced a novel, multi-center dataset comprising thirteen full-length laparoscopic cholecystectomy videos collected from three distinct medical institutions, with unified annotations for three interrelated tasks: surgical phase recognition, instrument keypoint estimation, and instrument instance segmentation. Unlike existing datasets, ours enables joint investigation of instrument localization and procedural context within the same data while supporting the integration of temporal information across entire procedures. We report results and findings in accordance with the BIAS guidelines for biomedical image analysis challenges. The PhaKIR sub-challenge advances the field by providing a unique benchmark for developing temporally aware, context-driven methods in RAMIS and offers a high-quality resource to support future research in surgical scene understanding.
KW - Surgical phase recognition
KW - Instrument keypoint estimation
KW - Instrument instance segmentation
KW - Robot-assisted surgery
Y1 - 2026
U6 - https://doi.org/10.1016/j.media.2026.103945
SN - 1361-8415
N1 - Corresponding author at OTH Regensburg: Tobias Rueckert. The preprint version is also indexed in this repository at: https://opus4.kobv.de/opus4-oth-regensburg/solrsearch/index/search/start/0/rows/10/sortfield/score/sortorder/desc/searchtype/simple/query/2507.16559
VL - 109
PB - Elsevier
ER -

TY - GEN
A1 - Gutbrod, Max
A1 - Rauber, David
A1 - Weber Nunes, Danilo
A1 - Palm, Christoph
T1 - A cleaned subset of the first five CATARACTS test videos [Data set]
N2 - This dataset is a subset of the original CATARACTS test dataset and is used by the OpenMIBOOD framework to evaluate a specific out-of-distribution setting. When using this dataset, it is mandatory to cite the corresponding publication (OpenMIBOOD, DOI 10.1109/CVPR52734.2025.02410) and to follow the acknowledgement and citation requirements of the original CATARACTS dataset. The original CATARACTS dataset consists of 50 videos of cataract surgeries, split into 25 training and 25 test videos. This subset contains the frames of the first 5 test videos; black frames at the beginning of each video were removed.
Y1 - 2025
U6 - https://doi.org/10.5281/zenodo.14924735
N1 - Related works: derived from dataset DOI 10.21227/ac97-8m18; software repository: https://github.com/remic-othr/OpenMIBOOD
ER -