TY - GEN
A1 - Mendel, Robert
A1 - Souza Jr., Luis Antonio de
A1 - Rauber, David
A1 - Papa, João Paulo
A1 - Palm, Christoph
T1 - Abstract: Semi-supervised Segmentation Based on Error-correcting Supervision
T2 - Bildverarbeitung für die Medizin 2021. Proceedings, German Workshop on Medical Image Computing, Regensburg, March 7-9, 2021
N2 - Pixel-level classification is an essential part of computer vision. For learning from labeled data, many powerful deep learning models have been developed recently. In this work, we augment such supervised segmentation models by allowing them to learn from unlabeled data. Our semi-supervised approach, termed Error-Correcting Supervision, leverages a collaborative strategy. Apart from the supervised training on the labeled data, the segmentation network is judged by an additional network.
KW - Deep Learning
Y1 - 2021
SN - 978-3-658-33197-9
U6 - https://doi.org/10.1007/978-3-658-33198-6_43
SP - 178
PB - Springer Vieweg
CY - Wiesbaden
ER -

TY - JOUR
A1 - Ebigbo, Alanna
A1 - Palm, Christoph
A1 - Messmann, Helmut
T1 - Barrett esophagus: What to expect from Artificial Intelligence?
JF - Best Practice & Research Clinical Gastroenterology
N2 - The evaluation and assessment of Barrett’s esophagus is challenging for both expert and nonexpert endoscopists. However, the early diagnosis of cancer in Barrett’s esophagus is crucial for its prognosis, and could save costs. Pre-clinical and clinical studies on the application of Artificial Intelligence (AI) in Barrett’s esophagus have shown promising results. In this review, we focus on the current challenges and future perspectives of implementing AI systems in the management of patients with Barrett’s esophagus.
KW - Deep Learning
KW - Künstliche Intelligenz
KW - Computerunterstützte Medizin
KW - Barrett
KW - Adenocarcinoma
KW - Artificial intelligence
KW - Deep learning
KW - Convolutional neural networks
Y1 - 2021
U6 - https://doi.org/10.1016/j.bpg.2021.101726
SN - 1521-6918
VL - 52-53
IS - June-August
PB - Elsevier
ER -

TY - JOUR
A1 - Maier, Andreas
A1 - Deserno, Thomas M.
A1 - Handels, Heinz
A1 - Maier-Hein, Klaus H.
A1 - Palm, Christoph
A1 - Tolxdorff, Thomas
T1 - Guest editorial of the IJCARS - BVM 2018 special issue
JF - International Journal of Computer Assisted Radiology and Surgery
KW - Medical Image Computing
Y1 - 2019
U6 - https://doi.org/10.1007/s11548-018-01902-0
VL - 14
SP - 1
EP - 2
PB - Springer
ER -

TY - JOUR
A1 - Passos, Leandro A.
A1 - Souza Jr., Luis Antonio de
A1 - Mendel, Robert
A1 - Ebigbo, Alanna
A1 - Probst, Andreas
A1 - Messmann, Helmut
A1 - Palm, Christoph
A1 - Papa, João Paulo
T1 - Barrett's esophagus analysis using infinity Restricted Boltzmann Machines
JF - Journal of Visual Communication and Image Representation
N2 - The number of patients with Barrett’s esophagus (BE) has increased in the last decades. Considering the dangerousness of the disease and its evolution to adenocarcinoma, an early diagnosis of BE may provide a high probability of cancer remission. However, limitations regarding traditional methods of detection and management of BE demand alternative solutions. As such, computer-aided tools have recently been used to assist in this problem, but the challenge still persists. To manage the problem, we introduce the infinity Restricted Boltzmann Machines (iRBMs) to the task of automatic identification of Barrett’s esophagus from endoscopic images of the lower esophagus.
Moreover, since the iRBM requires a proper selection of its meta-parameters, we also present a discriminative iRBM fine-tuning using six meta-heuristic optimization techniques. We showed that iRBMs are suitable for this context, providing competitive results, and that the meta-heuristic techniques are appropriate for such a task.
KW - Speiseröhrenkrankheit
KW - Diagnose
KW - Boltzmann-Maschine
KW - Barrett’s esophagus
KW - Infinity Restricted Boltzmann Machines
KW - Meta-heuristics
KW - Deep learning
KW - Metaheuristik
KW - Maschinelles Lernen
Y1 - 2019
U6 - https://doi.org/10.1016/j.jvcir.2019.01.043
VL - 59
SP - 475
EP - 485
PB - Elsevier
ER -

TY - CHAP
A1 - Souza Jr., Luis Antonio de
A1 - Afonso, Luis Claudio Sugi
A1 - Palm, Christoph
A1 - Papa, João Paulo
T1 - Barrett's Esophagus Identification Using Optimum-Path Forest
T2 - Proceedings of the 30th Conference on Graphics, Patterns and Images Tutorials (SIBGRAPI-T 2017), Niterói, Rio de Janeiro, Brazil, 2017, 17-20 October
N2 - Computer-assisted analysis of endoscopic images can be helpful for the automatic diagnosis and classification of neoplastic lesions. Barrett's esophagus (BE) is a common type of reflux that is not straightforward to detect by endoscopic surveillance and is thus susceptible to erroneous diagnosis, which can lead to cancer when not treated properly. In this work, we introduce the Optimum-Path Forest (OPF) classifier to the task of automatic identification of Barrett's esophagus, with promising results that outperform the well-known Support Vector Machines (SVM) in the aforementioned context. We describe endoscopic images by means of feature extractors based on key point information, such as Speeded up Robust Features (SURF) and the Scale-Invariant Feature Transform (SIFT), and further design a bag-of-visual-words that is used to feed both the OPF and SVM classifiers. The best results were obtained by means of the OPF classifier for both feature extractors, with values of 0.732 (SURF) - 0.735 (SIFT) for sensitivity, 0.782 (SURF) - 0.806 (SIFT) for specificity, and 0.738 (SURF) - 0.732 (SIFT) for accuracy.
KW - Speiseröhrenkrankheit
KW - Diagnose
KW - Maschinelles Lernen
KW - Bilderkennung
KW - Automatische Klassifikation
Y1 - 2017
U6 - https://doi.org/10.1109/SIBGRAPI.2017.47
SP - 308
EP - 314
ER -

TY - CHAP
A1 - Zehner, Alexander
A1 - Szalo, Alexander Eduard
A1 - Palm, Christoph
T1 - GraphMIC: Easy Prototyping of Medical Image Computing Applications
T2 - Interactive Medical Image Computing (IMIC), Workshop at the Medical Image Computing and Computer Assisted Interventions (MICCAI 2015), 2015, Munich
N2 - GraphMIC is a cross-platform image processing application utilizing the libraries ITK and OpenCV. The abstract structure of image processing pipelines is visually represented by user interface components based on modern QtQuick technology and allows users to focus on the arrangement and parameterization of operations rather than implementing the equivalent functionality natively in C++. The application's central goal is to improve and simplify the typical workflow by providing various high-level features and functions like multi-threading, image sequence processing and advanced error handling. A built-in Python interpreter allows the creation of custom nodes, where user-defined algorithms can be integrated to extend the basic functionality. An embedded 2D/3D visualizer gives feedback on the resulting image of an operation or the whole pipeline.
User inputs like seed points, contours or regions are forwarded to the processing pipeline as parameters to offer semi-automatic image computing. We report the main concept of the application and introduce several features and their implementation. Finally, the current state of development as well as future perspectives of GraphMIC are discussed.
KW - Bildverarbeitung
KW - Medizin
Y1 - 2015
U6 - https://doi.org/10.13140/RG.2.1.3718.4725
N1 - Open access publication
SP - 395
EP - 400
ER -

TY - CHAP
A1 - Weber, Joachim
A1 - Doenitz, Christian
A1 - Brawanski, Alexander
A1 - Palm, Christoph
T1 - Data-Parallel MRI Brain Segmentation in Clinical Use
BT - Porting FSL-FAST v4 to GPGPUs
T2 - Bildverarbeitung für die Medizin 2015; Algorithmen - Systeme - Anwendungen; Proceedings des Workshops vom 15. bis 17. März 2015 in Lübeck
N2 - Structural MRI brain analysis and segmentation is a crucial part of the daily routine in neurosurgery for intervention planning. Exemplarily, the free software FSL-FAST (FMRIB’s Segmentation Library – FMRIB’s Automated Segmentation Tool) in version 4 is used for the segmentation of brain tissue types. To speed up the segmentation procedure by parallel execution, we transferred FSL-FAST to a General Purpose Graphics Processing Unit (GPGPU) using the Open Computing Language (OpenCL) [1]. The steps necessary for parallelization resulted in substantially different and less useful results. Therefore, the underlying methods were revised and adapted, yielding computational overhead. Nevertheless, we achieved a speed-up factor of 3.59 from CPU to GPGPU execution, while providing similarly useful or even better results.
KW - Brain Segmentation
KW - Magnetic Resonance Imaging
KW - Parallel Execution
KW - Voxel Spacing
KW - General Purpose Graphic Processing Unit
KW - Kernspintomografie
KW - Gehirn
KW - Bildsegmentierung
KW - Parallelverarbeitung
Y1 - 2015
U6 - https://doi.org/10.1007/978-3-662-46224-9_67
SP - 389
EP - 394
PB - Springer
CY - Berlin
ER -

TY - GEN
A1 - Maier, Johannes
A1 - Weiherer, Maximilian
A1 - Huber, Michaela
A1 - Palm, Christoph
ED - Handels, Heinz
ED - Deserno, Thomas M.
ED - Maier, Andreas
ED - Maier-Hein, Klaus H.
ED - Palm, Christoph
ED - Tolxdorff, Thomas
T1 - Abstract: Imitating Human Soft Tissue with Dual-Material 3D Printing
T2 - Bildverarbeitung für die Medizin 2019, Algorithmen - Systeme - Anwendungen. Proceedings des Workshops vom 17. bis 19. März 2019 in Lübeck
N2 - Currently, it is common practice to use three-dimensional (3D) printers not only for rapid prototyping in industry, but also in the medical area to create medical applications for training inexperienced surgeons. In a clinical training simulator for minimally invasive bone drilling to fix hand fractures with Kirschner-wires (K-wires), a 3D-printed hand phantom must not only be geometrically but also haptically correct. Due to the limited view during an operation, surgeons need to perfectly localize underlying risk structures only by feeling specific bony protrusions of the human hand.
KW - Handchirurgie
KW - 3D-Druck
KW - Lernprogramm
KW - HaptiVisT
Y1 - 2019
SN - 978-3-658-25325-7
U6 - https://doi.org/10.1007/978-3-658-25326-4_48
SP - 218
PB - Springer Vieweg
CY - Wiesbaden
ER -

TY - GEN
A1 - Rückert, Tobias
A1 - Rieder, Maximilian
A1 - Rauber, David
A1 - Xiao, Michel
A1 - Humolli, Eg
A1 - Feussner, Hubertus
A1 - Wilhelm, Dirk
A1 - Palm, Christoph
T1 - Augmenting instrument segmentation in video sequences of minimally invasive surgery by synthetic smoky frames
T2 - International Journal of Computer Assisted Radiology and Surgery
KW - Surgical instrument segmentation
KW - smoke simulation
KW - unpaired image-to-image translation
KW - robot-assisted surgery
Y1 - 2023
U6 - https://doi.org/10.1007/s11548-023-02878-2
VL - 18
IS - Suppl 1
SP - S54
EP - S56
PB - Springer Nature
ER -

TY - JOUR
A1 - Maerkl, Raphaela
A1 - Rueckert, Tobias
A1 - Rauber, David
A1 - Gutbrod, Max
A1 - Weber Nunes, Danilo
A1 - Palm, Christoph
T1 - Enhancing generalization in zero-shot multi-label endoscopic instrument classification
JF - International Journal of Computer Assisted Radiology and Surgery
N2 - Purpose: Recognizing previously unseen classes with neural networks is a significant challenge due to their limited generalization capabilities. This issue is particularly critical in safety-critical domains such as medical applications, where accurate classification is essential for reliability and patient safety. Zero-shot learning methods address this challenge by utilizing additional semantic data, with their performance relying heavily on the quality of the generated embeddings. Methods: This work investigates the use of full descriptive sentences, generated by a Sentence-BERT model, as class representations, compared to simpler category-based word embeddings derived from a BERT model. Additionally, the impact of z-score normalization as a post-processing step on these embeddings is explored. The proposed approach is evaluated on a multi-label generalized zero-shot learning task, focusing on the recognition of surgical instruments in endoscopic images from minimally invasive cholecystectomies. Results: The results demonstrate that combining sentence embeddings and z-score normalization significantly improves model performance. For unseen classes, the AUROC improves from 43.9% to 64.9%, and the multi-label accuracy from 26.1% to 79.5%. Overall performance measured across both seen and unseen classes improves from 49.3% to 64.9% in AUROC and from 37.3% to 65.1% in multi-label accuracy, highlighting the effectiveness of our approach. Conclusion: These findings demonstrate that sentence embeddings and z-score normalization can substantially enhance the generalization performance of zero-shot learning models. However, as the study is based on a single dataset, future work should validate the method across diverse datasets and application domains to establish its robustness and broader applicability.
KW - Generalized zero-shot learning
KW - Sentence embeddings
KW - Z-score normalization
KW - Multi-label classification
KW - Surgical instruments
Y1 - 2025
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:898-opus4-85674
N1 - Corresponding author at OTH Regensburg: Raphaela Maerkl
VL - 20
SP - 1577
EP - 1587
PB - Springer Nature
ER -

TY - CHAP
A1 - Klausmann, Leonard
A1 - Rueckert, Tobias
A1 - Rauber, David
A1 - Maerkl, Raphaela
A1 - Yildiran, Suemeyye R.
A1 - Gutbrod, Max
A1 - Palm, Christoph
T1 - DIY challenge blueprint: from organization to technical realization in biomedical image analysis
T2 - Medical Image Computing and Computer Assisted Intervention - MICCAI 2025; Proceedings, Part XI
N2 - Biomedical image analysis challenges have become the de facto standard for publishing new datasets and benchmarking different state-of-the-art algorithms. Most challenges use commercial cloud-based platforms, which can limit custom options and involve disadvantages such as reduced data control and increased costs for extended functionalities. In contrast, Do-It-Yourself (DIY) approaches have the capability to emphasize reliability, compliance, and custom features, providing a solid basis for low-cost, custom designs in self-hosted systems. Our approach emphasizes cost efficiency, improved data sovereignty, and strong compliance with regulatory frameworks, such as the GDPR. This paper presents a blueprint for DIY biomedical imaging challenges, designed to provide institutions with greater autonomy over their challenge infrastructure. Our approach comprehensively addresses both organizational and technical dimensions, including key user roles, data management strategies, and secure, efficient workflows. Key technical contributions include a modular, containerized infrastructure based on Docker, integration of open-source identity management, and automated solution evaluation workflows. Practical deployment guidelines are provided to facilitate implementation and operational stability. The feasibility and adaptability of the proposed framework are demonstrated through the MICCAI 2024 PhaKIR challenge, with multiple international teams submitting and validating their solutions through our self-hosted platform. This work can be used as a baseline for future self-hosted DIY implementations, and our results encourage further studies in the area of biomedical image analysis challenges.
KW - Biomedical challenges
KW - Image analysis
KW - Blueprint
KW - Do-It-Yourself
KW - Self-hosting
Y1 - 2025
SN - 978-3-032-05141-7
U6 - https://doi.org/10.1007/978-3-032-05141-7_9
SP - 85
EP - 95
PB - Springer
CY - Cham
ER -

TY - INPR
A1 - Gutbrod, Max
A1 - Rauber, David
A1 - Weber Nunes, Danilo
A1 - Palm, Christoph
T1 - OpenMIBOOD: Open Medical Imaging Benchmarks for Out-Of-Distribution Detection
N2 - The growing reliance on Artificial Intelligence (AI) in critical domains such as healthcare demands robust mechanisms to ensure the trustworthiness of these systems, especially when faced with unexpected or anomalous inputs. This paper introduces the Open Medical Imaging Benchmarks for Out-Of-Distribution Detection (OpenMIBOOD), a comprehensive framework for evaluating out-of-distribution (OOD) detection methods specifically in medical imaging contexts. OpenMIBOOD includes three benchmarks from diverse medical domains, encompassing 14 datasets divided into covariate-shifted in-distribution, near-OOD, and far-OOD categories. We evaluate 24 post-hoc methods across these benchmarks, providing a standardized reference to advance the development and fair comparison of OOD detection methods. Results reveal that findings from broad-scale OOD benchmarks in natural image domains do not translate to medical applications, underscoring the critical need for such benchmarks in the medical field. By mitigating the risk of exposing AI models to inputs outside their training distribution, OpenMIBOOD aims to support the advancement of reliable and trustworthy AI systems in healthcare.
The repository is available at this https URL.
Y1 - 2025
U6 - https://doi.org/10.48550/arXiv.2503.16247
N1 - The article has been published in peer-reviewed form and is also listed in this repository at: https://opus4.kobv.de/opus4-oth-regensburg/8467
ER -

TY - JOUR
A1 - Weiherer, Maximilian
A1 - Eigenberger, Andreas
A1 - Egger, Bernhard
A1 - Brébant, Vanessa
A1 - Prantl, Lukas
A1 - Palm, Christoph
T1 - Learning the shape of female breasts: an open-access 3D statistical shape model of the female breast built from 110 breast scans
JF - The Visual Computer
N2 - We present the Regensburg Breast Shape Model (RBSM)—a 3D statistical shape model of the female breast built from 110 breast scans acquired in a standing position, and the first publicly available. Together with the model, a fully automated, pairwise surface registration pipeline used to establish dense correspondence among 3D breast scans is introduced. Our method is computationally efficient and requires only four landmarks to guide the registration process. A major challenge when modeling female breasts from surface-only 3D breast scans is the non-separability of breast and thorax. In order to weaken the strong coupling between breast and surrounding areas, we propose to minimize the variance outside the breast region as much as possible. To achieve this goal, a novel concept called breast probability masks (BPMs) is introduced. A BPM assigns probabilities to each point of a 3D breast scan, telling how likely it is that a particular point belongs to the breast area. During registration, we use BPMs to align the template to the target as accurately as possible inside the breast region and only roughly outside. This simple yet effective strategy significantly reduces the unwanted variance outside the breast region, leading to better statistical shape models in which breast shapes are quite well decoupled from the thorax. The RBSM is thus able to produce a variety of different breast shapes as independently as possible from the shape of the thorax. Our systematic experimental evaluation reveals a generalization ability of 0.17 mm and a specificity of 2.8 mm. To underline the expressiveness of the proposed model, we finally demonstrate in two showcase applications how the RBSM can be used for surgical outcome simulation and the prediction of a missing breast from the remaining one. Our model is available at https://www.rbsm.re-mic.de/.
KW - Statistical shape model
KW - Non-rigid surface registration
KW - Breast imaging
KW - Surgical outcome simulation
KW - Breast reconstruction surgery
Y1 - 2023
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:898-opus4-30506
N1 - Corresponding author: Christoph Palm
N1 - Associated arXiv publication: https://opus4.kobv.de/opus4-oth-regensburg/frontdoor/index/index/docId/2023
VL - 39
IS - 4
SP - 1597
EP - 1616
PB - Springer Nature
ER -

TY - JOUR
A1 - Maier, Johannes
A1 - Weiherer, Maximilian
A1 - Huber, Michaela
A1 - Palm, Christoph
T1 - Imitating human soft tissue on basis of a dual-material 3D print using a support-filled metamaterial to provide bimanual haptic for a hand surgery training system
JF - Quantitative Imaging in Medicine and Surgery
N2 - Background: Currently, it is common practice to use three-dimensional (3D) printers not only for rapid prototyping in industry, but also in the medical area to create medical applications for training inexperienced surgeons.
In a clinical training simulator for minimally invasive bone drilling to fix hand fractures with Kirschner-wires (K-wires), a 3D-printed hand phantom must not only be geometrically but also haptically correct. Due to the limited view during an operation, surgeons need to perfectly localize underlying risk structures only by feeling specific bony protrusions of the human hand. Methods: The goal of this experiment is to imitate human soft tissue with its haptics and elasticity for the fabrication of a realistic hand phantom, using only a dual-material 3D printer and support-material-filled metamaterial between skin and bone. We present our workflow to generate lattice structures between hard bone and soft skin with iterative cube edge (CE) or cube face (CF) unit cells. Cuboid and finger-shaped sample prints with and without an inner hard bone in different lattice thicknesses were constructed and 3D printed. Results: The most elastic available rubber-like material is too firm to imitate soft tissue. By reducing the amount of rubber in the inner volume through support material (SUP), objects become significantly softer. Without metamaterial, after disintegration, the SUP can be shifted through the volume and thus the body loses its original shape. Although the CE design increases the elasticity, it cannot restore the fabric form. In contrast to CE, the CF design not only increases the elasticity but also guarantees a local limitation of the SUP. Therefore, the body retains its shape and internal bones remain in their intended place. Various unit cell sizes, lattice thickenings and skin thicknesses regulate the ratio of rubber material to SUP. Test prints with a higher SUP and lower rubber material percentage appear softer, and vice versa. This was confirmed by an expert surgeon evaluation. Subjects judged pure rubber-like material as too firm, and samples only filled with SUP or with a lattice structure in CE design as not suitable for imitating tissue. 3D-printed finger samples in CF design were rated as realistic compared to the haptics of human tissue, with a well palpable bone structure. Conclusions: We developed a new dual-material 3D print technique to imitate the soft tissue of the human hand with its haptic properties. Blowy SUP is trapped within a lattice structure to soften rubber-like 3D print material, which makes it possible to reproduce a realistic replica of human hand soft tissue.
KW - Dual-material 3D printing
KW - Hand surgery training
KW - Metamaterial
KW - Support material
KW - Tissue-imitating hand phantom
KW - Handchirurgie
KW - 3D-Druck
KW - Biomaterial
KW - Lernprogramm
Y1 - 2019
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:898-opus4-979
N1 - Corresponding author: Christoph Palm
VL - 9
IS - 1
SP - 30
EP - 42
PB - AME Publishing Company
ER -

TY - JOUR
A1 - Maier, Johannes
A1 - Weiherer, Maximilian
A1 - Huber, Michaela
A1 - Palm, Christoph
T1 - Optically tracked and 3D printed haptic phantom hand for surgical training system
JF - Quantitative Imaging in Medicine and Surgery
N2 - Background: For surgical fixation of bone fractures of the human hand, so-called Kirschner-wires (K-wires) are drilled through bone fragments. Due to the minimally invasive drilling procedures without a view of risk structures like vessels and nerves, a thorough training of young surgeons is necessary. For the development of a virtual reality (VR) based training system, a three-dimensional (3D) printed phantom hand is required.
To ensure an intuitive operation, this phantom hand has to be realistic both in its position relative to the driller and in its haptic features. The softest 3D printing material available on the market, however, is too hard to imitate human soft tissue. Therefore, a support-material (SUP) filled metamaterial is used to soften the raw material. Realistic haptic features are important to palpate protrusions of the bone in order to determine the drilling starting point and angle. Optical real-time tracking is used to transfer position and rotation to the training system. Methods: A metamaterial already developed in previous work is further improved by use of a new unit cell. Thus, the amount of SUP within the volume can be increased and the tissue is softened further. In addition, the human anatomy is transferred to the entire hand model. A subcutaneous fat layer and the penetration of air through pores into the volume simulate the shiftability of the skin layers. For optical tracking, a rotationally symmetrical marker attached to the phantom hand with a corresponding reference marker is developed. In order to ensure trouble-free position transmission, various types of marker point applications are tested. Results: Several cuboid and forearm sample prints led to a final 30-centimeter-long hand model. The whole haptic phantom could be printed faultlessly within about 17 hours. The metamaterial consisting of the new unit cell results in an increased SUP share of 4.32%. As validated by an expert surgeon study, this allows, in combination with a displacement of the uppermost skin layer, good palpability of the bones. Tracking of the hand marker in dodecahedron design works trouble-free in conjunction with a reference marker attached to the worktop of the training system. Conclusions: In this work, an optically tracked and haptically correct phantom hand was developed using dual-material 3D printing, which can be easily integrated into a surgical training system.
KW - Handchirurgie
KW - 3D-Druck
KW - Lernprogramm
KW - Zielverfolgung
KW - HaptiVisT
KW - Dual-material 3D printing
KW - hand surgery training
KW - metamaterial
KW - tissue imitating phantom hand
Y1 - 2020
U6 - https://doi.org/10.21037/qims.2019.12.03
N1 - Corresponding author: Christoph Palm
VL - 10
IS - 02
SP - 340
EP - 355
PB - AME Publishing Company
CY - Hong Kong, China
ER -

TY - CHAP
A1 - Franz, Daniela
A1 - Dreher, Maria
A1 - Prinzen, Martin
A1 - Teßmann, Matthias
A1 - Palm, Christoph
A1 - Katzky, Uwe
A1 - Perret, Jerome
A1 - Hofer, Mathias
A1 - Wittenberg, Thomas
T1 - CT-basiertes virtuelles Fräsen am Felsenbein
BT - Bild- und haptischen Wiederholfrequenzen bei unterschiedlichen Rendering Methoden
T2 - Bildverarbeitung für die Medizin 2018; Algorithmen - Systeme - Anwendungen. Proceedings des Workshops vom 11. bis 13. März 2018 in Erlangen
N2 - As part of the development of a haptic-visual training system for milling at the petrosal bone, a haptic arm and an autostereoscopic 3D monitor are used to enable surgeons to virtually manipulate bony structures in the context of a so-called serious game. Among other things, resident physicians should be able to practice milling at the petrosal bone for the surgical insertion of a cochlear implant as part of their training. To this end, the visualization of the virtual milling has to be modeled, implemented and evaluated in real time and as realistically as possible.
We use different raycasting methods with linear and nearest-neighbor interpolation and compare the visual quality and the frame rates of the methods. All compared methods are real-time capable, but differ in their visual quality.
KW - Felsenbein
KW - Fräsen
KW - Virtualisierung
KW - Computertomographie
KW - Computerassistierte Chirurgie
Y1 - 2018
SN - 978-3-662-56537-7
U6 - https://doi.org/10.1007/978-3-662-56537-7_51
SP - 176
EP - 181
PB - Springer
CY - Berlin
ER -

TY - CHAP
A1 - Maier, Johannes
A1 - Huber, Michaela
A1 - Katzky, Uwe
A1 - Perret, Jerome
A1 - Wittenberg, Thomas
A1 - Palm, Christoph
T1 - Force-Feedback-assisted Bone Drilling Simulation Based on CT Data
T2 - Bildverarbeitung für die Medizin 2018; Algorithmen - Systeme - Anwendungen; Proceedings des Workshops vom 11. bis 13. März 2018 in Erlangen
N2 - In order to fix a fracture using minimally invasive surgery approaches, surgeons drill complex and tiny bones with a two-dimensional X-ray as the single imaging modality in the operating room. Our novel haptic force-feedback and visually assisted training system will potentially help hand surgeons learn the drilling procedure in a realistic visual environment. Within the simulation, the collision detection as well as the interaction between the virtual drill, bone voxels and surfaces are important. In this work, the chai3d collision detection and force calculation algorithms are combined with a physics engine to simulate the bone drilling process. The chosen Bullet Physics Engine provides a stable simulation of rigid bodies if the collision model of the drill and the tool holder is generated as a compound shape. Three haptic points are added to the K-wire tip for removing single voxels from the bone. For the drilling process, three modes are proposed to emulate the different phases of drilling by restricting the movement of a haptic device.
KW - Handchirurgie
KW - Osteosynthese
KW - Simulation
KW - Lernprogramm
Y1 - 2018
U6 - https://doi.org/10.1007/978-3-662-56537-7_78
SP - 291
EP - 296
PB - Springer
CY - Berlin
ER -

TY - CHAP
A1 - Eixelberger, Thomas
A1 - Wittenberg, Thomas
A1 - Perret, Jerome
A1 - Katzky, Uwe
A1 - Simon, Martina
A1 - Schmitt-Rüth, Stephanie
A1 - Hofer, Mathias
A1 - Sorge, M.
A1 - Jacob, R.
A1 - Engel, Felix B.
A1 - Gostian, A.
A1 - Palm, Christoph
A1 - Franz, Daniela
T1 - A haptic model for virtual petrosal bone milling
T2 - 17. Jahrestagung der Deutschen Gesellschaft für Computer- und Roboterassistierte Chirurgie (CURAC2018), Tagungsband, 2018, Leipzig, 13.-15. September
N2 - Virtual training of bone milling requires real-time and realistic haptics of the interaction between the "virtual mill" and a "virtual bone". We propose an exponential abrasion model between a virtual bone and the mill bit and combine it with a coarse representation of the virtual bone and the mill shaft for collision detection using the Bullet Physics Engine. We compare our exponential abrasion model to a widely used linear abrasion model and evaluate it quantitatively and qualitatively. The evaluation results show that we can provide virtual milling in real time, with an abrasion behavior similar to that proposed in the literature, and with a feel rated as realistic by five different surgeons.
KW - Osteosynthese
KW - Simulation
KW - Lernprogramm
Y1 - 2018
UR - https://www.curac.org/images/advportfoliopro/images/CURAC2018/CURAC%202018%20Tagungsband.pdf
VL - 17
SP - 214
EP - 219
ER -

TY - CHAP
A1 - Palm, Christoph
A1 - Schanze, Thomas
T1 - Biomedical Image and Signal Computing (BISC 2013)
T2 - 58. Jahrestagung der Deutschen Gesellschaft für Medizinische Informatik, Biometrie und Epidemiologie e.V. (GMDS 2013), Lübeck, 01.-05.09.2013
Y1 - 2013
U6 - https://doi.org/10.3205/13gmds257
N1 - Meeting Abstract
IS - DocAbstr. 324
PB - German Medical Science GMS Publishing House
CY - Düsseldorf
ER -

TY - JOUR
A1 - Ebigbo, Alanna
A1 - Mendel, Robert
A1 - Probst, Andreas
A1 - Manzeneder, Johannes
A1 - Souza Jr., Luis Antonio de
A1 - Papa, João Paulo
A1 - Palm, Christoph
A1 - Messmann, Helmut
T1 - Computer-aided diagnosis using deep learning in the evaluation of early oesophageal adenocarcinoma
JF - Gut
N2 - Computer-aided diagnosis using deep learning (CAD-DL) may be an instrument to improve the endoscopic assessment of Barrett’s oesophagus (BE) and early oesophageal adenocarcinoma (EAC). Based on still images from two databases, the diagnosis of EAC by CAD-DL reached sensitivities/specificities of 97%/88% (Augsburg data) and 92%/100% (Medical Image Computing and Computer-Assisted Intervention [MICCAI] data) for white light (WL) images, and 94%/80% for narrow band images (NBI) (Augsburg data), respectively. Tumour margins delineated by experts in the images were detected satisfactorily with a Dice coefficient (D) of 0.72. This could be a first step towards CAD-DL for BE assessment. If developed further, it could become a useful adjunctive tool for patient management.
KW - Speiseröhrenkrebs
KW - Diagnose
KW - Computerunterstütztes Verfahren
KW - Maschinelles Lernen
Y1 - 2019
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:898-opus4-68
N1 - Corresponding authors: Alanna Ebigbo and Christoph Palm
VL - 68
IS - 7
SP - 1143
EP - 1145
PB - British Society of Gastroenterology
ER -

TY - JOUR
A1 - Wöhl, Rebecca
A1 - Maier, Johannes
A1 - Gehmert, Sebastian
A1 - Palm, Christoph
A1 - Riebschläger, Birgit
A1 - Nerlich, Michael
A1 - Huber, Michaela
T1 - 3D Analysis of Osteosyntheses Material using semi-automated CT Segmentation
BT - a case series of a 4 corner fusion plate
JF - BMC Musculoskeletal Disorders
N2 - Background: Scaphoidectomy and midcarpal fusion can be performed using traditional fixation methods like K-wires, staples, screws or different dorsal (non)locking arthrodesis systems. The aim of this study is to test the Aptus four-corner locking plate and to compare the clinical findings to the data revealed by CT scans and semi-automated segmentation. Methods: This is a retrospective review of eleven patients suffering from scapholunate advanced collapse (SLAC) or scaphoid non-union advanced collapse (SNAC) wrist, who received a four-corner fusion between August 2011 and July 2014. The clinical evaluation consisted of measuring the range of motion (ROM), strength and pain on a visual analogue scale (VAS). Additionally, the Disabilities of the Arm, Shoulder and Hand (QuickDASH) and the Mayo Wrist Score were assessed. A computerized tomography (CT) of the wrist was obtained six weeks postoperatively. After semi-automated segmentation of the CT scans, the models were post-processed and surveyed. Results: During the six-month follow-up, the mean range of motion (ROM) of the operated wrist was 60°, consisting of 30° extension and 30° flexion.
While pain levels decreased significantly, 54% of grip strength and 89% of pinch strength were preserved compared to the contralateral healthy wrist. Union could be detected in all CT scans of the wrist. While the X-ray images obtained postoperatively revealed no pathology, two user-related technical complications were found through the 3D analysis, which correlated with the clinical outcome. Conclusion: Due to semi-automated segmentation and 3D analysis, it has been shown that the plate design lives up to the manufacturer's promises. Overall, this case series confirmed that the plate can compete with the coexisting techniques concerning clinical outcome, union and complication rate.
KW - Handchirurgie
KW - Osteosynthese
KW - Arthrodese
KW - 4FC
KW - SLAC wrist
KW - SNAC wrist
KW - Semi-automated segmentation
KW - 3D analysis
KW - Computertomographie
KW - Bildsegmentierung
KW - Dreidimensionale Bildverarbeitung
Y1 - 2018
U6 - https://doi.org/10.1186/s12891-018-1975-0
VL - 19
SP - 1
EP - 8
PB - Springer Nature
ER -

TY - GEN
A1 - Scheppach, Markus W.
A1 - Mendel, Robert
A1 - Probst, Andreas
A1 - Rauber, David
A1 - Rückert, Tobias
A1 - Meinikheim, Michael
A1 - Palm, Christoph
A1 - Messmann, Helmut
A1 - Ebigbo, Alanna
T1 - Real-time detection and delineation of tissue during third-space endoscopy using artificial intelligence (AI)
T2 - Endoscopy
N2 - Aims: AI has shown great potential in assisting endoscopists in diagnostics; however, its role in therapeutic endoscopy remains unclear. Endoscopic submucosal dissection (ESD) is a technically demanding intervention with a slow learning curve and relevant risks like bleeding and perforation. Therefore, we aimed to develop an algorithm for the real-time detection and delineation of relevant structures during third-space endoscopy. Methods: 5470 still images from 59 full-length videos (47 ESD, 12 POEM) were annotated. 179681 additional unlabeled images were added to the training dataset. Consequently, a DeepLabv3+ neural network architecture was trained with the ECMT semi-supervised algorithm (under review elsewhere). Evaluation of vessel detection was performed on a dataset of 101 standardized video clips from 15 separate third-space endoscopy videos with 200 predefined blood vessels. Results: Internal validation yielded an overall mean Dice score of 85% (68% for blood vessels, 86% for the submucosal layer, 88% for the muscle layer). On the video test data, the overall vessel detection rate (VDR) was 94% (96% for ESD, 74% for POEM). The median overall vessel detection time (VDT) was 0.32 sec (0.3 sec for ESD, 0.62 sec for POEM). Conclusions: Evaluation of the developed algorithm on a video test dataset showed a high VDR and quick VDT, especially for ESD. Further research will focus on a possible clinical benefit of the AI application for VDR and VDT during third-space endoscopy.
KW - Speiseröhrenkrankheit
KW - Künstliche Intelligenz
KW - Artificial Intelligence
Y1 - 2023
U6 - https://doi.org/10.1055/s-0043-1765128
VL - 55
IS - S02
SP - S53
EP - S54
PB - Thieme
ER -

TY - CHAP
A1 - Franz, Daniela
A1 - Katzky, Uwe
A1 - Neumann, Sabine
A1 - Perret, Jerome
A1 - Hofer, Mathias
A1 - Huber, Michaela
A1 - Schmitt-Rüth, Stephanie
A1 - Haug, Sonja
A1 - Weber, Karsten
A1 - Prinzen, Martin
A1 - Palm, Christoph
A1 - Wittenberg, Thomas
T1 - Haptisches Lernen für Cochlea Implantationen
BT - Konzept - HaptiVisT Projekt
T2 - 15. Jahrestagung der Deutschen Gesellschaft für Computer- und Roboterassistierte Chirurgie (CURAC2016), Tagungsband, 2016, Bern, 29.09. - 01.10.
N2 - The implantation of a cochlear implant requires surgical access through the petrosal bone and the tympanic cavity of the patient. The surgeon has a limited view of the operating field, which moreover contains many risk structures. To perform a cochlear implantation safely and without errors, extensive theoretical and practical (partly in-service) training as well as many years of experience are necessary. Using real clinical CT/MRI data of the inner and middle ear and the interactive segmentation of the structures depicted therein (nerves, cochlea, ossicles, ...), the HaptiVisT project realizes a haptic-visual training system for the implantation of inner and middle ear implants, designed as a so-called serious game with immersive didactics. The demonstrator is evaluated for fitness for purpose in a process-accompanying and results-oriented manner in order to uncover possible technical or didactic flaws before the system is completed. Three staggered evaluations focus on surgical, didactic and haptic-ergonomic acceptance criteria.
KW - Virtuelles Training
KW - Haptisches Feedback
KW - Gamification in der Medizin
KW - Cochlea-Implantat
KW - Operationstechnik
KW - Simulation
KW - Haptische Feedback-Technologie
KW - Lernprogramm
Y1 - 2016
UR - https://curac.org/images/advportfoliopro/images/CURAC2016/CURAC%202016%20Tagungsband.pdf
SP - 21
EP - 26
ER -

TY - CHAP
A1 - Souza Jr., Luis Antonio de
A1 - Hook, Christian
A1 - Papa, João Paulo
A1 - Palm, Christoph
T1 - Barrett's Esophagus Analysis Using SURF Features
T2 - Bildverarbeitung für die Medizin 2017; Algorithmen - Systeme - Anwendungen. Proceedings des Workshops vom 12. bis 14. März 2017 in Heidelberg
N2 - The development of adenocarcinoma in Barrett’s esophagus is difficult to detect by endoscopic surveillance of patients with signs of dysplasia. Computer-assisted diagnosis of endoscopic images (CAD) could therefore be most helpful in the demarcation and classification of neoplastic lesions. In this study we tested the feasibility of a CAD method based on Speeded up Robust Feature Detection (SURF). A given database containing 100 images from 39 patients served as a benchmark for feature-based classification models. Half of the images had previously been diagnosed by five clinical experts as being "cancerous", the other half as "non-cancerous". Cancerous image regions had been visibly delineated (masked) by the clinicians. SURF features acquired from full images as well as from masked areas were utilized for the supervised training and testing of an SVM classifier. The predictive accuracy of the developed CAD system is illustrated by sensitivity and specificity values. The results based on full image matching were 0.78 (sensitivity) and 0.82 (specificity), while the masked region approach generated results of 0.90 and 0.95, respectively.
KW - Speiseröhrenkrankheit
KW - Diagnose
KW - Maschinelles Sehen
KW - Automatische Klassifikation
Y1 - 2017
U6 - https://doi.org/10.1007/978-3-662-54345-0_34
SP - 141
EP - 146
PB - Springer
CY - Berlin
ER -