TY - JOUR A1 - Huber, Michaela A1 - Schlosser, Daniela A1 - Stenzel, Susanne A1 - Maier, Johannes A1 - Pattappa, Girish A1 - Kujat, Richard A1 - Striegl, Birgit A1 - Docheva, Denitsa T1 - Quantitative Analysis of Surface Contouring with Pulsed Bipolar Radiofrequency on Thin Chondromalacic Cartilage JF - BioMed Research International N2 - The purpose of this study was to evaluate the quality of surface contouring of chondromalacic cartilage by bipolar radiofrequency energy (RFE) using different treatment patterns in an animal model, as well as to examine the impact of the treatment on chondrocyte viability by two different methods. Our experiments were conducted on 36 fresh osteochondral sections from the tibia plateau of slaughtered 6-month-old pigs, where the thickness of the cartilage is similar to that of human wrist cartilage. An area of 1 cm² was first treated with emery paper to simulate chondromalacic cartilage. Then, the RFE treatment followed in 6 different patterns. The osteochondral sections were assessed for cellular viability (live/dead assay, caspase (cell apoptosis marker) staining, and quantitative analysis of images obtained by fluorescence microscopy). For a quantitative characterization of untreated and treated cartilage surfaces, various roughness parameters were measured using confocal laser scanning microscopy (Olympus LEXT OLS 4000 3D). To describe the roughness, the Root-Mean-Square parameter (Sq) was calculated. A smoothing effect of the cartilage surface was detectable for each pattern of RFE treatment. The Sq for native cartilage was 3.8 ± 1.1 µm. The best smoothing pattern was seen for two RFE passes and a 2-second pulsed mode (B2p2) with an Sq of 27.3 ± 4.9 µm. However, with increased smoothing, an increase in chondrocyte death of up to 95% was detected. Bipolar RFE treatment in arthroscopy of small joints like the wrist or MCP joints should therefore be used with caution. In the case of chondroplasty, there is a high risk of destroying the joint cartilage. KW - CHONDROCYTE DEATH KW - energy KW - HUMAN ARTICULAR-CARTILAGE KW - MONOPOLAR KW - THERMAL CHONDROPLASTY Y1 - 2020 U6 - https://doi.org/10.1155/2020/1242086 SP - 1 EP - 8 PB - HINDAWI ER - TY - THES A1 - Putzer, Michael T1 - Development of subject-specific musculoskeletal models for studies of lumbar loading N2 - Anatomical differences between individuals are often neglected in musculoskeletal models, but they are necessary for subject-specific questions regarding the lumbar spine. Modifying models for each subject is complex, and the effects on lumbar loading are difficult to assess. The objective of this thesis is to create a validated musculoskeletal model of the human body which facilitates a subject-specific modification of the geometry of the lumbar spine. Furthermore, important parameters are identified in sensitivity studies, and a case study regarding multifidus muscle atrophy after a disc herniation is conducted. To this end, a generic model is heavily modified and a semi-automatic process is implemented. This procedure remodels the geometry of the lumbar spine to a subject-specific one on the basis of segmented medical images. The resulting five models are validated with regard to the lumbar loading at the L4/L5 level. The influence of lumbar ligament stiffness is determined by changing the stiffness values of all lumbar ligaments in eleven steps during a flexion motion.
Sensitivities of lumbar loading to an altered geometry of the lumbar spine are identified by varying ten lumbar parameters in simulations with each model in four postures. The case study includes an analysis of the loading of the multifidus muscle and of the lumbar discs throughout various stages of disc herniation. This time, each model performs four motions with two different motion rhythms. The results indicate that lumbar motion and loading are dependent on lumbar ligament stiffness. Furthermore, subject-specific modelling of the lumbar spine should include at least the vertebral height, disc height and lumbar lordosis. The results of the case study suggest that an overloading of the multifidus muscle could follow disc herniation. Additionally, a subsequent atrophy of the muscles could expose adjacent levels to an increased loading, but these findings are highly dependent on the individual. KW - Lendenwirbelsäule KW - Geometrie KW - Lendenwirbelsäulenkrankheit KW - Computertomografie KW - Modell KW - Hochschulschrift Y1 - 2018 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:706-5924 N1 - Cooperative doctorate at the Faculty of Mechanical Engineering of Ostbayerische Technische Hochschule Regensburg (OTH.R), conducted in the Laboratory for Biomechanics (LBM) under Prof. Dr.-Ing. Sebastian Dendorfer and the Laboratory for Fibre Composite Technology (LFT) under Prof. Dr.-Ing. Ingo Ehrlich ER - TY - GEN A1 - Mendel, Robert A1 - Souza Jr., Luis Antonio de A1 - Rauber, David A1 - Papa, João Paulo A1 - Palm, Christoph T1 - Abstract: Semi-supervised Segmentation Based on Error-correcting Supervision T2 - Bildverarbeitung für die Medizin 2021. Proceedings, German Workshop on Medical Image Computing, Regensburg, March 7-9, 2021 N2 - Pixel-level classification is an essential part of computer vision. For learning from labeled data, many powerful deep learning models have been developed recently. In this work, we augment such supervised segmentation models by allowing them to learn from unlabeled data. Our semi-supervised approach, termed Error-Correcting Supervision, leverages a collaborative strategy. Apart from the supervised training on the labeled data, the segmentation network is judged by an additional network. KW - Deep Learning Y1 - 2021 SN - 978-3-658-33197-9 U6 - https://doi.org/10.1007/978-3-658-33198-6_43 SP - 178 PB - Springer Vieweg CY - Wiesbaden ER - TY - JOUR A1 - Souza Jr., Luis Antonio de A1 - Passos, Leandro A. A1 - Mendel, Robert A1 - Ebigbo, Alanna A1 - Probst, Andreas A1 - Messmann, Helmut A1 - Palm, Christoph A1 - Papa, João Paulo T1 - Assisting Barrett's esophagus identification using endoscopic data augmentation based on Generative Adversarial Networks JF - Computers in Biology and Medicine N2 - Barrett's esophagus has seen a swift rise in the number of cases in recent years. Although traditional diagnosis methods play a vital role in early-stage treatment, they are generally time- and resource-consuming. In this context, computer-aided approaches for automatic diagnosis have emerged in the literature, since early detection is intrinsically related to remission probabilities. However, they still suffer from drawbacks because of the lack of available data for machine learning purposes, thus implying reduced recognition rates. This work introduces Generative Adversarial Networks to generate high-quality endoscopic images, thereby identifying Barrett's esophagus and adenocarcinoma more precisely. Further, Convolutional Neural Networks are used for feature extraction and classification purposes.
The proposed approach is validated on two datasets of endoscopic images, with the experiments conducted on the full and patch-split images. The application of Deep Convolutional Generative Adversarial Networks for the data augmentation step and of LeNet-5 and AlexNet for the classification step allowed us to validate the proposed methodology over an extensive set of datasets (based on original and augmented sets), reaching 90% accuracy for the patch-based approach and 85% for the image-based approach. Both results are based on augmented datasets and are statistically different from the ones obtained on the original datasets of the same kind. Moreover, the impact of data augmentation was evaluated in the context of image description and classification, and the results obtained using synthetic images outperformed the ones over the original datasets, as well as other recent approaches from the literature. Such results suggest promising insights related to the importance of proper data for accurate classification in computer-assisted detection of Barrett's esophagus and adenocarcinoma. KW - Maschinelles Lernen KW - Barrett's esophagus KW - Machine learning KW - Adenocarcinoma KW - Generative adversarial networks KW - Neuronales Netz KW - Adenocarcinom KW - Speiseröhrenkrebs KW - Diagnose Y1 - 2020 U6 - https://doi.org/10.1016/j.compbiomed.2020.104029 VL - 126 IS - November PB - Elsevier ER - TY - JOUR A1 - Brébant, Vanessa A1 - Weiherer, Maximilian A1 - Noisser, Vivien A1 - Seitz, Stephan A1 - Prantl, Lukas A1 - Eigenberger, Andreas T1 - Implants Versus Lipograft: Analysis of Long-Term Results Following Congenital Breast Asymmetry Correction JF - Aesthetic Plastic Surgery N2 - Aims Congenital breast asymmetry represents a particular challenge to the classic techniques of plastic surgery given the young age of patients at presentation. This study reviews and compares the long-term results of traditional breast augmentation using silicone implants and the more innovative technique of lipografting. Methods To achieve this, we captured not only subjective parameters such as satisfaction with outcome and symmetry, but also objective parameters including breast volume and anthropometric measurements. The objective examination was performed manually and by using the Vectra H2 photogrammetry scanning system. Results Differences between patients undergoing either implant augmentation or lipografting were revealed not to be significant with respect to patient satisfaction with surgical outcome (p = 0.55) and symmetry (p = 0.69). Furthermore, a breast symmetry of 93% was reported in both groups. Likewise, no statistically significant volume difference between the left and right breasts was observed in both groups (p < 0.41). However, lipograft patients needed on average 2.9 procedures to achieve the desired result, compared with 1.3 for implant augmentation. In contrast, patients treated with implant augmentation may require a number of implant changes during their lifetime. Conclusion Both methods may be considered for patients presenting with congenital breast asymmetry. KW - Congenital Breast Asymmetry KW - 3D volumetry KW - Autologous fat injections KW - Three-dimensional imaging KW - Lipograft Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:898-opus4-32404 VL - 46 SP - 2228 EP - 2236 PB - Springer Nature ER - TY - JOUR A1 - Maier, Andreas A1 - Deserno, Thomas M. A1 - Handels, Heinz A1 - Maier-Hein, Klaus H.
A1 - Palm, Christoph A1 - Tolxdorff, Thomas T1 - IJCARS: BVM 2021 special issue JF - International Journal of Computer Assisted Radiology and Surgery N2 - The German workshop on medical image computing (BVM) has been held in different locations in Germany for more than 20 years. In terms of content, the BVM focuses on the computer-aided analysis of medical image data with a wide range of applications, e.g. in the areas of imaging, diagnostics, operation planning, computer-aided intervention and visualization. During this time, there have been remarkable methodological developments and upheavals, on which the BVM community has worked intensively. The area of machine learning deserves particular emphasis, as it has led to significant improvements, especially for tasks of classification and segmentation, but increasingly also in image formation and registration. As a result, work in connection with deep learning now dominates the BVM. These developments have also contributed to the establishment of medical image processing at the interface between computer science and medicine as one of the key technologies for the digitization of the health system. In addition to the presentation of current research results, a central aspect of the BVM is the promotion of young scientists from the diverse BVM community, covering not only Germany but also Austria, Switzerland, the Netherlands and other European neighbors. The conference serves primarily doctoral students and postdocs, but also students with excellent bachelor's and master's theses, as a platform to present their work, to enter into professional discourse with the community, and to establish networks with specialist colleagues. Despite the many conferences and congresses that are also relevant for medical image processing, the BVM has lost none of its importance and attractiveness and has retained its permanent place in the annual conference rhythm. Building on this foundation, there are some innovations and changes this year. The BVM 2021 was organized for the first time at the Ostbayerische Technische Hochschule Regensburg (OTH Regensburg, a technical university of applied sciences). After Aachen, Berlin, Erlangen, Freiburg, Hamburg, Heidelberg, Leipzig, Lübeck, and Munich, Regensburg is not just a new venue. OTH Regensburg is the first representative of the universities of applied sciences (HAW) to organize the conference, which differs from universities, university hospitals, or research centers like Fraunhofer or Helmholtz. This also reflects the further development of the research landscape in Germany, where HAWs increasingly contribute to applied research in addition to their focus on teaching. This development is also reflected in the contributions submitted to the BVM in recent years. At BVM 2021, which was held in a virtual format for the first time due to the COVID-19 pandemic, an attractive and high-quality program was offered. Fortunately, the number of submissions increased significantly. Out of 97 submissions, 26 presentations, 51 posters and 5 software demonstrations were accepted via an anonymized reviewing process with three reviews each. The three best works were awarded BVM prizes, selected by a separate committee. Based on these high-quality submissions, we are able to present another special issue in the International Journal of Computer Assisted Radiology and Surgery (IJCARS).
Out of the 97 submissions, the ones with the highest scores were invited to submit an extended version of their paper to be presented in IJCARS. As a result, we are now able to present this special issue with seven excellent articles. Many submissions focus on machine learning in a medical context. KW - Medical Image Computing KW - Bildgebendes Verfahren KW - Medizin Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:898-opus4-21666 VL - 16 SP - 2067 EP - 2068 PB - Springer ER - TY - GEN A1 - Krefting, Dagmar A1 - Zaunseder, Sebastian A1 - Säring, Dennis A1 - Wittenberg, Thomas A1 - Palm, Christoph A1 - Schiecke, Karin A1 - Krenkel, Lars A1 - Hennemuth, Anja A1 - Schnell, Susanne A1 - Spicher, Nicolai T1 - Blutdruck, Hämodynamik und Gefäßzustand: Innovative Erfassung und Bewertung – Schwerpunkt bildbasierte Verfahren T2 - 66. Jahrestagung der Deutschen Gesellschaft für Medizinische Informatik, Biometrie und Epidemiologie e. V. (GMDS), 12. Jahreskongress der Technologie- und Methodenplattform für die vernetzte medizinische Forschung e. V. (TMF), 26. - 30.09.2021, online N2 - Introduction: As a so-called vital parameter, blood pressure is one of the fundamental indicators of a person's state of health. Both excessively low and excessively high blood pressure can be life-threatening; the latter is, moreover, a risk factor especially for cardiovascular diseases, which, despite important advances in treatment, still represent the most frequent cause of death in Germany. Hemodynamics, i.e. the spatiotemporal dynamics of blood flow, and the vascular condition are closely linked to blood pressure and are likewise of high clinical relevance, e.g. for the identification of circulatory disorders and unfavorable pressure distributions on the vessel wall. Innovations in measurement technology as well as in data analysis today offer new possibilities for the acquisition and assessment of blood pressure, hemodynamics and vascular condition [1], [2], [3], [4]. Methods: In a joint workshop series of the GMDS working group Medical Image and Signal Processing and the DGBMT expert committee Biosignals, we will present and discuss new approaches and solutions for measurement and analysis methods concerning blood pressure and blood flow as well as vascular condition. The first workshop, at the GMDS annual meeting, centers on image-based methods, while the second workshop, at the DGBMT annual meeting, focuses on biosignal-based methods. Current research results are presented and discussed. Several talks are planned, each with sufficient time for discussion. The following talks are planned (working titles): Sebastian Zaunseder: Video-based acquisition of blood pressure; Anja Hennemuth: A Visualization Toolkit for the Analysis of Aortic Anatomy and Pressure Distribution; Lars Krenkel: Numerical analysis of the rupture probability of cerebral aneurysms; Susanne Schnell: Measurement of blood flow and hemodynamic parameters with 4D flow MRI: possibilities and challenges. Results: The aim of the workshop is to identify innovative approaches and new methods for the qualitative and quantitative determination of hemodynamic parameters, and to have the community critically assess their suitability for clinical decision support. Discussion: The workshop contributes to central aspects of cardiovascular medicine.
It brings together expertise from different fields and bridges cardiology, medical informatics and medical engineering. Conclusion: Innovative technologies from medical engineering and informatics increasingly enable simple, spatially and temporally resolved acquisition and assessment of important information to support diagnosis and therapy monitoring. [1] Zaunseder S, Trumpp A, Wedekind D, Malberg H. Cardiovascular assessment by imaging photoplethysmography - a review. Biomed Tech (Berl). 2018 Oct 25;63(5):617–34. [2] Huellebrand M, Messroghli D, Tautz L, Kuehne T, Hennemuth A. An extensible software platform for interdisciplinary cardiovascular imaging research. Comput Methods Programs Biomed. 2020 Feb;184:105277. [3] Schmitter S, Adriany G, Waks M, Moeller S, Aristova M, Vali A, et al. Bilateral Multiband 4D Flow MRI of the Carotid Arteries at 7T. Magn Reson Med. 2020 Oct;84(4):1947–60. [4] Birkenmaier C, Krenkel L. Flow in Artificial Lungs. In: New Results in Numerical and Experimental Fluid Mechanics XIII. Contributions to the 22nd STAB/DGLR Symposium. Springer; 2021. KW - Bildbasierte Verfahren KW - Blutdruck KW - Hämodynamik KW - Blutgefäß KW - Bildgebendes Verfahren Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0183-21gmds0167 ER - TY - JOUR A1 - Maier, Johannes A1 - Weiherer, Maximilian A1 - Huber, Michaela A1 - Palm, Christoph T1 - Imitating human soft tissue on basis of a dual-material 3D print using a support-filled metamaterial to provide bimanual haptic for a hand surgery training system JF - Quantitative Imaging in Medicine and Surgery N2 - Background: Currently, it is common practice to use three-dimensional (3D) printers not only for rapid prototyping in industry, but also in the medical area to create medical applications for training inexperienced surgeons. In a clinical training simulator for minimally invasive bone drilling to fix hand fractures with Kirschner wires (K-wires), a 3D-printed hand phantom must be not only geometrically but also haptically correct. Due to a limited view during an operation, surgeons need to perfectly localize underlying risk structures only by feeling specific bony protrusions of the human hand. Methods: The goal of this experiment is to imitate human soft tissue with its haptics and elasticity for realistic hand phantom fabrication, using only a dual-material 3D printer and support-material-filled metamaterial between skin and bone. We present our workflow to generate lattice structures between hard bone and soft skin with iterative cube edge (CE) or cube face (CF) unit cells. Cuboid and finger-shaped sample prints with and without an inner hard bone in different lattice thicknesses are constructed and 3D printed. Results: The most elastic available rubber-like material is too firm to imitate soft tissue. By reducing the amount of rubber in the inner volume through support material (SUP), objects become significantly softer. Without metamaterial, the SUP can be shifted through the volume after disintegration, and the body thus loses its original shape. Although the CE design increases the elasticity, it cannot preserve the original shape. In contrast to CE, the CF design not only increases the elasticity but also guarantees a local confinement of the SUP. Therefore, the body retains its shape and internal bones remain in their intended place. Various unit cell sizes, lattice thickenings and skin thicknesses regulate the ratio of rubber material to SUP.
Test prints with a higher SUP and lower rubber material percentage appear softer, and vice versa. This was confirmed by an evaluation with expert surgeons. Subjects judged pure rubber-like material to be too firm, and samples filled only with SUP or with a lattice structure in CE design to be unsuitable for imitating tissue. 3D-printed finger samples in CF design were rated as realistic compared to the haptics of human tissue, with a well palpable bone structure. Conclusions: We developed a new dual-material 3D print technique to imitate soft tissue of the human hand with its haptic properties. Blowy SUP is trapped within a lattice structure to soften rubber-like 3D print material, which makes it possible to produce a realistic replica of human hand soft tissue. KW - Dual-material 3D printing KW - Hand surgery training KW - Metamaterial KW - Support material KW - Tissue-imitating hand phantom KW - Handchirurgie KW - 3D-Druck KW - Biomaterial KW - Lernprogramm Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:898-opus4-979 N1 - Corresponding author: Christoph Palm VL - 9 IS - 1 SP - 30 EP - 42 PB - AME Publishing Company ER - TY - GEN ED - Maier-Hein, Klaus H. ED - Deserno, Thomas M. ED - Handels, Heinz ED - Maier, Andreas ED - Palm, Christoph ED - Tolxdorff, Thomas T1 - Bildverarbeitung für die Medizin 2022 BT - Proceedings, German Workshop on Medical Image Computing, Heidelberg, June 26-28, 2022 T2 - Informatik aktuell N2 - The Bildverarbeitung für die Medizin (BVM) conference has been held at changing locations across Germany for well over 20 years. In terms of content, the BVM focuses on the computer-aided analysis of medical image data with a wide variety of application areas, e.g. in imaging, diagnostics, surgical planning, computer-assisted intervention and visualization. During this time there have been remarkable methodological developments and upheavals, for example in the field of machine learning, on which the BVM community has worked intensively. As a consequence, work related to deep learning now dominates the BVM. These developments have also contributed to establishing medical image processing at the interface between computer science and medicine as one of the key technologies for the digitization of the healthcare system. In addition to the presentation of current research results, mainly from the diverse Germany-wide BVM community, a central aspect of the BVM is the promotion of young scientists. The conference serves primarily doctoral and postdoctoral researchers, but also students with excellent bachelor's and master's theses, as a platform to present their work, to enter into professional discourse with the community, and to build networks with colleagues in the field. Despite the many conferences and congresses that are also relevant for medical image processing, the BVM has thus lost none of its importance and attractiveness. BVM 2022 again offers an attractive and high-class program. Out of 88 submissions, 24 talks, 33 poster contributions and one software demonstration were accepted via an anonymized reviewing process with three reviews each. Since, due to the strict Covid hygiene and distancing rules, unfortunately only very few classical poster contributions could be admitted, a new format is being implemented for the first time this year.
For this purpose, 13 further contributions were accepted as e-posters. The best works will again be honored with prizes this year. The workshop website can be found at https://www.bvm-workshop.org. The program is complemented by three invited talks: - Prof. Dr. Ullrich Köthe, Visual Learning Lab, Universität Heidelberg - Prof. Mihaela van der Schaar, University of Cambridge, UK - Prof. Dr. Stefanie Speidel, Translational Surgical Oncology, NCT Dresden Furthermore, three tutorials are offered ahead of the BVM: - Known Operator Learning and Hybrid Machine Learning in Medical Imaging: The Past, the Present and the Future (FAU Erlangen-Nürnberg) - Advanced Deep Learning (DKFZ Heidelberg) - Hands-On Medical Image Registration (Universität zu Lübeck) At this point, we would like to express our sincere thanks to everyone who contributed to the success of the workshop through the extensive preparations: the speakers of the invited talks, the authors of the contributions, the tutorial presenters, the industry representatives, the program committee, the professional societies, the members of the BVM organization team, and all members of the Division of Medical Image Computing at the German Cancer Research Center. We wish all participants of the BVM 2022 workshop exciting new contacts and inspiring impressions from the world of medical image processing. KW - Medical Image Computing KW - Machine Learning Y1 - 2022 SN - 978-3-658-36932-3 U6 - https://doi.org/10.1007/978-3-658-36932-3 PB - Springer Vieweg CY - Wiesbaden ER - TY - CHAP A1 - Weber Nunes, Danilo A1 - Hammer, Michael A1 - Hammer, Simone A1 - Uller, Wibke A1 - Palm, Christoph T1 - Classification of Vascular Malformations Based on T2 STIR Magnetic Resonance Imaging T2 - Bildverarbeitung für die Medizin 2022: Proceedings, German Workshop on Medical Image Computing, Heidelberg, June 26-28, 2022 N2 - Vascular malformations (VMs) are a rare condition. They can be categorized into high-flow and low-flow VMs, a categorization that is challenging for radiologists. In this work, a very heterogeneous set of MRI images with only rough annotations is used for classification with a convolutional neural network. The main focus is to describe the challenging data set and strategies to deal with such data in terms of preprocessing, annotation usage and choice of the network architecture. We achieved a classification result of 89.47% F1-score with a 3D ResNet 18. KW - Deep Learning KW - Magnetic Resonance Imaging KW - Vascular Malformations Y1 - 2022 U6 - https://doi.org/10.1007/978-3-658-36932-3_57 SP - 267 EP - 272 PB - Springer Vieweg CY - Wiesbaden ER - TY - JOUR A1 - Deserno, Thomas M. A1 - Handels, Heinz A1 - Maier-Hein, Klaus H. A1 - Mersmann, Sven A1 - Palm, Christoph A1 - Tolxdorff, Thomas A1 - Wagenknecht, Gudrun A1 - Wittenberg, Thomas T1 - Viewpoints on Medical Image Processing BT - From Science to Application JF - Current Medical Imaging Reviews N2 - Medical image processing provides core innovation for medical imaging. This paper focuses on recent developments from science to application, analyzing the past fifteen years of the proceedings of the German annual meeting on medical image processing (BVM).
Furthermore, some members of the program committee present their personal points of view: (i) multi-modality for imaging and diagnosis, (ii) analysis of diffusion-weighted imaging, (iii) model-based image analysis, (iv) registration of section images, (v) from images to information in digital endoscopy, and (vi) virtual reality and robotics. Medical imaging and medical image computing are seen as fields of rapid development with clear trends toward integrated applications in diagnostics, treatment planning and treatment. KW - Medical imaging KW - Image processing KW - Image analysis KW - Visualization KW - Multi-modal imaging KW - Diffusion-weighted imaging KW - Model-based imaging KW - Digital endoscopy KW - Bildgebendes Verfahren KW - Bildverarbeitung KW - Medizin Y1 - 2013 U6 - https://doi.org/10.2174/1573405611309020002 VL - 9 IS - 2 SP - 79 EP - 88 ER - TY - JOUR A1 - Noisser, Vivien A1 - Eigenberger, Andreas A1 - Weiherer, Maximilian A1 - Seitz, Stephan A1 - Prantl, Lukas A1 - Brébant, Vanessa T1 - Surgery of congenital breast asymmetry - which objective parameter influences the subjective satisfaction with long-term results JF - Archives of Gynecology and Obstetrics N2 - Purpose Congenital breast asymmetry is a serious gynecological malformation for affected patients. The condition affects young women in puberty and is associated with socio-esthetic handicap, depression, and psychosexual problems. Surgical treatment usually takes place early in the patient's lifetime, so a long-term sustainable solution is important. Although postoperative outcome has been evaluated in several studies before, this study is the first to analyze which objective parameters have the greatest influence on subjective satisfaction with long-term results. Methods Thirty-four patients diagnosed with congenital breast asymmetry who underwent either lipofilling or implant therapy between 2008 and 2019 were examined. On average, our collective comprised patients examined seven years after surgery. Data were mainly gathered through manual measurements, patient-reported outcome measures (BREAST-Q™), and breast volumetry based on 3D scans (Vectra® H2, Canfield Scientific). Results Among all analyzed parameters, only areolar diameter correlated significantly negatively with the patient's subjective satisfaction with the outcome. Regarding the subjective assessment of postoperative satisfaction with the similarity of the breasts, again the mean areolar diameter, but also the difference in areolar diameter and breast volume between the right and left breasts, correlated significantly negatively. Conclusion Areolar diameter was revealed to be a significant factor influencing subjective long-term satisfaction in breast asymmetry patients. Moreover, 3D volumetry proves to be an effective tool to substantiate subjective patient assessments. Our findings may lead to further improvements to surgical planning and will be expanded in further studies. KW - Congenital breast asymmetry KW - Poland syndrome KW - Lipofilling KW - Silicone implant KW - 3D volumetry Y1 - 2021 U6 - https://doi.org/10.1007/s00404-021-06218-0 PB - Springer Nature ER - TY - CHAP A1 - Palm, Christoph A1 - Pietrzyk, Uwe T1 - Time-Dependent Joint Probability Speed Function for Level-Set Segmentation of Rat-Brain Slices T2 - Proceedings of the SPIE Medical Imaging 6914: Image Processing 69143U N2 - The segmentation of rat brain slices suffers from illumination inhomogeneities and staining effects.
State-of-the-art level-set methods model slice and background with intensity mixture densities, defining the speed function as the difference between the respective probabilities. Nevertheless, the overlap of these distributions causes an inaccurate stopping at the slice border. In this work, we propose the characterisation of the border area with intensity pairs for inside and outside, estimating joint intensity probabilities. Method - In contrast to global object and background models, we focus on the object border characterised by a joint mixture density. This specifies the probability of the occurrence of an inside and an outside value in direct adjacency. These values are not known beforehand, because inside and outside depend on the level-set evolution and change over time. Therefore, the speed function is computed time-dependently at the position of the current zero level-set. Along this zero level-set curve, the inside and outside values are derived as means along the curve normal directed inside and outside the object. The advantage of the joint probability distribution is that it resolves the distribution overlaps, because these are assumed not to be located at the same border position. Results - The novel time-dependent joint probability based speed function is compared experimentally with single probability based speed functions. Two rat brains with about 40 slices are segmented and the results analysed using manual segmentations and the Tanimoto overlap measure. Improved results are recognised for both data sets. KW - Image segmentation KW - Brain KW - Visualization KW - Image processing KW - Medical imaging KW - Neuroimaging KW - Beryllium KW - Kernspintomografie KW - Histologie KW - Schnittdarstellung KW - Bildsegmentierung KW - Gehirn Y1 - 2008 U6 - https://doi.org/10.1117/12.770673 IS - 6914 SP - 69143U-1 EP - 69143U-8 ER - TY - GEN A1 - Maier, Johannes A1 - Weiherer, Maximilian A1 - Huber, Michaela A1 - Palm, Christoph ED - Handels, Heinz ED - Deserno, Thomas M. ED - Maier, Andreas ED - Maier-Hein, Klaus H. ED - Palm, Christoph ED - Tolxdorff, Thomas T1 - Abstract: Imitating Human Soft Tissue with Dual-Material 3D Printing T2 - Bildverarbeitung für die Medizin 2019, Algorithmen - Systeme - Anwendungen. Proceedings des Workshops vom 17. bis 19. März 2019 in Lübeck N2 - Currently, it is common practice to use three-dimensional (3D) printers not only for rapid prototyping in industry, but also in the medical area to create medical applications for training inexperienced surgeons. In a clinical training simulator for minimally invasive bone drilling to fix hand fractures with Kirschner wires (K-wires), a 3D-printed hand phantom must be not only geometrically but also haptically correct. Due to a limited view during an operation, surgeons need to perfectly localize underlying risk structures only by feeling specific bony protrusions of the human hand.
KW - Handchirurgie KW - 3D-Druck KW - Lernprogramm KW - HaptiVisT Y1 - 2019 SN - 978-3-658-25325-7 U6 - https://doi.org/10.1007/978-3-658-25326-4_48 SP - 218 PB - Springer Vieweg CY - Wiesbaden ER - TY - JOUR A1 - Hartmann, Robin A1 - Weiherer, Maximilian A1 - Schiltz, Daniel A1 - Seitz, Stephan A1 - Lotter, Luisa A1 - Anker, Alexandra A1 - Palm, Christoph A1 - Prantl, Lukas A1 - Brébant, Vanessa T1 - A Novel Method of Outcome Assessment in Breast Reconstruction Surgery: Comparison of Autologous and Alloplastic Techniques Using Three-Dimensional Surface Imaging JF - Aesthetic Plastic Surgery N2 - Background Breast reconstruction is an important coping tool for patients undergoing a mastectomy. There are numerous surgical techniques in breast reconstruction surgery (BRS). Regardless of the technique used, creating a symmetric outcome is crucial for patients and plastic surgeons. Three-dimensional surface imaging enables surgeons and patients to assess the outcome's symmetry in BRS. To discriminate between autologous and alloplastic techniques, we analyzed both techniques using objective optical computerized symmetry analysis. Software was developed that enables clinicians to assess optical breast symmetry using three-dimensional surface imaging. Methods Twenty-seven patients who had undergone autologous (n = 12) or alloplastic (n = 15) BRS received three-dimensional surface imaging. Anthropometric data were collected digitally using semiautomatic measurements and automatic measurements. Automatic measurements were taken using the newly developed software. To quantify symmetry, a Symmetry Index is proposed. Results Statistical analysis revealed that there is no difference in the outcome symmetry between the two groups (t test for independent samples; p = 0.48, two-tailed). Conclusion This study's findings provide a foundation for qualitative symmetry assessment in BRS using automatized digital anthropometry. In the present trial, no difference in the outcomes' optical symmetry was detected between autologous and alloplastic approaches. KW - Breast reconstruction KW - Breast symmetry KW - Digital anthropometry KW - Mammoplastik KW - Dreidimensionale Bildverarbeitung KW - Autogene Transplantation KW - Alloplastik Y1 - 2020 U6 - https://doi.org/10.1007/s00266-020-01749-4 VL - 44 SP - 1980 EP - 1987 PB - Springer CY - Heidelberg ER - TY - CHAP A1 - Weber, Joachim A1 - Doenitz, Christian A1 - Brawanski, Alexander A1 - Palm, Christoph T1 - Data-Parallel MRI Brain Segmentation in Clinical Use BT - Porting FSL-FAST v4 to GPGPUs T2 - Bildverarbeitung für die Medizin 2015; Algorithmen - Systeme - Anwendungen; Proceedings des Workshops vom 15. bis 17. März 2015 in Lübeck N2 - Structural MRI brain analysis and segmentation is a crucial part of the daily routine in neurosurgery for intervention planning. As an example, the free software FSL-FAST (FMRIB Software Library - FMRIB's Automated Segmentation Tool) in version 4 is used for segmentation of brain tissue types. To speed up the segmentation procedure by parallel execution, we transferred FSL-FAST to a General Purpose Graphics Processing Unit (GPGPU) using the Open Computing Language (OpenCL) [1]. The necessary steps for parallelization resulted in substantially different and less useful results. Therefore, the underlying methods were revised and adapted, yielding computational overhead. Nevertheless, we achieved a speed-up factor of 3.59 from CPU to GPGPU execution, while providing similarly useful or even better results.
KW - Brain Segmentation KW - Magnetic Resonance Imaging KW - Parallel Execution KW - Voxel Spacing KW - General Purpose Graphic Processing Unit KW - Kernspintomografie KW - Gehirn KW - Bildsegmentierung KW - Parallelverarbeitung Y1 - 2015 U6 - https://doi.org/10.1007/978-3-662-46224-9_67 SP - 389 EP - 394 PB - Springer CY - Berlin ER - TY - CHAP A1 - Schubert, Nicole A1 - Pietrzyk, Uwe A1 - Reißel, Martin A1 - Palm, Christoph T1 - Reduktion von Rissartefakten durch nicht-lineare Registrierung in histologischen Schnittbildern T2 - Bildverarbeitung für die Medizin 2009; Algorithmen - Systeme - Anwendungen; Proceedings des Workshops vom 22. bis 25. März 2009 in Heidelberg N2 - This work presents a method that reduces crack artifacts, which can occur in histological rat brain sections, by means of non-linear registration. To guide the optimization in the crack region, the curvature registration approach is extended by a metric based on the segmentation of the images. Registrations using a segmentation of the crack alone achieved better results than registrations using a segmentation of the entire brain section. Overall, a clear improvement is seen in the crack region, with the remaining reduced crack being attributable to the smoothness constraints of the regularizer. KW - Registrierung KW - Nichtlineare Optimierung KW - Bildsegmentierung KW - Gehirn KW - Schnittdarstellung Y1 - 2009 UR - http://sunsite.informatik.rwth-aachen.de/Publications/CEUR-WS/Vol-446/p410.pdf SP - 410 EP - 414 PB - Springer CY - Berlin ER - TY - GEN ED - Handels, Heinz ED - Deserno, Thomas M. ED - Maier, Andreas ED - Maier-Hein, Klaus H. ED - Palm, Christoph ED - Tolxdorff, Thomas T1 - Bildverarbeitung für die Medizin 2019 BT - Algorithmen – Systeme – Anwendungen. Proceedings des Workshops vom 17. bis 19. März 2019 in Lübeck N2 - In recent years, the workshop "Bildverarbeitung für die Medizin" has established itself through successful events. For 2019, the goal is again the presentation of current research results and the deepening of discussions between scientists, industry and users. The contributions in this volume - some of them in English - cover all areas of medical image processing, in particular imaging and acquisition, machine learning, image segmentation and image analysis, visualization and animation, time series analysis, computer-aided diagnosis, biomechanical modeling, validation and quality assurance, image processing in telemedicine, and much more. KW - Bildgebendes Verfahren KW - Computerunterstützte Medizin KW - Bildanalyse KW - Bilderkennung KW - Computerunterstützte Diagnose KW - Medizininformatik Y1 - 2019 SN - 978-3-658-25325-7 U6 - https://doi.org/10.1007/978-3-658-25326-4 SN - 1431-472X PB - Springer CY - Berlin ER - TY - JOUR A1 - Hartwig, Regine A1 - Berlet, Maximilian A1 - Czempiel, Tobias A1 - Fuchtmann, Jonas A1 - Rückert, Tobias A1 - Feussner, Hubertus A1 - Wilhelm, Dirk T1 - Bildbasierte Unterstützungsmethoden für die zukünftige Anwendung in der Chirurgie JF - Die Chirurgie N2 - Background: The development of assistive technologies will be of increasing importance in the coming years, and not only in surgery. Perception of the current situation is the basis of every autonomous action. Different sensor systems can be used for this purpose, with video-based systems showing particular potential.
Methods: Based on the literature and on our own research, central aspects of image-based support systems for surgery are presented, explaining their potential but also the limitations of the methods. Results: An established application is the phase detection of surgical interventions, for which surgical videos are analyzed by means of neural networks. Temporally supported and transformer-based analysis has recently improved the prediction results considerably. Robotic camera guidance systems also use image data in order to navigate the laparoscope autonomously in the future. To bring their reliability in line with the high demands of surgery, however, they must be complemented by additional information. A comparable multimodal approach has already been implemented for navigation and localization in laparoscopic interventions. For this purpose, video data are analyzed using various methods, and the results are fused with other sensor modalities. Discussion: Image-based support methods are already available for various tasks and represent an important aspect of the surgery of the future. To be used reliably and for autonomous functions, however, they will have to be embedded in multimodal approaches in order to provide the required safety. T2 - Image-based supportive measures for future application in surgery KW - Künstliche Intelligenz KW - Robotik KW - Kognitiver Operationssaal KW - Autonomie KW - Digitalisierung Y1 - 2022 U6 - https://doi.org/10.1007/s00104-022-01668-x VL - 93 SP - 956 EP - 965 PB - Springer ER - TY - JOUR A1 - Maier, Johannes A1 - Perret, Jerome A1 - Simon, Martina A1 - Schmitt-Rüth, Stephanie A1 - Wittenberg, Thomas A1 - Palm, Christoph T1 - Force-feedback assisted and virtual fixtures based K-wire drilling simulation JF - Computers in Biology and Medicine N2 - One common method to fix fractures of the human hand after an accident is osteosynthesis with Kirschner wires (K-wires) to stabilize the bone fragments. The insertion of K-wires is a delicate minimally invasive surgery, because surgeons operate almost without sight. Since realistic training methods are time-consuming, costly and insufficient, a virtual-reality (VR) based training system for the placement of K-wires was developed. As part of this, the current work deals with real-time bone drilling simulation using a haptic force-feedback device. To simulate the drilling, we introduce a virtual fixture based force-feedback drilling approach. By decomposing the drilling task into individual phases, each phase can be handled individually to perfectly control the drilling procedure. We report on the related finite state machine (FSM), describe the haptic feedback of each state, and explain how to avoid jerking of the haptic force-feedback during state transitions. The usage of the virtual fixture approach results in a good haptic performance and a stable drilling behavior. This was confirmed by 26 expert surgeons, who evaluated the virtual drilling on the simulator and rated it as very realistic. To make the system even more convincing, we determined real drilling feed rates through experimental pig bone drilling and transferred them to our system. Due to a constant simulation thread, we can guarantee a precise drilling motion.
Virtual fixture based force-feedback calculation is able to simulate force-feedback assisted bone drilling with high quality and thus has great potential for the development of medical applications. KW - Handchirurgie KW - Osteosynthese KW - Operationstechnik KW - Lernprogramm KW - Virtuelle Realität KW - Medical training system KW - Virtual fixtures KW - Virtual reality KW - Force-feedback haptic KW - Minimally invasive hand surgery KW - K-wire drilling Y1 - 2019 U6 - https://doi.org/10.1016/j.compbiomed.2019.103473 N1 - Corresponding author: Christoph Palm VL - 114 SP - 1 EP - 10 PB - Elsevier ER - TY - CHAP A1 - Eiben, Björn A1 - Kunz, Dietmar A1 - Pietrzyk, Uwe A1 - Palm, Christoph T1 - Level-Set-Segmentierung von Rattenhirn MRTs T2 - Bildverarbeitung für die Medizin 2009; Algorithmen - Systeme - Anwendungen; Proceedings des Workshops vom 22. bis 25. März 2009 in Heidelberg N2 - This work proposes the segmentation of brain tissue from rat head images using level-set methods. To this end, a two-dimensional, contrast-based approach is extended to a three-dimensional segmenter locally adapted to the image intensity. It is shown that this true 3D approach takes local image structures better into account. In particular, magnetic resonance (MR) images with global brightness gradients, e.g. caused by surface coils, can be segmented more reliably and without additional preprocessing steps in this way. The performance of the algorithm is demonstrated experimentally on three rat brain MR images. KW - Dreidimensionale Bildverarbeitung KW - Schnittdarstellung KW - Gehirn Y1 - 2009 UR - http://sunsite.informatik.rwth-aachen.de/Publications/CEUR-WS/Vol-446/p167.pdf SP - 167 EP - 171 PB - Springer CY - Berlin ER - TY - GEN ED - Palm, Christoph ED - Deserno, Thomas M. ED - Handels, Heinz ED - Maier, Andreas ED - Maier-Hein, Klaus H. ED - Tolxdorff, Thomas T1 - Bildverarbeitung für die Medizin 2021 BT - Proceedings, German Workshop on Medical Image Computing, Regensburg, March 7–9, 2021 N2 - In recent years, the workshop "Bildverarbeitung für die Medizin" has established itself through successful events. For 2021, the goal is again the presentation of current research results and the deepening of discussions between scientists, industry and users. The contributions in this volume - some of them in English - cover all areas of medical image processing, in particular imaging and acquisition, machine learning, image segmentation and image analysis, visualization and animation, time series analysis, computer-aided diagnosis, biomechanical modeling, validation and quality assurance, image processing in telemedicine, and much more. KW - Bildanalyse KW - Bildverarbeitung KW - Computerunterstützte Medizin KW - Deep Learning KW - Visualisierung Y1 - 2021 SN - 978-3-658-33197-9 U6 - https://doi.org/10.1007/978-3-658-33198-6 SN - 1431-472X PB - Springer Vieweg CY - Wiesbaden ER - TY - CHAP A1 - Palm, Christoph A1 - Neuschaefer-Rube, C. A1 - Lehmann, Thomas M. A1 - Spitzer, Klaus ED - Evers, H. ED - Glombitza, G. ED - Lehmann, Thomas M. ED - Meinzer, H.-P. T1 - Wissensbasierte Bewegungskompensation in aktiven Konturmodellen T2 - Bildverarbeitung für die Medizin N2 - An active contour model is employed for the analysis of lip movement sequences.
Problems are caused by the high speaking speed, which results in large object displacements that so far cannot be compensated by contour adaptation alone. In this contribution, the classical active contour models are extended by a pre-adjustment of the coarse contours, which makes an energy-based contour adaptation possible in the first place. The estimation of the displacement for the pre-adjustment is based on the gradient image and on a set of rules formulated in predicate logic, which contains assumptions and side conditions as a knowledge base. With these extensions, automated contour tracking of the lips becomes possible. KW - Aktives Konturmodell KW - Prädikatenlogik KW - Bewegungsschätzung Y1 - 1999 U6 - https://doi.org/10.1007/978-3-642-60125-5_2 SP - 8 EP - 12 PB - Springer CY - Berlin ER - TY - CHAP A1 - Pietrzyk, Uwe A1 - Palm, Christoph A1 - Beyer, Thomas T1 - Investigation of fusion strategies of multi-modality images T2 - IEEE Nuclear Science Symposium Conference Record N2 - Presenting images from different modalities seems to be a trivial task compared to the challenges of obtaining registered images as a prerequisite for image fusion. In combined tomographs like PET/CT, image registration is intrinsic. However, informative image fusion demands careful preparation owing to the large amount of information that is presented to the observer. In complex imaging situations, it is necessary to provide tools that are easy to handle and still powerful enough to help the observer discriminate important details from background patterns. We investigated several options for color tables applied to brain and non-brain images obtained with PET, MRI and CT. KW - Positron emission tomography KW - Biomedical imaging KW - Medical diagnostic imaging KW - Image fusion KW - Computed tomography KW - Image registration KW - Visualization KW - Table lookup KW - Humans Y1 - 2004 U6 - https://doi.org/10.1109/NSSMIC.2004.1462740 VL - 4 SP - 2399 EP - 2401 ER - TY - JOUR A1 - Matusch, Andreas A1 - Depboylu, Candan A1 - Palm, Christoph A1 - Wu, Bei A1 - Höglinger, Günter U. A1 - Schäfer, Martin K.-H. A1 - Becker, Johanna Sabine T1 - Cerebral bio-imaging of Cu, Fe, Zn and Mn in the MPTP mouse model of Parkinson's disease using laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) JF - Journal of the American Society for Mass Spectrometry N2 - Laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) has been established as a powerful technique for the determination of metal and nonmetal distributions within biological systems with high sensitivity. An imaging LA-ICP-MS technique for Fe, Cu, Zn and Mn was developed to produce large series of quantitative element maps in native brain sections of mice subchronically intoxicated with 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP) as a model of Parkinson's disease. Images were calibrated using matrix-matched laboratory standards. A software solution allowing a precise delineation of anatomical structures was implemented. Coronal brain sections crossing the striatum and the substantia nigra, respectively, were analyzed. Animals sacrificed 2 h, 7 d or 28 d after the last MPTP injection and controls were investigated. We observed significant decreases of Cu concentrations in the periventricular zone and the fascia dentata at 2 h and 7 d, and a recovery or overcompensation at 28 d, most pronounced in the rostral periventricular zone (+40%). In the cortex, Cu decreased slightly (−10%).
Fe increased in the interpeduncular nucleus (+40%) but not in the substantia nigra. This pattern is in line with a differential regulation of periventricular and parenchymal Cu and with the histochemical localization of Fe, and is congruent with regions of preferential MPTP binding described in the rodent brain. The LA-ICP-MS technique yielded valid and statistically robust results in the present study on 39 slices from 19 animals. Our findings underline the value of routine micro-local analytical techniques in the life sciences and affirm a role of Cu availability in Parkinson's disease. KW - Inductively Coupled Plasma Mass Spectrometry KW - Substantia Nigra KW - MPTP KW - Laser Ablation Inductively Coupled Plasma Mass Spectrometry KW - MPTP Treatment KW - ICP-Massenspektrometrie KW - Metalle KW - Gehirnkarte Y1 - 2010 U6 - https://doi.org/10.1016/j.jasms.2009.09.022 VL - 21 IS - 1 SP - 161 EP - 171 ER - TY - CHAP A1 - Palm, Christoph A1 - Siegmund, Heiko A1 - Semmelmann, Matthias A1 - Grafe, Claudia A1 - Evert, Matthias A1 - Schroeder, Josef A. T1 - Interactive Computer-assisted Approach for Evaluation of Ultrastructural Cilia Abnormalities T2 - Medical Imaging 2016: Computer-Aided Diagnosis, San Diego, California, United States, 27 February - 3 March, SPIE Proceedings 97853N, 2016, ISBN 9781510600201 N2 - Introduction – Diagnosis of abnormal cilia function is based on ultrastructural analysis of axoneme defects, especially the features of the inner and outer dynein arms, which are the motors of ciliary motility. Sub-optimal biopsy material as well as methodical and intrinsic electron microscopy factors make the evaluation of ciliary defects difficult. We present a computer-assisted approach based on state-of-the-art image analysis and object recognition methods, yielding a time-saving and efficient diagnosis of cilia dysfunction. Method – The presented approach is based on a pipeline of basic image processing methods such as smoothing, thresholding and ellipse fitting. However, the integration of application-specific knowledge results in robust segmentations even in cases of image artifacts. The method is built hierarchically, starting with the detection of cilia within the image, followed by the detection of nine doublets within each analyzable cilium, and ending with the detection of the dynein arms of each doublet. The process is concluded by a rough classification of the dynein arms as the basis for a computer-assisted diagnosis. Additionally, the interaction possibilities are designed in such a way that the results remain reproducible given the completion report. Results – A qualitative evaluation showed reasonable detection results for cilia, doublets and dynein arms. However, since a ground truth is missing, the variation of the computer-assisted diagnosis should be within the subjective bias of human diagnosticians. The results of a first quantitative evaluation with five human experts and six images with 12 analyzable cilia showed that, with default parameterization, 91.6% of the cilia and 98% of the doublets were found. The computer-assisted approach correctly rated 66% of the inner and outer dynein arms on which all human experts agreed. However, the quality of the dynein arm classification in particular may be improved in future work.
KW - Image analysis KW - Image processing KW - Computer aided diagnosis and therapy KW - Image classification KW - Image segmentation KW - Biopsy KW - Electron microscopy KW - Zilie KW - Ultrastruktur KW - Anomalie KW - Bildverarbeitung KW - Objekterkennung KW - Computerunterstütztes Verfahren Y1 - 2016 U6 - https://doi.org/10.1117/12.2214976 ER - TY - CHAP A1 - Palm, Christoph A1 - Penney, Graeme P. A1 - Crum, William R. A1 - Schnabel, Julia A. A1 - Pietrzyk, Uwe A1 - Hawkes, David J. T1 - Fusion of Rat Brain Histology and MRI using Weighted Multi-Image Mutual Information T2 - Proceedings of the SPIE Medical Imaging 6914: Image Processing 69140M N2 - Fusion of histology and MRI is frequently demanded in biomedical research to study in vitro tissue properties in an in vivo reference space. Distortions and artifacts caused by cutting and staining of histological slices, as well as differences in spatial resolution, make even the rigid fusion a difficult task. State-of-the-art methods start with a mono-modal restacking yielding a histological pseudo-3D volume. The 3D information of the MRI reference is considered subsequently. However, consistency of the histology volume and consistency with the corresponding MRI seem to be diametrically opposed goals. Therefore, we propose a novel fusion framework optimizing histology/histology and histology/MRI consistency at the same time, finding a balance between both goals. Method - Direct slice-to-slice correspondence, even in irregularly-spaced cutting sequences, is achieved by registration-based interpolation of the MRI. Introducing a weighted multi-image mutual information metric (WI), adjacent histology and corresponding MRI are taken into account at the same time. Therefore, the reconstruction of the histological volume as well as the fusion with the MRI are performed in a single step. Results - Based on two data sets with more than 110 single registrations in all, the results are evaluated quantitatively using Tanimoto overlap measures and qualitatively by showing the fused volumes. In comparison to other multi-image metrics, the reconstruction based on WI is significantly improved. We evaluated different parameter settings with emphasis on the weighting term steering the balance between intra- and inter-modality consistency. KW - Magnetic resonance imaging KW - Image registration KW - Brain KW - 3D image processing KW - Image fusion KW - In vitro testing KW - In vivo imaging KW - Kernspintomografie KW - Histologie KW - Schnittdarstellung KW - Registrierung KW - Datenfusion Y1 - 2008 U6 - https://doi.org/10.1117/12.770605 IS - 6914 SP - 69140M-1 EP - 69140M-9 ER - TY - CHAP A1 - Middel, Luise A1 - Palm, Christoph A1 - Erdt, Marius T1 - Synthesis of Medical Images Using GANs T2 - Uncertainty for safe utilization of machine learning in medical imaging and clinical image-based procedures. First International Workshop, UNSURE 2019, and 8th International Workshop, CLIP 2019, held in conjunction with MICCAI 2019, Shenzhen, China, October 17, 2019 N2 - The success of artificial intelligence in medicine depends on the availability of large amounts of high-quality training data. Sharing of medical image data, however, is often restricted by legal requirements such as doctor-patient confidentiality. Although there are publicly available medical datasets, their quality and quantity are often low. Moreover, datasets are often imbalanced, represent only a fraction of the images generated in hospitals or clinics, and can thus usually only be used as training data for specific problems.
The introduction of generative adversarial networks (GANs) provides a means of generating artificial images by training two convolutional networks. This paper proposes a method which uses GANs trained on medical images in order to generate a large number of artificial images that could be used to train other artificial intelligence algorithms. This work is a first step towards alleviating data privacy concerns and being able to publicly share data that still contains a substantial amount of the information in the original private data. The method has been evaluated on several public datasets, with quantitative and qualitative tests showing promising results. KW - Neuronale Netze KW - Deep Learning KW - Generative adversarial networks KW - Machine Learning KW - Artificial Intelligence KW - Data privacy KW - Bilderzeugung KW - Datenschutz Y1 - 2019 SN - 978-3-030-32688-3 U6 - https://doi.org/10.1007/978-3-030-32689-0_13 SN - 0302-9743 SP - 125 EP - 134 PB - Springer Nature CY - Cham ER - TY - CHAP A1 - Franz, Daniela A1 - Dreher, Maria A1 - Prinzen, Martin A1 - Teßmann, Matthias A1 - Palm, Christoph A1 - Katzky, Uwe A1 - Perret, Jerome A1 - Hofer, Mathias A1 - Wittenberg, Thomas T1 - CT-basiertes virtuelles Fräsen am Felsenbein BT - Bild- und haptischen Wiederholfrequenzen bei unterschiedlichen Rendering Methoden T2 - Bildverarbeitung für die Medizin 2018; Algorithmen - Systeme - Anwendungen. Proceedings des Workshops vom 11. bis 13. März 2018 in Erlangen N2 - As part of the development of a haptic-visual training system for milling the petrous bone, a haptic arm and an autostereoscopic 3D monitor are used to enable surgeons to manipulate bony structures virtually in the context of a so-called serious game. Among other things, resident physicians should be able to practise milling the petrous bone for the surgical insertion of a cochlear implant as part of their training. The visualization of the virtual milling must therefore be modelled, implemented and evaluated in real time and as realistically as possible. We use different raycasting methods with linear and nearest-neighbour interpolation and compare the visual quality and the frame rates of the methods. All compared methods are real-time capable but differ in their visual quality. KW - Felsenbein KW - Fräsen KW - Virtualisierung KW - Computertomographie KW - Computerassistierte Chirurgie Y1 - 2018 SN - 978-3-662-56537-7 U6 - https://doi.org/10.1007/978-3-662-56537-7_51 SP - 176 EP - 181 PB - Springer CY - Berlin ER - TY - JOUR A1 - Ebigbo, Alanna A1 - Palm, Christoph A1 - Messmann, Helmut T1 - Barrett esophagus: What to expect from Artificial Intelligence? JF - Best Practice & Research Clinical Gastroenterology N2 - The evaluation and assessment of Barrett’s esophagus is challenging for both expert and nonexpert endoscopists. However, the early diagnosis of cancer in Barrett’s esophagus is crucial for its prognosis, and could save costs. Pre-clinical and clinical studies on the application of Artificial Intelligence (AI) in Barrett’s esophagus have shown promising results. In this review, we focus on the current challenges and future perspectives of implementing AI systems in the management of patients with Barrett’s esophagus.
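The BVM record above compares raycasting with linear and nearest-neighbour interpolation. The sketch below contrasts the two sampling schemes on a toy volume; it is purely illustrative and not the training system's rendering code.

```python
# Sketch of the two volume-sampling schemes compared above: nearest-neighbour
# versus (tri)linear interpolation at a fractional sample point along a ray.
import numpy as np

def sample_nearest(vol, p):
    i, j, k = np.round(p).astype(int)
    return vol[i, j, k]

def sample_trilinear(vol, p):
    f = np.floor(p).astype(int)
    t = p - f                                 # fractional offsets in [0, 1)
    i, j, k = f
    c = 0.0
    for di in (0, 1):
        for dj in (0, 1):
            for dk in (0, 1):
                w = ((1 - t[0]) if di == 0 else t[0]) * \
                    ((1 - t[1]) if dj == 0 else t[1]) * \
                    ((1 - t[2]) if dk == 0 else t[2])
                c += w * vol[i + di, j + dj, k + dk]
    return c

vol = np.random.rand(64, 64, 64)              # stand-in for a CT volume
p = np.array([10.3, 20.7, 30.5])              # sample point along a ray
print(sample_nearest(vol, p), sample_trilinear(vol, p))
```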
KW - Deep Learning KW - Künstliche Intelligenz KW - Computerunterstützte Medizin KW - Barrett KW - Adenocarcinoma KW - Artificial intelligence KW - Deep learning KW - Convolutional neural networks Y1 - 2021 U6 - https://doi.org/10.1016/j.bpg.2021.101726 SN - 1521-6918 VL - 52-53 IS - June-August PB - Elsevier ER - TY - JOUR A1 - Maier, Andreas A1 - Deserno, Thomas M. A1 - Handels, Heinz A1 - Maier-Hein, Klaus H. A1 - Palm, Christoph A1 - Tolxdorff, Thomas T1 - Guest editorial of the IJCARS - BVM 2018 special issue JF - International Journal of Computer Assisted Radiology and Surgery KW - Medical Image Computing Y1 - 2019 U6 - https://doi.org/10.1007/s11548-018-01902-0 VL - 14 SP - 1 EP - 2 PB - Springer ER - TY - JOUR A1 - Arribas, Julia A1 - Antonelli, Giulio A1 - Frazzoni, Leonardo A1 - Fuccio, Lorenzo A1 - Ebigbo, Alanna A1 - van der Sommen, Fons A1 - Ghatwary, Noha A1 - Palm, Christoph A1 - Coimbra, Miguel A1 - Renna, Francesco A1 - Bergman, Jacques J.G.H.M. A1 - Sharma, Prateek A1 - Messmann, Helmut A1 - Hassan, Cesare A1 - Dinis-Ribeiro, Mario J. T1 - Standalone performance of artificial intelligence for upper GI neoplasia: a meta-analysis JF - Gut N2 - Objective: Artificial intelligence (AI) may reduce the number of underdiagnosed or overlooked upper GI (UGI) neoplastic and preneoplastic conditions, which result from their subtle appearance and low disease prevalence. Only disease-specific AI performances have been reported, generating uncertainty about its clinical value. Design: We searched PubMed, Embase and Scopus until July 2020, for studies on the diagnostic performance of AI in detection and characterisation of UGI lesions. Primary outcomes were pooled diagnostic accuracy, sensitivity and specificity of AI. Secondary outcomes were pooled positive (PPV) and negative (NPV) predictive values. We calculated pooled proportion rates (%), designed summary receiver operating characteristic curves with respective areas under the curves (AUCs) and performed metaregression and sensitivity analysis. Results: Overall, 19 studies on detection of oesophageal squamous cell neoplasia (ESCN) or Barrett's esophagus-related neoplasia (BERN) or gastric adenocarcinoma (GCA) were included with 218, 445, 453 patients and 7976, 2340, 13 562 images, respectively. AI-sensitivity/specificity/PPV/NPV/positive likelihood ratio/negative likelihood ratio for UGI neoplasia detection were 90% (CI 85% to 94%)/89% (CI 85% to 92%)/87% (CI 83% to 91%)/91% (CI 87% to 94%)/8.2 (CI 5.7 to 11.7)/0.111 (CI 0.071 to 0.175), respectively, with an overall AUC of 0.95 (CI 0.93 to 0.97). No difference in AI performance across ESCN, BERN and GCA was found, the AUCs being 0.94 (CI 0.52 to 0.99), 0.96 (CI 0.95 to 0.98) and 0.93 (CI 0.83 to 0.99), respectively. Overall, study quality was low, with a high risk of selection bias. No significant publication bias was found. Conclusion: We found a high overall AI accuracy for the diagnosis of any neoplastic lesion of the UGI tract that was independent of the underlying condition. This may be expected to substantially reduce the miss rate of precancerous lesions and early cancer when implemented in clinical practice.
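The meta-analysis above pools per-study sensitivity, specificity, predictive values and likelihood ratios. The sketch below shows only the underlying 2x2-table definitions on made-up counts; the actual pooling used meta-analytic models that are not reproduced here.

```python
# Per-study diagnostic metrics of the kind pooled in the meta-analysis above
# (illustrative 2x2-table definitions only; counts are invented).
def diagnostic_metrics(tp, fp, fn, tn):
    sens = tp / (tp + fn)                 # sensitivity (recall)
    spec = tn / (tn + fp)                 # specificity
    ppv = tp / (tp + fp)                  # positive predictive value
    npv = tn / (tn + fn)                  # negative predictive value
    lr_pos = sens / (1 - spec)            # positive likelihood ratio
    lr_neg = (1 - sens) / spec            # negative likelihood ratio
    return dict(sensitivity=sens, specificity=spec,
                ppv=ppv, npv=npv, lr_pos=lr_pos, lr_neg=lr_neg)

print(diagnostic_metrics(tp=90, fp=11, fn=10, tn=89))
```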
KW - Artificial Intelligence Y1 - 2021 U6 - https://doi.org/10.1136/gutjnl-2020-321922 VL - 70 IS - 8 SP - 1458 EP - 1468 PB - BMJ CY - London ER - TY - INPR A1 - Weiherer, Maximilian A1 - Eigenberger, Andreas A1 - Brébant, Vanessa A1 - Prantl, Lukas A1 - Palm, Christoph T1 - Learning the shape of female breasts: an open-access 3D statistical shape model of the female breast built from 110 breast scans N2 - We present the Regensburg Breast Shape Model (RBSM) – a 3D statistical shape model of the female breast built from 110 breast scans, and the first ever publicly available. Together with the model, a fully automated, pairwise surface registration pipeline used to establish correspondence among 3D breast scans is introduced. Our method is computationally efficient and requires only four landmarks to guide the registration process. In order to weaken the strong coupling between breast and thorax, we propose to minimize the variance outside the breast region as much as possible. To achieve this goal, a novel concept called breast probability masks (BPMs) is introduced. A BPM assigns probabilities to each point of a 3D breast scan, indicating how likely it is that a particular point belongs to the breast area. During registration, we use BPMs to align the template to the target as accurately as possible inside the breast region and only roughly outside. This simple yet effective strategy significantly reduces the unwanted variance outside the breast region, leading to better statistical shape models in which breast shapes are quite well decoupled from the thorax. The RBSM is thus able to produce a variety of different breast shapes as independently as possible from the shape of the thorax. Our systematic experimental evaluation reveals a generalization ability of 0.17 mm and a specificity of 2.8 mm for the RBSM. Ultimately, our model is seen as a first step towards combining physically motivated deformable models of the breast and statistical approaches in order to enable more realistic surgical outcome simulation. KW - Statistical shape model KW - Surgical outcome simulation KW - 3D breast scan registration KW - Non-rigid surface registration KW - Breast imaging Y1 - 2021 ER - TY - JOUR A1 - Palm, Christoph A1 - Axer, Markus A1 - Gräßel, David A1 - Dammers, Jürgen A1 - Lindemeyer, Johannes A1 - Zilles, Karl A1 - Pietrzyk, Uwe A1 - Amunts, Katrin T1 - Towards ultra-high resolution fibre tract mapping of the human brain BT - registration of polarised light images and reorientation of fibre vectors JF - Frontiers in Human Neuroscience N2 - Polarised light imaging (PLI) utilises the birefringence of the myelin sheaths in order to visualise the orientation of nerve fibres in microtome sections of adult human post-mortem brains at ultra-high spatial resolution. The preparation of post-mortem brains for PLI involves fixation, freezing and cutting into 100-μm-thick sections. Hence, geometrical distortions of histological sections are inevitable and have to be removed for 3D reconstruction and subsequent fibre tracking. We here present a processing pipeline for 3D reconstruction of these sections using PLI-derived multimodal images of post-mortem brains. Blockface images of the brains were obtained during cutting; they serve as reference data for alignment and elimination of distortion artefacts. In addition to the spatial image transformation, fibre orientation vectors were reoriented using the transformation fields, which consider both affine and subsequent non-linear registration.
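The PLI record above reorients fibre orientation vectors with the estimated transformation fields. One standard way to do this, sketched below under the assumption of a locally affine transformation, is to apply the local Jacobian and renormalize; the matrix values are illustrative, not from the published pipeline.

```python
# Sketch of reorienting a fibre orientation vector with the local Jacobian of a
# spatial transformation (illustrative; the published pipeline combines affine
# and non-linear fields, which this toy example does not reproduce).
import numpy as np

def reorient(vec, jacobian):
    v = jacobian @ vec
    return v / np.linalg.norm(v)     # keep the orientation vector unit length

J = np.array([[1.10, 0.10, 0.00],   # assumed local affine part of a deformation
              [0.00, 0.90, 0.00],
              [0.00, 0.05, 1.00]])
print(reorient(np.array([0.0, 0.0, 1.0]), J))
```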
The application of this registration and reorientation approach results in a smooth fibre vector field, which reflects brain morphology. PLI combined with 3D reconstruction and fibre tracking is a powerful tool for human brain mapping. It can also serve as an independent method for evaluating in vivo fibre tractography. KW - Bildgebendes Verfahren KW - Dreidimensionale Bildverarbeitung KW - Polarisiertes Licht KW - Gehirnkarte Y1 - 2010 U6 - https://doi.org/10.3389/neuro.09.009.2010 VL - 4 ER - TY - CHAP A1 - Weiherer, Maximilian A1 - Zorn, Martin A1 - Wittenberg, Thomas A1 - Palm, Christoph ED - Tolxdorff, Thomas ED - Deserno, Thomas M. ED - Handels, Heinz ED - Maier, Andreas ED - Maier-Hein, Klaus H. ED - Palm, Christoph T1 - Retrospective Color Shading Correction for Endoscopic Images T2 - Bildverarbeitung für die Medizin 2020. Algorithmen - Systeme - Anwendungen. Proceedings des Workshops vom 15. bis 17. März 2020 in Berlin N2 - In this paper, we address the problem of retrospective color shading correction. An extension of the established gray-level shading correction algorithm based on signal envelope (SE) estimation to color images is developed using principal color components. Compared to the probably most general shading correction algorithm based on entropy minimization, SE estimation does not need any computationally expensive optimization and thus can be implemented more efficiently. We tested our new shading correction scheme on artificial as well as real endoscopic images and observed promising results. Additionally, an in-depth analysis of the stop criterion used in the SE estimation algorithm is provided, leading to the conclusion that a fixed, user-defined threshold is generally not feasible. Thus, we present new ideas on how to develop a non-parametric version of the SE estimation algorithm using entropy. KW - Endoskopie KW - Bildgebendes Verfahren KW - Farbenraum KW - Graustufe Y1 - 2020 SN - 978-3-658-29266-9 U6 - https://doi.org/10.1007/978-3-658-29267-6 SP - 14 EP - 19 PB - Springer Vieweg CY - Wiesbaden ER - TY - CHAP A1 - Szalo, Alexander Eduard A1 - Zehner, Alexander A1 - Palm, Christoph T1 - GraphMIC: Medizinische Bildverarbeitung in der Lehre T2 - Bildverarbeitung für die Medizin 2015; Algorithmen - Systeme - Anwendungen; Proceedings des Workshops vom 15. bis 17. März 2015 in Lübeck N2 - Teaching medical image processing conveys knowledge across a broad spectrum of methods. Beyond the fundamentals of the individual techniques, students should develop a feeling for a suitable order of execution and its effect on medical image data. The complexity of the methods requires advanced programming skills, so that even simple operations entail considerable programming effort. The GraphMIC software provides image processing operations in the form of interactive nodes and allows complex processing sequences to be arranged, parameterized and executed in a graph. By focusing on the design of a pipeline rather than on language- and framework-specific implementation details, fundamental principles of image processing can be learned in an illustrative way. In this contribution, we compare visual programming with GraphMIC to the native implementation of equivalent functions. The application, developed in C++, is based on Qt, ITK, OpenCV, VTK and MITK.
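GraphMIC, described in the record above, represents image processing operations as connectable nodes executed in a graph. The toy sketch below illustrates only this dataflow idea; it is unrelated to the actual Qt/ITK-based implementation, and all node names and operations are invented.

```python
# Toy dataflow graph in the spirit of node-based visual programming: each node
# wraps an operation, pulls results from its upstream nodes and runs in order.
class Node:
    def __init__(self, name, fn):
        self.name, self.fn, self.inputs = name, fn, []

    def connect(self, upstream):
        self.inputs.append(upstream)
        return self

    def run(self):
        args = [n.run() for n in self.inputs]   # evaluate upstream nodes first
        out = self.fn(*args)
        print(f"executed node '{self.name}'")
        return out

src = Node("load", lambda: [0, 3, 1, 4])
smooth = Node("smooth", lambda xs: [(a + b) / 2 for a, b in zip(xs, xs[1:])]).connect(src)
thresh = Node("threshold", lambda xs: [x > 2 for x in xs]).connect(smooth)
print(thresh.run())
```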
KW - Bildverarbeitung KW - Medizin KW - Hochschuldidaktik Y1 - 2015 U6 - https://doi.org/10.1007/978-3-662-46224-9_68 SP - 395 EP - 400 PB - Springer CY - Berlin ER - TY - CHAP A1 - Palm, Christoph A1 - Metzler, V. A1 - Moham, B. A1 - Dieker, O. A1 - Lehmann, Thomas M. A1 - Spitzer, Klaus ED - Evers, H. ED - Glombitza, G. ED - Lehmann, Thomas M. ED - Meinzer, H.-P. T1 - Co-Occurrence Matrizen zur Texturklassifikation in Vektorbildern T2 - Bildverarbeitung für die Medizin N2 - Statistical properties of natural gray-level textures are modelled with co-occurrence matrices based on second-order gray-level statistics. The matrix then gives the a priori probabilities of all gray-level pairs. Since multispectral images are increasingly analysed in medical image processing, this well-known concept is extended here to arbitrary vector-valued images. The information available for texture classification can thus be exploited in full. The approach is particularly suited to the detection of color textures, since value pairs from different spectral bands can be evaluated. Likewise, the method can also contribute to improved texture recognition in the multiscale decomposition of intensity images. The patterns arising in the matrices then allow conclusions to be drawn about the textures of the image via the extraction of suitable texture descriptors. KW - Texturerkennung KW - Vektorbilder KW - Multispektralbilder KW - Multiskalenbilder KW - Klassifikation Y1 - 1999 U6 - https://doi.org/10.1007/978-3-642-60125-5_69 SP - 367 EP - 371 PB - Springer CY - Berlin ER - TY - CHAP A1 - Wöhl, Rebecca A1 - Huber, Michaela A1 - Loibl, Markus A1 - Riebschläger, Birgit A1 - Nerlich, Michael A1 - Palm, Christoph T1 - The Impact of Semi-Automated Segmentation and 3D Analysis on Testing New Osteosynthesis Material T2 - Bildverarbeitung für die Medizin 2017; Algorithmen - Systeme - Anwendungen; Proceedings des Workshops vom 12. bis 14. März 2017 in Heidelberg N2 - A new protocol for testing osteosynthesis material postoperatively, combining semi-automated segmentation and 3D analysis of surface meshes, is proposed. By various steps of transformation and measuring, objective data can be collected. In this study, the specifications of a locking plate used for mediocarpal arthrodesis of the wrist were examined. The results show that union of the lunate, triquetrum, hamate and capitate was achieved and that the plate is comparable to coexisting arthrodesis systems. Additionally, it was shown that the complications detected correlate with the clinical outcome. In summary, this protocol is considered beneficial and should be taken into account in further studies. KW - Osteosynthese KW - Implantatwerkstoff KW - Materialprüfung KW - Bildsegmentierung KW - Dreidimensionale Bildverarbeitung Y1 - 2017 U6 - https://doi.org/10.1007/978-3-662-54345-0_30 SP - 122 EP - 127 PB - Springer CY - Berlin ER - TY - CHAP A1 - Palm, Christoph A1 - Scholl, Ingrid A1 - Lehmann, Thomas M. A1 - Spitzer, Klaus ED - Lehmann, Thomas M. ED - Metzler, V. ED - Spitzer, Klaus ED - Tolxdorff, Thomas T1 - Quantitative Farbmessung in laryngoskopischen Bildern T2 - Bildverarbeitung für die Medizin N2 - Quantitative color measurements are intended to support the diagnosis of laryngeal diseases. The color impression is influenced not only by the reflection properties of the tissue but also by the color of the light source used.
The color constancy algorithm presented here is based on the dichromatic reflection model and provides a pixelwise separation of the color image into its two color components. The body color corresponds to the tissue-specific reflection, the surface color to the radiation of the light source. KW - Farbkonstanz KW - quantitative Farbmessung KW - dichromatisches Reflexionsmodell KW - Laryngoskopie Y1 - 1998 U6 - https://doi.org/10.1007/978-3-642-58775-7_81 SP - 412 EP - 416 PB - Springer CY - Berlin ER - TY - GEN ED - Lehmann, Thomas M. ED - Palm, Christoph ED - Spitzer, Klaus ED - Tolxdorff, Thomas T1 - Advances in Quantitative Laryngoscopy, Voice and Speech Research, Procs. 3rd International Workshop, RWTH Aachen Y1 - 1998 CY - Aachen ER - TY - THES A1 - Maier, Johannes T1 - Entwicklung eines Haptisch und Visuell unterstützten Trainingssystems (HaptiVisT) für komplexe Knochenbohrungen in der minimalinvasiven Handchirurgie N2 - A common surgical method for correcting fractures of the human hand after an accident is osteosynthesis with so-called Kirschner wires (K-wires) to stabilize bone fragments. Inserting these long, thin and sharp wires by manual drilling is a complex minimally invasive operation in which a surgeon works almost without visual orientation and only through a small opening in the patient's skin. As guidance for the optimal placement of the K-wires, the surgeon has only a two-dimensional (2D) X-ray view and the palpation of bony prominences on the human hand in order to avoid injuries to risk structures (nerves, vessels, etc.) embedded in the soft tissue of the hand. A thorough theoretical and practical education of young surgeons is therefore necessary for the safe and error-free execution of a K-wire osteosynthesis. Since traditional training methods are time-consuming, costly, ethically questionable and insufficiently realistic, this thesis develops an innovative haptically and visually assisted training simulator (HaptiVisT) based on virtual reality (VR) for the placement of K-wires, which primarily supports hand surgeons in need of practice and refinement in learning the drilling procedure in a realistic but virtual environment. In the HaptiVisT prototype setup, real patient data in the form of segmented volume data from computed tomography (CT) and magnetic resonance imaging (MRI) are visualized in virtual three-dimensional (3D) space on a 3D monitor and, for intuitive bimanual haptics, combined with both a force-feedback device for the drilling process and a 3D-printed, optically tracked phantom hand. This thesis first describes all hardware devices used, the C++ software environment built on multithreading (concurrent execution of several instruction sequences within one process), and the visualization based on surface and volume rendering. The collision detection between drill and bone in virtual space is divided into two separate events: collisions between objects as a whole (simulation of the collision over the entire object surface) and collisions between a K-wire tip and the bone volume for the removal of small volume elements (voxels).
The core of the training system is a real-time drilling simulation that divides the entire drilling process into a finite number of logical sub-processes and combines these states in a finite state machine (FSM). The force feedback during drilling is computed with so-called "virtual fixtures" (abstract sensory information) and transmitted to the user via a haptic arm. To make the simulation match reality, the real drilling speed through cortical bone is determined with the aid of an experimental setup. Subsequently, a level concept and all drilling support tools available in the system, such as haptic corridors as gold standard or an X-ray image simulation, are presented. With these, selected surgical cases can be divided into levels of varying difficulty and the surgical procedure can be assessed qualitatively. The 3D printing of a phantom hand (a realistic replica of a patient's hand) with realistic haptic properties for palpating bony prominences is realized via a metamaterial-based approach (rearrangement of the base material through an artificially imposed, repeating structure), since the 3D printing material currently available on the market is too hard for printing human soft tissue. The real-time tracking of the phantom hand is based on a marker in the form of a dodecahedron (a solid with twelve faces) optically tracked with a stereo camera. Finally, the overall HaptiVisT system is examined and evaluated in detail in three expert evaluations and the 3D printing of the phantom hand in two. The HaptiVisT system is intended as a necessary complement in the form of the first and, worldwide, only functional, compact prototype for virtual K-wire osteosyntheses with haptic force feedback, which in future will enable surgeons in training and continuing education at clinics or training centres to train free of risk and independent of time and place. The core elements of this thesis are: • Stereoscopic 3D display of real patient data. • Bimanual haptics from haptic force feedback of drilling combined with an optically tracked, haptically correct and 3D-printed phantom hand. • Reliable collision detection between virtual objects as the basis for force feedback and bone removal. • Real-time drilling simulation by reducing the drilling process to logical drilling sub-processes combined with a force computation based on virtual fixtures. Haptically correct phantoms are highly relevant, especially in medical training, and the force feedback computation is based for the first time on the performant and stable simulation of drilling sub-processes using virtual fixtures. The prototype is rated consistently positively by experts and, in their assessment, offers high added value for surgical training. Future work could statistically validate the learning effect of the HaptiVisT training system in sound evaluations with young medical students in the presence of a control group. If this learning effect is confirmed, a spin-off as an independent company and further development of the prototype with extension to further surgical fields such as knee or hip surgery are conceivable.
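The thesis above reduces the drilling process to logical sub-processes managed by a finite state machine (FSM). A toy sketch of such an FSM follows; the states, events and transition table are illustrative assumptions, not those of the HaptiVisT system.

```python
# Toy finite state machine for drilling sub-processes, in the spirit of the FSM
# described above (phases, events and transitions are invented for illustration).
from enum import Enum, auto

class Phase(Enum):
    APPROACH = auto()
    CONTACT = auto()
    DRILLING = auto()
    BREAKTHROUGH = auto()

# (current phase, event) -> next phase; unknown pairs keep the current phase
TRANSITIONS = {
    (Phase.APPROACH, "touch_bone"): Phase.CONTACT,
    (Phase.CONTACT, "spindle_on"): Phase.DRILLING,
    (Phase.DRILLING, "exit_far_cortex"): Phase.BREAKTHROUGH,
    (Phase.DRILLING, "retract"): Phase.CONTACT,
}

phase = Phase.APPROACH
for event in ("touch_bone", "spindle_on", "exit_far_cortex"):
    phase = TRANSITIONS.get((phase, event), phase)
    print(event, "->", phase.name)
```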
With the aid of automatic segmentation, fractures requiring acute treatment could in future be reproduced in the system, rehearsed ahead of the actual operation, and subsequently performed without complications and with reduced operating time. KW - Medizinisches Trainingssystem KW - Virtual Reality KW - Minimalinvasive Handchirurgie Y1 - 2020 SN - 978-3-8440-7547-2 N1 - Titel verleihende Institution: Universität Regensburg PB - Shaker CY - Düren ER - TY - JOUR A1 - Wöhl, Rebecca A1 - Maier, Johannes A1 - Gehmert, Sebastian A1 - Palm, Christoph A1 - Riebschläger, Birgit A1 - Nerlich, Michael A1 - Huber, Michaela T1 - 3D Analysis of Osteosyntheses Material using semi-automated CT Segmentation BT - a case series of a 4 corner fusion plate JF - BMC Musculoskeletal Disorders N2 - Background: Scaphoidectomy and midcarpal fusion can be performed using traditional fixation methods like K-wires, staples, screws or different dorsal (non)locking arthrodesis systems. The aim of this study is to test the Aptus four-corner locking plate and to compare the clinical findings to the data revealed by CT scans and semi-automated segmentation. Methods: This is a retrospective review of eleven patients suffering from scapholunate advanced collapse (SLAC) or scaphoid non-union advanced collapse (SNAC) wrist, who received a four-corner fusion between August 2011 and July 2014. The clinical evaluation consisted of measuring the range of motion (ROM), strength and pain on a visual analogue scale (VAS). Additionally, the Disabilities of the Arm, Shoulder and Hand (QuickDASH) and the Mayo Wrist Score were assessed. A computerized tomography (CT) of the wrist was obtained six weeks postoperatively. After semi-automated segmentation of the CT scans, the models were post-processed and surveyed. Results: During the six-month follow-up, the mean range of motion (ROM) of the operated wrist was 60°, consisting of 30° extension and 30° flexion. While pain levels decreased significantly, 54% of grip strength and 89% of pinch strength were preserved compared to the contralateral healthy wrist. Union could be detected in all CT scans of the wrist. While X-ray pictures obtained postoperatively revealed no pathology, two user-related technical complications were found through the 3D analysis, which correlated with the clinical outcome. Conclusion: Through semi-automated segmentation and 3D analysis it has been shown that the plate design lives up to the manufacturer's promises. Overall, this case series confirmed that the plate can compete with the coexisting techniques concerning clinical outcome, union and complication rate. KW - Handchirurgie KW - Osteosynthese KW - Arthrodese KW - 4FC KW - SLAC wrist KW - SNAC wrist KW - Semi-automated segmentation KW - 3D analysis KW - Computertomographie KW - Bildsegmentierung KW - Dreidimensionale Bildverarbeitung Y1 - 2018 U6 - https://doi.org/10.1186/s12891-018-1975-0 VL - 19 SP - 1 EP - 8 PB - Springer Nature ER - TY - JOUR A1 - Dehnhardt, Markus A1 - Palm, Christoph A1 - Vieten, Andrea A1 - Bauer, Andreas A1 - Pietrzyk, Uwe T1 - Quantifying the A1AR distribution in peritumoral zones around experimental F98 and C6 rat brain tumours JF - Journal of Neuro-Oncology N2 - Quantification of growth in experimental F98 and C6 rat brain tumours was performed on 51 rat brains, 17 of which have been further assessed by 3D tumour reconstruction.
Brains were cryosliced and radio-labelled with a ligand of the peripheral-type benzodiazepine receptor (pBR), 3H-Pk11195 [(1-(2-chlorophenyl)-N-methyl-N-(1-methyl-propylene)-3-isoquinoline-carboxamide)], by receptor autoradiography. Manually segmented and automatically registered tumours have been 3D-reconstructed for volumetric comparison on the basis of 3H-Pk11195-based tumour recognition. Furthermore, automatically computed areas of a −300 μm inner (marginal) zone as well as 300 μm and 600 μm outer tumour space were quantified. These three different regions were transferred onto other adjacent slices that had been labelled by receptor autoradiography with the A1 adenosine receptor (A1AR) ligand 3H-CPFPX (3H-8-cyclopentyl-3-(3-fluorpropyl)-1-propylxanthine) for quantitative assessment of A1AR in the three different tumour zones. Hence, a method is described for quantifying various receptor protein systems in the tumour as well as in the marginal invasive zones around experimentally implanted rat brain tumours, and their representation in the tumour microenvironment as well as in 3D space. Furthermore, a tool was developed for automatically reading out radio-labelled rat brain slices from autoradiographic films; the slices were reconstructed into a consistent 3D tumour model and the zones around the tumour were visualized. A1AR expression was found to depend upon the tumour volume in C6 animals, but is independent of the time of tumour development. In F98 animals, a significant increase in A1AR receptor protein was found in the peritumoural zone as a function of time of tumour development and tumour volume. KW - 3D reconstruction KW - A1 adenosine receptor KW - GBM KW - Kmeans algorithm KW - Brain tumour KW - Receptor autoradiography KW - Hirntumor KW - Dreidimensionale Bildverarbeitung KW - Adenosinrezeptor Y1 - 2007 U6 - https://doi.org/10.1007/s11060-007-9391-6 VL - 85 SP - 49 EP - 63 ER - TY - JOUR A1 - Hartmann, Robin A1 - Weiherer, Maximilian A1 - Schiltz, Daniel A1 - Baringer, Magnus A1 - Noisser, Vivien A1 - Hösl, Vanessa A1 - Eigenberger, Andreas A1 - Seitz, Stefan A1 - Palm, Christoph A1 - Prantl, Lukas A1 - Brébant, Vanessa T1 - New aspects in digital breast assessment: further refinement of a method for automated digital anthropometry JF - Archives of Gynecology and Obstetrics N2 - Purpose: In this trial, we used a previously developed prototype software to assess aesthetic results after reconstructive surgery for congenital breast asymmetry using automated anthropometry. To demonstrate the agreement between the manual and automatic digital measurements, we evaluated the software by comparing the manual and automatic measurements of 46 breasts. Methods: Twenty-three patients who underwent reconstructive surgery for congenital breast asymmetry at our institution were examined and underwent 3D surface imaging. Per patient, 14 manual and 14 computer-based anthropometric measurements were obtained according to a standardized protocol. Manual and automatic measurements, as well as the previously proposed Symmetry Index (SI), were compared. Results: The Wilcoxon signed-rank test revealed no significant differences in six of the seven measurements between the automatic and manual assessments. The SI showed robust agreement between the automatic and manual methods. Conclusion: The present trial validates our method for digital anthropometry. Despite the discrepancy in one measurement, all remaining measurements, including the SI, showed high agreement between the manual and automatic methods.
The presented data bring us one step closer to the long-term goal of establishing robust instruments to evaluate the results of breast surgery. KW - digital anthropometry KW - reconstructive surgery KW - 3D surface imaging Y1 - 2021 U6 - https://doi.org/10.1007/s00404-020-05862-2 SN - 1432-0711 VL - 303 SP - 721 EP - 728 PB - Springer Nature CY - Heidelberg ER - TY - CHAP A1 - Pietrzyk, Uwe A1 - Bauer, Dagmar A1 - Vieten, Andrea A1 - Bauer, Andreas A1 - Langen, Karl-Josef A1 - Zilles, Karl A1 - Palm, Christoph T1 - Creating consistent 3D multi-modality data sets from autoradiographic and histological images of the rat brain T2 - IEEE Nuclear Science Symposium Conference Record N2 - Volumetric representations of autoradiographic and histological images gain ever more interest as a basis for interpreting data obtained with µ-imaging devices like microPET. Beyond supporting spatial orientation within rat brains, autoradiographic images in particular may serve as a basis for quantitatively evaluating the complex uptake patterns of microPET studies with receptor ligands or tumor tracers. They may also serve for the development of rat brain atlases or data models, which can be explored during further image analysis or simulation studies. In all cases a consistent spatial representation of the rat brain, i.e. its anatomy and the corresponding quantitative uptake pattern, is required. This includes both a restacking of the individual two-dimensional images and the exact registration of the respective volumes. We propose strategies for creating these volumes in a consistent way while limiting the requirements on the circumstances during data acquisition, i.e. remaining independent of other sources such as video imaging of the block face prior to cutting or high-resolution micro X-ray CT or micro MRI. KW - Neoplasms KW - Data models KW - Image analysis KW - Brain modeling KW - Analytical models KW - Anatomy KW - Data acquisition KW - High-resolution imaging KW - Image resolution KW - Computed tomography Y1 - 2004 U6 - https://doi.org/10.1109/NSSMIC.2004.1466754 VL - 6 SP - 4001 EP - 4003 ER - TY - CHAP A1 - Eixelberger, Thomas A1 - Wittenberg, Thomas A1 - Perret, Jerome A1 - Katzky, Uwe A1 - Simon, Martina A1 - Schmitt-Rüth, Stephanie A1 - Hofer, Mathias A1 - Sorge, M. A1 - Jacob, R. A1 - Engel, Felix B. A1 - Gostian, A. A1 - Palm, Christoph A1 - Franz, Daniela T1 - A haptic model for virtual petrosal bone milling T2 - 17. Jahrestagung der Deutschen Gesellschaft für Computer- und Roboterassistierte Chirurgie (CURAC2018), Tagungsband, 2018, Leipzig, 13.-15. September N2 - Virtual training of bone milling requires real-time and realistic haptics of the interaction between the ”virtual mill” and a ”virtual bone”. We propose an exponential abrasion model between a virtual bone and the mill bit and combine it with a coarse representation of the virtual bone and the mill shaft for collision detection using the Bullet Physics Engine. We compare our exponential abrasion model to a widely used linear abrasion model and evaluate it quantitatively and qualitatively. The evaluation results show that we can provide virtual milling in real time, with an abrasion behavior similar to that proposed in the literature and with a realistic feeling as judged by five different surgeons.
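The record above proposes an exponential abrasion model between the virtual bone and the mill bit. A minimal sketch of an exponential abrasion update follows; the functional form, constants and saturation limit are illustrative assumptions, not the published model.

```python
# Sketch of an exponential abrasion update of the kind named above: the removal
# rate grows exponentially with the applied force and is clamped to a maximum
# (all constants are invented for illustration, not values from the paper).
import math

def abraded_volume(force_n, dt_s, k=0.02, alpha=0.8, v_max=2.0):
    """Volume removed in one simulation step (mm^3) for a given contact force."""
    rate = min(k * (math.exp(alpha * force_n) - 1.0), v_max)  # mm^3 per second
    return rate * dt_s

for f in (0.5, 1.0, 2.0, 4.0):                 # increasing contact forces in N
    print(f, abraded_volume(f, dt_s=1e-3))
```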
KW - Osteosynthese KW - Simulation KW - Lernprogramm Y1 - 2018 UR - https://www.curac.org/images/advportfoliopro/images/CURAC2018/CURAC 2018 Tagungsband.pdf VL - 17 SP - 214 EP - 219 ER - TY - GEN A1 - Beyer, Thomas A1 - Weigert, Markus A1 - Palm, Christoph A1 - Quick, Harald H. A1 - Müller, Stefan P. A1 - Pietrzyk, Uwe A1 - Vogt, Florian A1 - Martinez, M.J. A1 - Bockisch, Andreas T1 - Towards MR-based attenuation correction for whole-body PET/MR imaging T2 - The Journal of Nuclear Medicine KW - Kernspintomografie KW - Positronen-Emissions-Tomografie KW - Bildgebendes Verfahren KW - Schwächung Y1 - 2006 UR - http://jnm.snmjournals.org/content/47/suppl_1/384P.1.abstract VL - 47 IS - Suppl. 1 SP - 384P ER - TY - GEN ED - Maier, Andreas ED - Deserno, Thomas M. ED - Handels, Heinz ED - Maier-Hein, Klaus H. ED - Palm, Christoph ED - Tolxdorff, Thomas T1 - Bildverarbeitung für die Medizin 2018 BT - Algorithmen - Systeme - Anwendungen. Proceedings des Workshops vom 11. bis 13. März 2018 in Erlangen KW - Bildanalyse KW - Bildverarbeitung KW - Computerunterstützte Medizin KW - Bildgebendes Verfahren Y1 - 2018 U6 - https://doi.org/10.1007/978-3-662-56537-7 PB - Springer CY - Berlin ER - TY - JOUR A1 - Weigert, Markus A1 - Pietrzyk, Uwe A1 - Müller, Stefan P. A1 - Palm, Christoph A1 - Beyer, Thomas T1 - Whole-body PET/CT imaging BT - Combining software- and hardware-based co-registration JF - Zeitschrift für Medizinische Physik N2 - Aim: Combined whole-body (WB) PET/CT imaging provides better overall co-registration compared to separate CT and PET. However, in clinical routine, local PET-CT mis-registration cannot be avoided. Thus, the reconstructed PET tracer distribution may be biased when using the misaligned CT transmission data for CT-based attenuation correction (CT-AC). We investigate the feasibility of retrospective co-registration techniques to align CT and PET images prior to CT-AC, thus potentially improving the quality of combined PET/CT imaging in clinical routine. Methods: First, using a commercial software registration package, CT images were aligned to the uncorrected PET data by rigid and non-rigid registration methods. Co-registration accuracy of both alignment approaches was assessed by reviewing the PET tracer uptake patterns (visual, linked cursor display) following attenuation correction based on the original and co-registered CT. Second, we investigated non-rigid registration based on a prototype ITK implementation of the B-spline algorithm on a similarly targeted MR-CT registration task, where it showed promising results. Results: Manual rigid, landmark-based co-registration introduced unacceptable misalignment, in particular in peripheral areas of the whole-body images. Manual, non-rigid landmark-based co-registration prior to CT-AC was successful with minor loco-regional distortions. Nevertheless, neither rigid nor non-rigid automatic co-registration based on the Mutual Information image-to-image metric succeeded in co-registering the CT and noAC-PET images. In contrast to widely available commercial registration software, our implementation of an alternative automated, non-rigid B-spline co-registration technique yielded promising results in this setting with MR-CT data. Conclusion: In clinical PET/CT imaging, retrospective registration of CT and uncorrected PET images may improve the quality of the AC-PET images. As of today, no validated and clinically viable commercial registration software is in routine use.
This has triggered our efforts in pursuing new approaches to a validated, non-rigid co-registration algorithm applicable to whole-body PET/CT imaging, of which first results are presented here. This approach appears suitable for applications in retrospective WB-PET/CT alignment. Ziel: Kombinierte PET/CT-Bildgebung ermöglicht verbesserte Koregistrierung von PET- und CT-Daten gegenüber separat akquirierten Bildern. Trotzdem entstehen in der klinischen Anwendung lokale Fehlregistrierungen, die zu Fehlern in der rekonstruierten PET-Tracerverteilung führen können, falls die unregistrierten CT-Daten zur Schwächungskorrektur (AC) der Emissionsdaten verwendet werden. Wir untersuchen daher die Anwendung von Bildregistrierungsalgorithmen vor der CT-basierten AC zur Verbesserung der PET-Aufnahmen. Methoden: Mittels einer kommerziellen Registrierungssoftware wurden die CT-Daten eines PET/CT-Tomographen durch landmarken- und intensitätsbasierte rigide (starre) und nicht-rigide Registrierungsverfahren räumlich an die unkorrigierten PET-Emissionsdaten angepasst und zur AC verwendet. Zur Bewertung wurden die Tracerverteilungen in den PET-Bildern (vor AC, CT-AC, CT-AC nach Koregistrierung) visuell und mit Hilfe korrelierter Fadenkreuze verglichen. Zusätzlich untersuchten wir die ITK-Implementierung der bekannten B-spline-basierten, nicht-rigiden Registrierungsansätze im Hinblick auf ihre Verwendbarkeit für die multimodale PET/CT-Ganzkörperregistrierung. Ergebnisse: Mittels landmarkenbasierter, nicht-rigider Registrierung konnte die Tracerverteilung in den PET-Daten lokal verbessert werden. Landmarkenbasierte rigide Registrierung führte zu starker Fehlregistrierung in entfernten Körperregionen. Automatische rigide und nicht-rigide Registrierung unter Verwendung der Mutual-Information-Ähnlichkeitsmetrik versagte auf allen verwendeten Datensätzen. Die automatische Registrierung mit B-spline-Funktionen zeigte vielversprechende Resultate in der Anwendung auf einem ähnlich gelagerten CT–MR-Registrierungsproblem. Fazit: Retrospektive, nicht-rigide Registrierung unkorrigierter PET- und CT-Aufnahmen aus kombinierten Aufnahmesystemen vor der AC kann die Qualität von PET-Aufnahmen im klinischen Einsatz verbessern. Trotzdem steht bis heute im klinischen Alltag keine validierte, automatische Registrierungssoftware zur Verfügung. Wir verfolgen dazu Ansätze für validierte, nicht-rigide Bildregistrierung für den klinischen Einsatz und präsentieren erste Ergebnisse. KW - PET/CT KW - combined imaging KW - image co-registration KW - attenuation correction KW - Positronen-Emissions-Tomografie KW - Computertomografie KW - Bildgebendes Verfahren KW - Registrierung KW - Schwächung Y1 - 2008 U6 - https://doi.org/10.1016/j.zemedi.2007.07.004 VL - 18 IS - 1 SP - 59 EP - 66 ER - TY - JOUR A1 - Brown, Peter A1 - Consortium, RELISH A1 - Zhou, Yaoqi A1 - Palm, Christoph T1 - Large expert-curated database for benchmarking document similarity detection in biomedical literature search JF - Database N2 - Document recommendation systems for locating relevant literature have mostly relied on methods developed a decade ago. This is largely due to the lack of a large offline gold-standard benchmark of relevant documents that cover a variety of research fields such that newly developed literature search techniques can be compared, improved and translated into practice.
To overcome this bottleneck, we have established the RElevant LIterature SearcH consortium, consisting of more than 1500 scientists from 84 countries, who have collectively annotated the relevance of over 180 000 PubMed-listed articles with regard to their respective seed (input) article/s. The majority of annotations were contributed by highly experienced, original authors of the seed articles. The collected data cover 76% of all unique PubMed Medical Subject Headings descriptors. No systematic biases were observed across different experience levels, research fields or time spent on annotations. More importantly, annotations of the same document pairs contributed by different scientists were highly concordant. We further show that the three representative baseline methods used to generate recommended articles for evaluation (Okapi Best Matching 25, Term Frequency–Inverse Document Frequency and PubMed Related Articles) had similar overall performances. Additionally, we found that these methods each tend to produce distinct collections of recommended articles, suggesting that a hybrid method may be required to completely capture all relevant articles. The established database server located at https://relishdb.ict.griffith.edu.au is freely available for the downloading of annotation data and the blind testing of new methods. We expect that this benchmark will be useful for stimulating the development of new powerful techniques for title and title/abstract-based search engines for relevant articles in biomedical research. KW - Information Retrieval KW - Indexierung KW - Literaturdatenbank KW - Dokument KW - Ähnlichkeitssuche KW - Suchmaschine Y1 - 2019 U6 - https://doi.org/10.1093/database/baz085 VL - 2019 SP - 1 EP - 66 PB - Oxford University Press ER - TY - JOUR A1 - Becker, Johanna Sabine A1 - Zoriy, Miroslav A1 - Matusch, Andreas A1 - Wu, Bei A1 - Salber, Dagmar A1 - Palm, Christoph A1 - Becker, Julia Susanne T1 - Bioimaging of Metals by Laser Ablation Inductively Coupled Plasma Mass Spectrometry (LA-ICP-MS) JF - Mass Spectrometry Reviews N2 - The distribution analysis of (essential, beneficial, or toxic) metals (e.g., Cu, Fe, Zn, Pb, and others), metalloids, and non‐metals in biological tissues is of key interest in life science. Over the past few years, the development and application of several imaging mass spectrometric techniques have been growing rapidly in biology and medicine. In brain research especially, metalloproteins are the focus of targeted therapy approaches for neurodegenerative diseases such as Alzheimer's and Parkinson's disease, stroke, or tumor growth. Laser ablation inductively coupled plasma mass spectrometry (LA‐ICP‐MS) using double‐focusing sector field (LA‐ICP‐SFMS) or quadrupole‐based mass spectrometers (LA‐ICP‐QMS) has been successfully applied as a powerful imaging (mapping) technique to produce quantitative images of detailed regionally specific element distributions in thin tissue sections of human or rodent brain. Imaging LA‐ICP‐QMS was also applied to investigate metal distributions in plant and animal sections to study, for example, the uptake and transport of nutrient and toxic elements or environmental contamination. The combination of imaging LA‐ICP‐MS of metals with proteomic studies using biomolecular mass spectrometry identifies metal‐containing proteins and also phosphoproteins. Metal‐containing proteins were imaged in a two‐dimensional gel after electrophoretic separation of proteins (SDS or Blue Native PAGE).
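The RELISH record above names Okapi BM25 and TF-IDF among its baseline recommendation methods. Below is a minimal TF-IDF cosine-similarity sketch using scikit-learn on toy documents; it is not the benchmark's implementation or data.

```python
# Toy TF-IDF document-similarity sketch in the spirit of the baselines named in
# the RELISH record above (invented documents, illustrative only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "deep learning for endoscopic image segmentation",
    "generative adversarial networks for medical images",
    "mass spectrometry imaging of metals in brain tissue",
]
tfidf = TfidfVectorizer().fit_transform(docs)          # rows are documents
print(cosine_similarity(tfidf[0], tfidf[1:]))          # seed doc vs. the rest
```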
Recent progress in LA‐ICP‐MS imaging as a stand‐alone technique and in combination with MALDI/ESI‐MS for selected life science applications is summarized. KW - Bildgebendes Verfahren KW - ICP-Massenspektrometrie KW - Metalle KW - Metallproteide KW - Elektrophorese KW - Gehirnkarte KW - Bioimaging of metals KW - Laser ablation inductively coupled plasma mass spectrometry KW - metal distribution KW - metallomics KW - neurodegenerative diseases Y1 - 2010 U6 - https://doi.org/10.1002/mas.20239 VL - 29 SP - 156 EP - 175 ER - TY - JOUR A1 - Axer, Markus A1 - Amunts, Katrin A1 - Gräßel, David A1 - Palm, Christoph A1 - Dammers, Jürgen A1 - Axer, Hubertus A1 - Pietrzyk, Uwe A1 - Zilles, Karl T1 - Novel Approach to the Human Connectome BT - Ultra-High Resolution Mapping of Fiber Tracts in the Brain JF - NeuroImage N2 - Signal transmission between different brain regions requires connecting fiber tracts, the structural basis of the human connectome. In contrast to animal brains, where a multitude of tract tracing methods can be used, magnetic resonance (MR)-based diffusion imaging is presently the only promising approach to study fiber tracts between specific human brain regions. However, this procedure has various inherent restrictions caused by its relatively low spatial resolution. Here, we introduce 3D-polarized light imaging (3D-PLI) to map the three-dimensional course of fiber tracts in the human brain with a resolution at a submillimeter scale based on a voxel size of 100 μm isotropic or less. 3D-PLI demonstrates nerve fibers by utilizing the intrinsic birefringence of the myelin sheaths surrounding axons. This optical method enables the demonstration of 3D fiber orientations in serial microtome sections of entire human brains. Examples of the feasibility of this novel approach are given here. 3D-PLI enables the study of brain regions of intense fiber crossing in unprecedented detail, and provides an independent evaluation of fiber tracts derived from diffusion imaging data. KW - Connectome KW - Human brain KW - Method KW - Polarized light imaging KW - Tractography KW - Systems biology KW - Bildgebendes Verfahren KW - Dreidimensionale Bildverarbeitung KW - Polarisiertes Licht KW - Gehirnkarte Y1 - 2011 U6 - https://doi.org/10.1016/j.neuroimage.2010.08.075 VL - 54 IS - 2 SP - 1091 EP - 1101 ER - TY - JOUR A1 - Bauer, Dagmar A1 - Hamacher, Kurt A1 - Bröer, Stefan A1 - Pauleit, Dirk A1 - Palm, Christoph A1 - Zilles, Karl A1 - Coenen, Heinz H. A1 - Langen, Karl-Josef T1 - Preferred stereoselective brain uptake of D-serine BT - a modulator of glutamatergic neurotransmission JF - Nuclear Medicine and Biology N2 - Although it has long been presumed that d-amino acids are uncommon in mammals, substantial amounts of free d-serine have been detected in the mammalian brain. d-Serine has been demonstrated to be an important modulator of glutamatergic neurotransmission and acts as an agonist at the strychnine-insensitive glycine site of N-methyl-d-aspartate receptors. The blood-to-brain transfer of d-serine is thought to be extremely low, and it is assumed that d-serine is generated by isomerization of l-serine in the brain. Stimulated by the observation of a preferred transport of the d-isomer of proline at the blood–brain barrier, we investigated the differential uptake of [3H]-d-serine and [3H]-l-serine in the rat brain 1 h after intravenous injection using quantitative autoradiography.
Surprisingly, the brain uptake of [3H]-d-serine was significantly higher than that of [3H]-l-serine, indicating a preferred transport of the d-enantiomer of serine at the blood–brain barrier. This finding indicates that exogenous d-serine may have a direct influence on glutamatergic neurotransmission and associated diseases. KW - Aminosäuren KW - Gehirn KW - Blut-Hirn-Schranke KW - Aufnahme KW - d/l-serine KW - Amino acid transport KW - Blood–brain barrier KW - NMDA receptors Y1 - 2005 U6 - https://doi.org/10.1016/j.nucmedbio.2005.07.004 VL - 32 IS - 8 SP - 793 EP - 797 ER - TY - JOUR A1 - Ott, Tankred A1 - Palm, Christoph A1 - Vogt, Robert A1 - Oberprieler, Christoph T1 - GinJinn: An object-detection pipeline for automated feature extraction from herbarium specimens JF - Applications in Plant Sciences N2 - PREMISE: The generation of morphological data in evolutionary, taxonomic, and ecological studies of plants using herbarium material has traditionally been a labor-intensive task. Recent progress in machine learning using deep artificial neural networks (deep learning) for image classification and object detection has facilitated the establishment of a pipeline for the automatic recognition and extraction of relevant structures in images of herbarium specimens. METHODS AND RESULTS: We implemented an extendable pipeline based on state-of-the-art deep-learning object-detection methods to collect leaf images from herbarium specimens of two species of the genus Leucanthemum. Using 183 specimens as the training data set, our pipeline extracted one or more intact leaves in 95% of the 61 test images. CONCLUSIONS: We establish GinJinn as a deep-learning object-detection tool for the automatic recognition and extraction of individual leaves or other structures from herbarium specimens. Our pipeline offers greater flexibility and a lower entrance barrier than previous image-processing approaches based on hand-crafted features. KW - Deep Learning KW - herbarium specimens KW - object detection KW - visual recognition KW - Objekterkennung KW - Maschinelles Sehen KW - Pflanzen Y1 - 2020 U6 - https://doi.org/10.1002/aps3.11351 SN - 2168-0450 VL - 8 IS - 6 SP - e11351 PB - Wiley, Botanical Society of America ER - TY - CHAP A1 - Palm, Christoph A1 - Keysers, Daniel A1 - Lehmann, Thomas M. A1 - Spitzer, Klaus T1 - Gabor Filtering of Complex Hue/Saturation Images for Color Texture Classification T2 - Proceedings of the 5th Joint Conference on Information Science (JCIS) 2, The Association for Intelligent Machinery, Atlantic City, NJ, 2000 N2 - Objective: Complex hue/saturation images as a new approach for color texture classification using Gabor filters are introduced and compared with common techniques. Method: The interpretation of hue and saturation as polar coordinates allows direct use of the HSV colorspace for the Fourier transform. This technique is applied for Gabor feature extraction of color textures. In contrast to other color features based on the RGB colorspace [1], the combination of color bands is done prior to the filtering. Results: The performance of the new HS-features is compared with that of RGB-based as well as grayscale Gabor features by evaluating the classification of 30 natural textures. The new HS-features show the same results as the best RGB features but allow a more compact representation. On average, the color features improve on the results of the grayscale features. Conclusion: The consideration of color information enhances the classification of color texture.
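The Gabor record above interprets hue and saturation as polar coordinates, yielding one complex image that admits a direct Fourier transform. A minimal sketch of this representation follows; the random test data and the plain FFT (instead of the paper's Gabor filter bank) are assumptions for illustration.

```python
# Sketch of the complex hue/saturation representation described above: hue (an
# angle) and saturation (a radius) are combined into one complex-valued image,
# which can then be processed in the Fourier domain (illustrative only).
import numpy as np

def complex_hs(h, s):
    """h in radians, s in [0, 1]; both are HxW arrays."""
    return s * np.exp(1j * h)

h = np.random.uniform(0.0, 2.0 * np.pi, (64, 64))   # stand-in hue channel
s = np.random.uniform(0.0, 1.0, (64, 64))           # stand-in saturation channel
z = complex_hs(h, s)
spectrum = np.fft.fft2(z)        # complex image admits a direct 2D FFT
print(z.dtype, spectrum.shape)
```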
The choice of colorspace cannot be judged conclusively, but the introduced features suggest the use of the HSV colorspace with fewer features than RGB. Y1 - 2000 UR - http://www.keysers.net/daniel/files/JCIS2000_palm.pdf SP - 45 EP - 49 ER - TY - CHAP A1 - Eiben, Björn A1 - Palm, Christoph A1 - Pietrzyk, Uwe A1 - Davatzikos, Christos A1 - Amunts, Katrin T1 - Error Correction using Registration for Blockface Volume Reconstruction of Serial Histological Sections of the Human Brain T2 - Bildverarbeitung für die Medizin 2010; Algorithmen - Systeme - Anwendungen ; Proceedings des Workshops vom 22. bis 25. März 2009 in Heidelberg N2 - For accurate registration of histological sections, blockface images are frequently used as a three-dimensional reference. However, due to the use of endocentric lenses, the images suffer from perspective errors such as scaling and seemingly relative movement of planes which are located at different distances parallel to the imaging sensor. The suggested correction of those errors is based on the estimation of scaling factors derived from image registration of regions characterized by differing distances to the point of view in neighboring sections. The correction allows the generation of a consistent three-dimensional blockface volume. KW - Histologie KW - Diagnostik KW - Bildgebendes Verfahren KW - Schnittdarstellung KW - Fehlerbehandlung Y1 - 2010 UR - http://ceur-ws.org/Vol-574/bvm2010_61.pdf SP - 301 EP - 305 PB - Springer CY - Berlin ER - TY - CHAP A1 - Scholl, Ingrid A1 - Palm, Christoph A1 - Sovakar, Abhijit A1 - Lehmann, Thomas M. A1 - Spitzer, Klaus ED - Arnolds, B. ED - Mueller, H. ED - Saupe, D. ED - Tolxdorff, Thomas T1 - Quantitative Analyse der Stimmlippen T2 - 5. Workshop Digitale Bildverarbeitung in der Medizin, Universität Freiburg, 10.-11. März 1997 KW - Konturverfolgung KW - Snakes KW - Farbanalyse KW - Dichromatisches Reflexionsmodell KW - Farbkonstanz KW - Laryngoskopie Y1 - 1997 UR - https://pdfs.semanticscholar.org/9a0c/9e7dc883ccf6e8a8a686c28422238adb5f35.pdf SP - 81 EP - 86 ER - TY - JOUR A1 - Ilgner, Justus F. R. A1 - Palm, Christoph A1 - Schütz, Andreas G. A1 - Spitzer, Klaus A1 - Westhofen, Martin A1 - Lehmann, Thomas M. T1 - Colour Texture Analysis for Quantitative Laryngoscopy JF - Acta Otolaryngologica N2 - Whilst considerable progress has been made in enhancing the quality of indirect laryngoscopy and image processing, the evaluation of clinical findings is still based on the clinician's judgement. The aim of this paper was to examine the feasibility of an objective computer-based method for evaluating laryngeal disease. Digitally recorded images obtained by 90-degree and 70-degree angled indirect rod laryngoscopy using standardized white-balance values were made of 16 patients and 19 healthy subjects. The digital images were evaluated manually by the clinician based on a standardized questionnaire, and suspect lesions were marked and classified on the image. Following colour separation, normal vocal cord areas as well as suspect lesions were analyzed automatically using co-occurrence matrices, which compare colour differences between neighbouring pixels over a predefined distance. Whilst colour histograms did not provide sufficient information for distinguishing between healthy and diseased tissues, consideration of the blue content of neighbouring pixels enabled a correct classification in 81.4% of cases. If all colour channels (red, green and blue) were regarded simultaneously, the best classification correctness obtained was 77.1%.
Although only a very basic classification differentiating between healthy and diseased tissue was attempted, the results showed progress compared to grey-scale histograms, which had been evaluated before. The results document a first step towards an objective, machine-based classification of laryngeal images, which could provide the basis for further development of an expert system for use in indirect laryngoscopy. KW - diagnostic laryngoscopy KW - electronic imaging KW - endoscopy KW - neoplastic larynx disease Y1 - 2003 U6 - https://doi.org/10.1080/00016480310000412 VL - 123 SP - 730 EP - 734 ER - TY - JOUR A1 - Becker, Johanna Sabine A1 - Matusch, Andreas A1 - Palm, Christoph A1 - Salber, Dagmar A1 - Morton, Kathryn A. A1 - Becker, Julia Susanne T1 - Bioimaging of metals in brain tissue by laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) and metallomics JF - Metallomics N2 - Laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) has been developed and established as an emerging technique for generating quantitative images of metal distributions in thin tissue sections of brain samples (such as human, rat and mouse brain), with applications in research related to neurodegenerative disorders. A new analytical protocol is described which includes sample preparation by cryo-cutting of thin tissue sections and matrix-matched laboratory standards, mass spectrometric measurements, data acquisition, and quantitative analysis. Specific examples of the bioimaging of metal distributions in normal rodent brains are provided. Differences from the normal were assessed in a Parkinson's disease and a stroke brain model. Furthermore, changes during normal aging were studied. Powerful analytical techniques are also required for the determination and characterization of metal-containing proteins within a large pool of proteins, e.g., after denaturing or non-denaturing electrophoretic separation of proteins in one-dimensional and two-dimensional gels. LA-ICP-MS can be employed to detect metalloproteins in protein bands or spots separated after gel electrophoresis. MALDI-MS can then be used to identify specific metal-containing proteins in these bands or spots. The combination of these techniques is described in the second section. KW - ICP-Massenspektrometrie KW - Metalle KW - Metallproteide KW - Elektrophorese KW - Gehirn Y1 - 2010 U6 - https://doi.org/10.1039/b916722f IS - 2 SP - 104 EP - 111 PB - Oxford Academic Press ER - TY - CHAP A1 - Maier, Johannes A1 - Huber, Michaela A1 - Katzky, Uwe A1 - Perret, Jerome A1 - Wittenberg, Thomas A1 - Palm, Christoph T1 - Force-Feedback-assisted Bone Drilling Simulation Based on CT Data T2 - Bildverarbeitung für die Medizin 2018; Algorithmen - Systeme - Anwendungen; Proceedings des Workshops vom 11. bis 13. März 2018 in Erlangen N2 - In order to fix a fracture using minimally invasive surgery approaches, surgeons drill complex and tiny bones with a 2-dimensional X-ray as the single imaging modality in the operating room. Our novel haptic force-feedback and visually assisted training system will potentially help hand surgeons learn the drilling procedure in a realistic visual environment. Within the simulation, the collision detection as well as the interaction between the virtual drill, bone voxels and surfaces are important. In this work, the chai3d collision detection and force calculation algorithms are combined with a physics engine to simulate the bone drilling process.
The chosen Bullet-Physics-Engine provides a stable simulation of rigid bodies if the collision model of the drill and the tool holder is generated as a compound shape. Three haptic points are added to the K-wire tip for removing single voxels from the bone. For the drilling process, three modes are proposed to emulate the different phases of drilling by restricting the movement of a haptic device. KW - Handchirurgie KW - Osteosynthese KW - Simulation KW - Lernprogramm Y1 - 2018 U6 - https://doi.org/10.1007/978-3-662-56537-7_78 SP - 291 EP - 296 PB - Springer CY - Berlin ER - TY - CHAP A1 - Palm, Christoph A1 - Pelkmann, Annegret A1 - Lehmann, Thomas M. A1 - Spitzer, Klaus T1 - Distortion Correction of Laryngoscopic Images T2 - Advances in quantitative laryngoscopy, voice and speech research, Proceedings of the 3rd international workshop Aachen, RWTH N2 - Laryngoscopic images of the vocal tract are used for diagnostic purposes. Quantitative measurements like changes of the glottis size or the surface of the vocal cords during an image sequence can be helpful to describe the healing process or to compare the findings of different patients. Typically, the endoscopic images are circularly symmetrically distorted (barrel distortion). Therefore, measurements of geometric dimensions depend on the object's position in the image. In this paper, an algorithm is presented which allows the computation of the translation-invariant "real" object size by correcting the image distortion without using additional calibration of the optical environment. KW - image distortion KW - camera calibration KW - multiple regression analysis Y1 - 1998 UR - https://pdfs.semanticscholar.org/e9d8/eb27af24bd79f482821441c2bf0eee7b3fe6.pdf?_ga=2.183754286.985176231.1591560247-1467258391.1581026068 SP - 117 EP - 125 ER - TY - CHAP A1 - Palm, Christoph A1 - Vieten, Andrea A1 - Bauer, Dagmar A1 - Pietrzyk, Uwe T1 - Evaluierung von Registrierungsstrategien zur multimodalen 3D-Rekonstruktion von Rattenhirnschnitten T2 - Bildverarbeitung für die Medizin 2006 N2 - In this work, three strategies for the 3D stacking of multi-modal section images are presented. The strategies are evaluated experimentally on dual-tracer autoradiographs. To this end, new measures describing the consistency within a single modality and the consistency between modalities are developed, based on well-known registration metrics. Especially with respect to the consistency between modalities, two strategies show the best results: (1) alternating multi-modal registration, and (2) mono-modal reconstruction of one modality followed by multi-modal 2D registration of the second modality. KW - Registrierung KW - Dreidimensionale Rekonstruktion KW - Gehirn KW - Schnittpräparat Y1 - 2006 U6 - https://doi.org/10.1007/3-540-32137-3_51 SP - 251 EP - 255 PB - Springer CY - Berlin ER - TY - GEN A1 - Gräßel, David A1 - Axer, Markus A1 - Palm, Christoph A1 - Dammers, Jürgen A1 - Amunts, Katrin A1 - Pietrzyk, Uwe A1 - Zilles, Karl T1 - Visualization of Fiber Tracts in the Postmortem Human Brain by Means of Polarized Light T2 - NeuroImage KW - Gehirn KW - Bildgebendes Verfahren KW - Polarisiertes Licht KW - Pathologische Anatomie Y1 - 2009 U6 - https://doi.org/10.1016/S1053-8119(09)71415-6 VL - 47 IS - Suppl. 1 SP - 142 ER - TY - GEN A1 - Schroeder, Josef A.
A1 - Semmelmann, Matthias A1 - Siegmund, Heiko A1 - Grafe, Claudia A1 - Evert, Matthias A1 - Palm, Christoph T1 - Improved interactive computer-assisted approach for evaluation of ultrastructural cilia abnormalities T2 - Ultrastructural Pathology KW - Zilie KW - Ultrastruktur KW - Anomalie KW - Bildverarbeitung KW - Computerunterstütztes Verfahren Y1 - 2017 U6 - https://doi.org/10.1080/01913123.2016.1270978 VL - 41 IS - 1 SP - 112 EP - 113 ER - TY - CHAP A1 - Palm, Christoph A1 - Dehnhardt, Markus A1 - Vieten, Andrea A1 - Pietrzyk, Uwe T1 - 3D rat brain tumor reconstruction T2 - Biomedizinische Technik KW - Dreidimensionale Rekonstruktion KW - Hirntumor Y1 - 2005 VL - 50 IS - Suppl. 1, Part 1 SP - 597 EP - 598 ER - TY - CHAP A1 - Palm, Christoph A1 - Scholl, Ingrid A1 - Lehmann, Thomas M. A1 - Spitzer, Klaus ED - Greiser, E. ED - Wischnewsky, M. T1 - Nutzung eines Farbkonstanz-Algorithmus zur Entfernung von Glanzlichtern in laryngoskopischen Bildern T2 - Methoden der Medizinischen Informatik, Biometrie und Epidemiologie in der modernen Informationsgesellschaft N2 - 1 Introduction: Functional and organic disorders of the larynx impair a person's ability to express themselves. For diagnosis and follow-up, the vocal cords in the larynx are recorded by video laryngoscopy. For optimal color measurement, a 3-chip CCD camera, which allows independent acquisition of the three color channels, is attached to the magnifying endoscope. The hitherto subjective assessment depends on the experience of the examiner and permits only a coarse classification of the clinical pictures. To objectify the findings, quantitative parameters for color, texture, and vibration are therefore being developed. Besides the influence of the varying illuminant color on the color impression, secretion deposits on the vocal cords pose a problem for color and texture analysis. They can lead to extended specular highlights and thus render large areas of the vocal cords unusable for color and texture analysis. This contribution presents a color constancy algorithm that provides quantitative tissue color values independent of the light source and enables the detection and elimination of specular highlights. 2 Methods: The goal of the color constancy algorithm is the separation of illuminant color and tissue color. Using the dichromatic reflection model [1], the surface reflection can be identified with the color of the light source and the body reflection with the tissue color. The color impression arises as a linear combination of both color components. Their weighting depends on the imaging geometry, in particular on the angle between the surface normal and the position vector of the light source. In a two-step procedure, first the illuminant color is estimated, then the tissue color is determined. From these, the two color components can be separated pixel by pixel by computing the weighting factors. KW - Farbkonstanz KW - Glanzlichtelimination KW - medizinische Bildverarbeitung KW - dichromatisches Reflexionsmodell Y1 - 1998 SN - 9783820813357 SP - 300 EP - 303 PB - MMV Medien und Medizin CY - München ER - TY - JOUR A1 - Osterholt, Tobias A1 - Salber, Dagmar A1 - Matusch, Andreas A1 - Becker, Johanna Sabine A1 - Palm, Christoph T1 - IMAGENA: Image Generation and Analysis BT - An Interactive Software Tool handling LA-ICP-MS Data JF - International Journal of Mass Spectrometry N2 - Metals are involved in many processes of life.
They are needed for enzymatic reactions and are involved in healthy processes, but they can also cause disease if the metal homeostasis is disturbed. Therefore, interest in assessing the spatial distribution of metals is rising in biomedical science. Imaging metal (and non-metal) isotopes by laser ablation mass spectrometry with inductively coupled plasma (LA-ICP-MS) requires a special software solution to process raw data obtained by scanning a sample line-by-line. As no ready-to-use software was available, we developed an interactive software tool for Image Generation and Analysis (IMAGENA). Although optimised for LA-ICP-MS, IMAGENA can handle other raw data as well. The general purpose was to reconstruct images from a continuous list of raw data points, to visualise these images, and to convert them into a commonly readable image file format that can be further analysed by standard image analysis software. The generation of the image starts with loading a text file that holds a data column for every measured isotope. General spatial domain settings like the data offset and the image dimensions are specified by the user, who receives direct feedback by means of a preview image. IMAGENA provides tools for calibration and for the correction of a signal drift in the y-direction. Images are visualised in greyscale as well as in pseudo-colours with possibilities for contrast enhancement. Image analysis is performed in terms of smoothed line plots in row and column direction. KW - LA-ICP-MS KW - ICP-Massenspektrometrie KW - Bilderzeugung KW - Graphische Benutzeroberfläche KW - Image generation KW - Image analysis KW - Graphical user interface Y1 - 2011 U6 - https://doi.org/10.1016/j.ijms.2011.03.010 VL - 307 IS - 1-3 SP - 232 EP - 239 ER - TY - JOUR A1 - Palm, Christoph A1 - Vieten, Andrea A1 - Salber, Dagmar A1 - Pietrzyk, Uwe T1 - Evaluation of Registration Strategies for Multi-modality Images of Rat Brain Slices JF - Physics in Medicine and Biology N2 - In neuroscience, small-animal studies frequently involve dealing with series of images from multiple modalities such as histology and autoradiography. The consistent and bias-free restacking of multi-modality image series is obligatory as a starting point for subsequent non-rigid registration procedures and for quantitative comparisons with positron emission tomography (PET) and other in vivo data. Up to now, consistency between 2D slices without cross validation using an inherent 3D modality is frequently presumed to be close to the true morphology due to the smooth appearance of the contours of anatomical structures. However, in multi-modality stacks consistency is difficult to assess. In this work, consistency is defined in terms of smoothness of neighboring slices within a single modality and between different modalities. Registration bias denotes the distortion of the registered stack in comparison to the true 3D morphology and shape. Based on these metrics, different restacking strategies of multi-modality rat brain slices are experimentally evaluated. Experiments based on MRI-simulated and real dual-tracer autoradiograms reveal a clear bias of the restacked volume despite quantitatively high consistency and qualitatively smooth brain structures. However, different registration strategies yield different inter-consistency metrics. If no genuine 3D modality is available, the use of the so-called SOP (slice-order preferred) or MOSOP (modality-and-slice-order preferred) strategy is recommended.
KW - Histologie KW - Bildgebendes Verfahren KW - Schnittdarstellung KW - Multimodales Verfahren Y1 - 2009 U6 - https://doi.org/10.1088/0031-9155/54/10/021 VL - 54 IS - 10 SP - 3269 EP - 3289 ER - TY - JOUR A1 - Dammers, Jürgen A1 - Axer, Markus A1 - Gräßel, David A1 - Palm, Christoph A1 - Zilles, Karl A1 - Amunts, Katrin A1 - Pietrzyk, Uwe T1 - Signal enhancement in polarized light imaging by means of independent component analysis JF - NeuroImage N2 - Polarized light imaging (PLI) enables the evaluation of fiber orientations in histological sections of human postmortem brains, with ultra-high spatial resolution. PLI is based on the birefringent properties of the myelin sheath of nerve fibers. As a result, the polarization state of light propagating through a rotating polarimeter is changed in such a way that the detected signal at each measurement unit of a charge-coupled device (CCD) camera describes a sinusoidal signal. Vectors of the fiber orientation defined by inclination and direction angles can then directly be derived from the optical signals employing PLI analysis. However, noise, light scatter and filter inhomogeneities interfere with the original sinusoidal PLI signals. We here introduce a novel method using independent component analysis (ICA) to decompose the PLI images into statistically independent component maps. After decomposition, gray and white matter structures can clearly be distinguished from noise and other artifacts. The signal enhancement after artifact rejection is quantitatively evaluated in 134 histological whole brain sections. Thus, the primary sinusoidal signals from polarized light imaging can be effectively restored after noise and artifact rejection utilizing ICA. Our method therefore contributes to the analysis of nerve fiber orientation in the human brain within a micrometer scale. KW - Bildgebendes Verfahren KW - Polarisiertes Licht KW - Signalverarbeitung KW - Signaltrennung KW - Komponentenanalyse KW - Gehirn Y1 - 2010 U6 - https://doi.org/10.1016/j.neuroimage.2009.08.059 VL - 49 IS - 2 SP - 1241 EP - 1248 PB - Elsevier ER - TY - CHAP A1 - Hassan, H. A1 - Ilgner, Justus F. R. A1 - Palm, Christoph A1 - Lehmann, Thomas M. A1 - Spitzer, Klaus A1 - Westhofen, Martin ED - Lehmann, Thomas M. ED - Spitzer, Klaus ED - Tolxdorff, Thomas T1 - Objective Judgement of Endoscopic Laryngeal Images T2 - Advances in Quantitative Laryngoscopy, Voice and Speech Research, Proceedings of the 3rd International Workshop, RWTH Aachen N2 - Video documentation of endoscopic findings simplifies diagnostic counseling of the patient and aids pre-operative discussion among the medical team. Judgement of such images is still subjective and cannot give a quantitative evaluation of the disease process regarding diagnosis or response to treatment. Modern treatment of early laryngeal cancer with laser ablation requires intensive follow-up and frequent direct laryngoscopy under general anesthesia with blind biopsies to detect any residual or recurrent tumor. Inflammatory conditions of the larynx are frequently confused with other causes of dysphonia. Mapping and digital analysis of the documented image will suggest the tumor site and avoid undue blind biopsies under anesthesia. However, varying illumination results in different colors reflected from the same object. To achieve quantitative analysis, color constancy has to be assured. In this paper, an environment is presented which allows the objective judgement of laryngoscopic images.
KW - Laryngoscopy KW - Diagnosis KW - Image processing KW - Quantitative Image analysis KW - Color constancy Y1 - 1998 UR - https://citeseerx.ist.psu.edu/doc_view/pid/caf5bedf5cf68ed3be68054b140a1241f4f278e2 SP - 135 EP - 142 ER - TY - GEN ED - Tolxdorff, Thomas ED - Deserno, Thomas M. ED - Handels, Heinz ED - Maier, Andreas ED - Maier-Hein, Klaus H. ED - Palm, Christoph T1 - Bildverarbeitung für die Medizin 2020 BT - Algorithmen – Systeme – Anwendungen. Proceedings des Workshops vom 15. bis 17. März 2020 in Berlin N2 - In recent years, the workshop "Bildverarbeitung für die Medizin" has established itself through a series of successful events. For 2020, the goal is once again to present current research results and to deepen the dialogue between scientists, industry, and users. The contributions in this volume - some of them in English - cover all areas of medical image processing, in particular imaging and acquisition, machine learning, image segmentation and image analysis, visualization and animation, time series analysis, computer-aided diagnosis, biomechanical modeling, validation and quality assurance, image processing in telemedicine, and much more. KW - Bildanalyse KW - Bildverarbeitung KW - Computerunterstützte Medizin KW - Deep Learning KW - Visualisierung Y1 - 2020 SN - 978-3-658-29266-9 U6 - https://doi.org/10.1007/978-3-658-29267-6 PB - Springer Vieweg CY - Wiesbaden ER - TY - CHAP A1 - Pietrzyk, Uwe A1 - Palm, Christoph A1 - Beyer, Thomas T1 - Fusion strategies in multi-modality imaging T2 - Medical Physics, Vol 2. Proceedings of the jointly held Congresses: ICMP 2005, 14th International Conference of Medical Physics of the International Organization for Medical Physics (IOMP), the European Federation of Organizations in Medical Physics (EFOMP) and the German Society of Medical Physics (DGMP) ; BMT 2005, 39th Annual Congress of the German Society for Biomedical Engineering (DGBMT) within VDE ; 14th - 17th September 2005, Nuremberg, Germany KW - Bildgebendes Verfahren KW - Registrierung Y1 - 2005 SP - 1446 EP - 1447 ER - TY - CHAP A1 - Fischer, B. A1 - Palm, Christoph A1 - Lehmann, Thomas M. A1 - Spitzer, Klaus T1 - Selektion von Farbtexturmerkmalen zur Tumorklassifikation dermatoskopischer Fotografien T2 - Bildverarbeitung für die Medizin 2002 Y1 - 2002 SP - 238 EP - 241 PB - Springer CY - Berlin ER - TY - JOUR A1 - Lehmann, Thomas M. A1 - Palm, Christoph T1 - Color Line Search for Illuminant Estimation in Real World Scenes JF - Journal of the Optical Society of America (JOSA) A N2 - The estimation of illuminant color is mandatory for many applications in the field of color image quantification. However, it is an unresolved problem if no additional heuristics or restrictive assumptions apply. Assuming uniformly colored and roundly shaped objects, Lee has presented a theory and a method for computing the scene-illuminant chromaticity from specular highlights [H. C. Lee, J. Opt. Soc. Am. A 3, 1694 (1986)]. However, Lee's method, called image path search, is less robust to noise and is limited in the handling of microtextured surfaces. We introduce a novel approach to estimate the color of a single illuminant for noisy and microtextured images, which frequently occur in real-world scenes. Using dichromatic regions of different colored surfaces, our approach, named color line search, reverses Lee's strategy of image path search. Reliable color lines are determined directly in the domain of the color diagrams in three steps.
First, regions of interest are automatically detected around specular highlights, and local color diagrams are computed. Second, color lines are determined according to the dichromatic reflection model by Hough transform of the color diagrams. Third, a consistency check is applied by a corresponding path search in the image domain. Our method is evaluated on 40 natural images of fruit and vegetables. In comparison with those of Lee's method, accuracy and stability are substantially improved. In addition, the color line search approach can easily be extended to scenes of objects with macrotextured surfaces. Y1 - 2001 U6 - https://doi.org/10.1364/JOSAA.18.002679 VL - 18 IS - 11 SP - 2679 EP - 2691 ER - TY - CHAP A1 - Palm, Christoph A1 - Scholl, Ingrid A1 - Lehmann, Thomas M. A1 - Spitzer, Klaus ED - Lehmann, Thomas M. ED - Scholl, Ingrid ED - Spitzer, Klaus T1 - Trennung von diffuser und spiegelnder Reflexion in Farbbildern des Larynx zur Untersuchung von Farb- und Formmerkmalen der Stimmlippen T2 - Bildverarbeitung für die Medizin. Algorithmen, Systeme, Anwendungen. Proceedings des Aachener Workshops am 8. und 9. November 1996 N2 - To support the diagnosis of laryngeal diseases, a color and shape analysis of the vocal cords is to be performed. This contribution presents a method for separating the specular and diffuse reflection components in color images of the larynx. The color of the diffuse component corresponds to the illumination-independent object color, while its weighting factors serve as input for shape-from-shading methods for surface reconstruction. KW - Laryngoskopie KW - Farbbild KW - Reflexion Y1 - 1996 UR - https://scholar.google.de/citations?user=nc0XkcMAAAAJ&hl=fa#d=gs_md_cita-d&u=%2Fcitations%3Fview_op%3Dview_citation%26hl%3Dfa%26user%3Dnc0XkcMAAAAJ%26citation_for_view%3Dnc0XkcMAAAAJ%3AqjMakFHDy7sC%26tzom%3D-120 SP - 229 EP - 234 PB - Verlag der Augustinus-Buchhandlung CY - Aachen ER - TY - CHAP A1 - Palm, Christoph A1 - Lehmann, Thomas M. A1 - Bredno, J. A1 - Neuschaefer-Rube, C. A1 - Klajman, S. A1 - Spitzer, Klaus T1 - Automated Analysis of Stroboscopic Image Sequences by Vibration Profiles T2 - Advances in Quantitative Laryngoscopy, Voice and Speech Research, Procs. 5th International Workshop N2 - A method for automated segmentation of vocal cords in stroboscopic video sequences is presented. In contrast to earlier approaches, the inner and outer contours of the vocal cords are independently delineated. Automatic segmentation of the low-contrast images is carried out by connecting the shape constraint of a point distribution model to a multi-channel region-based balloon model. This enables us to robustly compute a vibration profile that is used as a new diagnostic tool to visualize several vibration parameters in only one graphic. The vibration profiles are studied in two cases: one physiological vibration and one functional pathology.
KW - Vibration Profile KW - Stroboscopic Images KW - Contour Detection KW - Balloon Model KW - Point Distribution Model Y1 - 2001 UR - https://www.researchgate.net/publication/242439073_Automated_Analysis_of_Stroboscopic_Image_Sequences_by_Vibration_Profiles ER - TY - CHAP A1 - Palm, Christoph T1 - Fusion of Serial 2D Section Images and MRI Reference BT - an Overview T2 - Workshop Innovative Verarbeitung bioelektrischer und biomagnetischer Signale (bbs2014), Berlin, 10.04.2014 N2 - Serial 2D section images with high resolution, resulting from innovative imaging methods, become even more valuable if they are fused with in vivo volumes. By achieving this goal, the 3D context of the sections would be restored, the deformations would be corrected and the artefacts would be eliminated. However, registration in this field faces major challenges and is not solved in general. On the other hand, several approaches have been introduced dealing at least with some of these difficulties. Here, a brief overview of the topic is given and some of the solutions are presented. It does not claim to be a complete review, but it could be a starting point for those who are interested in this field. KW - Kernspintomografie KW - Optimierung KW - Magnetic Resonance Imaging KW - MRI KW - Literaturbericht Y1 - 2014 U6 - https://doi.org/10.13140/RG.2.1.1358.3449 ER - TY - GEN A1 - Weigert, Markus A1 - Beyer, Thomas A1 - Quick, Harald H. A1 - Pietrzyk, Uwe A1 - Palm, Christoph A1 - Müller, Stefan P. T1 - Generation of a MRI reference data set for the validation of automatic, non-rigid image co-registration algorithms T2 - Nuklearmedizin KW - Kernspintomografie KW - Referenzdaten KW - Registrierung KW - Algorithmus Y1 - 2007 VL - 46 IS - 2 SP - A116 ER - TY - GEN A1 - Weigert, Markus A1 - Palm, Christoph A1 - Quick, Harald H. A1 - Müller, Stefan P. A1 - Pietrzyk, Uwe A1 - Beyer, Thomas T1 - Template for MR-based attenuation correction for whole-body PET/MR imaging T2 - Nuklearmedizin KW - Kernspintomografie KW - Positronen-Emissions-Tomografie KW - Bildgebendes Verfahren KW - Schwächung Y1 - 2007 VL - 46 IS - 2 SP - A115 ER - TY - GEN A1 - Palm, Christoph A1 - Crum, William R. A1 - Pietrzyk, Uwe A1 - Hawkes, David J. T1 - Application of Fluid and Elastic Registration Methods to Histological Rat Brain Sections T2 - Biomedizinische Technik KW - Registrierung KW - Gehirn KW - Schnittdarstellung Y1 - 2007 VL - 52 IS - Suppl. ER - TY - JOUR A1 - Palm, Christoph A1 - Lehmann, Thomas M. T1 - Classification of Color Textures by Gabor Filtering JF - Machine GRAPHICS & VISION Y1 - 2002 VL - 11 IS - 2/3 SP - 195 EP - 219 ER - TY - JOUR A1 - Maier, Johannes A1 - Weiherer, Maximilian A1 - Huber, Michaela A1 - Palm, Christoph T1 - Optically tracked and 3D printed haptic phantom hand for surgical training system JF - Quantitative Imaging in Medicine and Surgery N2 - Background: For surgical fixation of bone fractures of the human hand, so-called Kirschner wires (K-wires) are drilled through bone fragments. Due to the minimally invasive drilling procedures without a view of risk structures like vessels and nerves, a thorough training of young surgeons is necessary. For the development of a virtual reality (VR) based training system, a three-dimensional (3D) printed phantom hand is required. To ensure an intuitive operation, this phantom hand has to be realistic both in its position relative to the drill and in its haptic features.
The softest 3D printing material available on the market, however, is too hard to imitate human soft tissue. Therefore, a support-material (SUP) filled metamaterial is used to soften the raw material. Realistic haptic features are important to palpate protrusions of the bone to determine the drilling starting point and angle. Optical real-time tracking is used to transfer position and rotation to the training system. Methods: A metamaterial already developed in previous work is further improved by the use of a new unit cell. Thus, the amount of SUP within the volume can be increased and the tissue is softened further. In addition, the human anatomy is transferred to the entire hand model. A subcutaneous fat layer and the penetration of air through pores into the volume simulate the shiftability of the skin layers. For optical tracking, a rotationally symmetric marker attached to the phantom hand, together with a corresponding reference marker, is developed. In order to ensure trouble-free position transmission, various types of marker point applications are tested. Results: Several cuboid and forearm sample prints led to a final 30 centimeter long hand model. The whole haptic phantom could be printed faultlessly within about 17 hours. The metamaterial consisting of the new unit cell results in an increased SUP share of 4.32%. As validated in an expert surgeon study, this allows, in combination with a displacement of the uppermost skin layer, good palpability of the bones. Tracking of the hand marker in dodecahedron design works trouble-free in conjunction with a reference marker attached to the worktop of the training system. Conclusions: In this work, an optically tracked and haptically correct phantom hand was developed using dual-material 3D printing, which can be easily integrated into a surgical training system. KW - Handchirurgie KW - 3D-Druck KW - Lernprogramm KW - Zielverfolgung KW - HaptiVisT KW - Dual-material 3D printing KW - hand surgery training KW - metamaterial KW - tissue imitating phantom hand Y1 - 2020 U6 - https://doi.org/10.21037/qims.2019.12.03 N1 - Corresponding author: Christoph Palm VL - 10 IS - 02 SP - 340 EP - 455 PB - AME Publishing Company CY - Hong Kong, China ER - TY - GEN A1 - Bauer, Dagmar A1 - Stoffels, Gabriele A1 - Pauleit, Dirk A1 - Palm, Christoph A1 - Hamacher, Kurt A1 - Coenen, Heinz H. A1 - Langen, Karl T1 - Uptake of F-18-fluoroethyl-L-tyrosine and H-3-L-methionine in focal cortical ischemia T2 - The Journal of Nuclear Medicine N2 - Objectives: C-11-methionine (MET) is particularly useful in brain tumor diagnosis, but unspecific uptake, e.g. in cerebral ischemia, has been reported (1). The F-18-labeled amino acid O-(2-[F-18]fluoroethyl)-L-tyrosine (FET) shows a similar clinical potential as MET in brain tumor diagnosis but is applicable on a wider clinical scale. The aim of this study was to evaluate the uptake of FET and H-3-MET in focal cortical ischemia in rats by dual tracer autoradiography. Methods: Focal cortical ischemia was induced in 12 Fisher CDF rats using the photothrombosis model (PT). One day (n=3), two days (n=5) and 7 days (n=4) after induction of the lesion, FET and H-3-MET were injected intravenously. One hour after tracer injection, the animals were killed, and the brains were removed immediately and frozen in 2-methylbutane at -50°C. Brains were cut in coronal sections (thickness: 20 µm) and exposed first to H-3 insensitive photoimager plates to measure the FET distribution. After decay of F-18, the distribution of H-3-MET was determined.
The autoradiograms were evaluated by regions of interest (ROIs) placed on areas with increased tracer uptake in the PT and on the contralateral brain. Lesion-to-brain ratios (L/B) were calculated by dividing the mean uptake in the lesion by that in the brain. Based on previous studies in gliomas, an L/B ratio > 1.6 was considered pathological for FET. Results: Variably increased uptake of both tracers was observed in the PT and its demarcation zone at all stages after PT. The cut-off level of 1.6 for FET was exceeded in 9/12 animals. One day after PT, the L/B ratios were 2.0 ± 0.6 for FET vs. 2.1 ± 1.0 for MET (mean ± SD); two days after lesion 2.2 ± 0.7 for FET vs. 2.7 ± 1.0 for MET and 7 days after lesion 2.4 ± 0.4 for FET vs. 2.4 ± 0.1 for MET. In single cases, discrepancies in the uptake pattern of FET and MET were observed. Conclusions: FET, like MET, may exhibit significant uptake in infarcted areas or their immediate vicinity, which has to be considered in the differential diagnosis of unknown brain lesions. The discrepancies in the uptake pattern of FET and MET in some cases indicate either differences in the transport mechanisms of both amino acids or a different affinity for certain cellular components. Y1 - 2006 UR - http://jnm.snmjournals.org/content/47/suppl_1/284P.3 VL - 47 IS - Suppl. 1 SP - 284P ER - TY - JOUR A1 - Hutterer, Markus A1 - Hattingen, Elke A1 - Palm, Christoph A1 - Proescholdt, Martin Andreas A1 - Hau, Peter T1 - Current standards and new concepts in MRI and PET response assessment of antiangiogenic therapies in high-grade glioma patients JF - Neuro-Oncology N2 - Despite multimodal treatment, the prognosis of high-grade gliomas is grim. As tumor growth is critically dependent on new blood vessel formation, antiangiogenic treatment approaches offer an innovative treatment strategy. Bevacizumab, a humanized monoclonal antibody, has been in the spotlight of antiangiogenic approaches for several years. Currently, MRI including contrast-enhanced T1-weighted and T2/fluid-attenuated inversion recovery (FLAIR) images is routinely used to evaluate antiangiogenic treatment response (Response Assessment in Neuro-Oncology criteria). However, by restoring the blood–brain barrier, bevacizumab may reduce T1 contrast enhancement and T2/FLAIR hyperintensity, thereby obscuring the imaging-based detection of progression. The aim of this review is to highlight the recent role of imaging biomarkers from MR and PET imaging on measurement of disease progression and treatment effectiveness in antiangiogenic therapies. Based on the reviewed studies, multimodal imaging combining standard MRI with new physiological MRI techniques and metabolic PET imaging, in particular amino acid tracers, may have the ability to detect antiangiogenic drug susceptibility or resistance prior to morphological changes. As advances occur in the development of therapies that target specific biochemical or molecular pathways and alter tumor physiology in potentially predictable ways, the validation of physiological and metabolic imaging biomarkers will become increasingly important in the near future. KW - High-grade glioma KW - Antiangiogenic treatment KW - MRI KW - PET KW - Multimodal response assessment KW - Gliom KW - Antiangiogenese KW - Bildgebendes Verfahren KW - Biomarker Y1 - 2015 U6 - https://doi.org/10.1093/neuonc/nou322 VL - 17 IS - 6 SP - 784 EP - 800 ER - TY - CHAP A1 - Palm, Christoph A1 - Lehmann, Thomas M.
A1 - Spitzer, Klaus T1 - Bestimmung der Lichtquellenfarbe bei der Endoskopie mikrotexturierter Oberflächen des Kehlkopfes T2 - 5. Workshop Farbbildverarbeitung, Ilmenau, 1999 N2 - Within the research project Quantitative Digital Laryngoscopy, objective parameters describing the movement, color and shape of the vocal cords are developed and clinically evaluated to support the diagnosis of vocal cord diseases. While the movement analysis provides insight into functional voice disorders, the parameters of the color and shape analysis describe morphological changes of the vocal cord tissue. This contribution presents the methods and the results to date of the movement and color analysis. The movement analysis was carried out with an extended contour model (snakes). Owing to the modified contour model, the contours of the vocal cords could be detected automatically and reliably over the entire image sequence. Measuring the contours yields new quantitative parameters for the assessment of laryngoscopic vocal cord recordings. To determine the color properties of the vocal cords, the object color was computed from the RGB image, independently of the color of the light source, using clustering methods and quarter-circle analysis. With this color analysis, the color of the light source could be determined and the illumination-independent color image computed. Quantifying the redness of the vocal cords is, for example, a decisive criterion for the diagnosis of acute laryngitis. KW - Konturverfolgung KW - Snakes KW - Dichromatisches Reflexionsmodell KW - Farbkonstanz KW - Laryngoskopie Y1 - 1999 UR - http://www.germancolorgroup.de/html/Vortr_99_pdf/01_Palm.pdf SP - 3 EP - 10 ER - TY - CHAP A1 - Zehner, Alexander A1 - Szalo, Alexander Eduard A1 - Palm, Christoph T1 - GraphMIC: Easy Prototyping of Medical Image Computing Applications T2 - Interactive Medical Image Computing (IMIC), Workshop at the Medical Image Computing and Computer Assisted Interventions (MICCAI 2015), 2015, Munich N2 - GraphMIC is a cross-platform image processing application utilizing the libraries ITK and OpenCV. The abstract structure of image processing pipelines is visually represented by user interface components based on modern QtQuick technology and allows users to focus on the arrangement and parameterization of operations rather than implementing the equivalent functionality natively in C++. The application's central goal is to improve and simplify the typical workflow by providing various high-level features and functions like multi-threading, image sequence processing and advanced error handling. A built-in Python interpreter allows the creation of custom nodes, where user-defined algorithms can be integrated to extend the basic functionality. An embedded 2d/3d visualizer gives feedback on the resulting image of an operation or the whole pipeline. User inputs like seed points, contours or regions are forwarded to the processing pipeline as parameters to offer semi-automatic image computing. We report the main concept of the application and introduce several features and their implementation. Finally, the current state of development as well as future perspectives of GraphMIC are discussed. KW - Bildverarbeitung KW - Medizin Y1 - 2015 U6 - https://doi.org/10.13140/RG.2.1.3718.4725 N1 - Open-Access-Publikation SP - 395 EP - 400 ER - TY - CHAP A1 - Palm, Christoph A1 - Fischer, B. A1 - Lehmann, Thomas M.
A1 - Spitzer, Klaus T1 - Hierarchische Wasserscheiden-Transformation zur Lippensegmentierung in Farbbildern T2 - Bildverarbeitung für die Medizin 2000 N2 - A hierarchical, color-based watershed transform is presented for the solution of complex segmentation problems. Small modifications regarding the choice of seed points and the flooding process result in significant improvements of the segmentation. The method was applied to lip detection in color image sequences, which are evaluated automatically for the quantitative description of speech movements. Experiments with 245 images from 6 sequences showed an error rate of 13%. KW - Hierarchische Wasserscheiden-Transformation KW - Segmentierung der Lippen KW - Bewegungsanalyse KW - Farbbildverarbeitung Y1 - 2000 U6 - https://doi.org/10.1007/978-3-642-59757-2_20 SP - 106 EP - 110 PB - Springer CY - Berlin ER - TY - CHAP A1 - Palm, Christoph A1 - Lehmann, Thomas M. A1 - Spitzer, Klaus T1 - Color Texture Analysis of Moving Vocal Cords Using Approaches from Statistics and Signal Theory T2 - Advances in Quantitative Laryngoscopy, Voice and Speech Research, Procs. 4th International Workshop, Friedrich Schiller University, Jena N2 - Textural features are applied for the detection of morphological pathologies of the vocal cords. Co-occurrence matrices as statistical features are presented as well as filter bank analysis by Gabor filters. Both methods are extended to handle color images. Their robustness against camera movement and vibration of the vocal cords is evaluated. Classification results for three in vivo sequences are between 94.4% and 98.9%. The classification errors decrease if color features are used instead of grayscale features, for both statistical and Fourier features. KW - Color Texture KW - Gabor Filter KW - Cooccurrence Matrix KW - Image Processing Y1 - 2000 SP - 49 EP - 56 ER - TY - CHAP A1 - Palm, Christoph A1 - Schanze, Thomas T1 - Biomedical Image and Signal Computing (BISC 2013) T2 - 58. Jahrestagung der Deutschen Gesellschaft für Medizinische Informatik, Biometrie und Epidemiologie e.V. (GMDS 2013), Lübeck, 01.-05.09.2013 Y1 - 2013 U6 - https://doi.org/10.3205/13gmds257 N1 - Meeting Abstract IS - DocAbstr. 324 PB - German Medical Science GMS Publishing House CY - Düsseldorf ER - TY - BOOK A1 - Palm, Christoph T1 - Integrative Auswertung von Farbe und Textur Y1 - 2003 UR - http://publications.rwth-aachen.de/record/58707/files/Palm_Christoph.pdf PB - Der Andere Verlag ER - TY - JOUR A1 - Graßmann, Felix A1 - Mengelkamp, Judith A1 - Brandl, Caroline A1 - Harsch, Sebastian A1 - Zimmermann, Martina E. A1 - Linkohr, Birgit A1 - Peters, Annette A1 - Heid, Iris M. A1 - Palm, Christoph A1 - Weber, Bernhard H. F. T1 - A Deep Learning Algorithm for Prediction of Age-Related Eye Disease Study Severity Scale for Age-Related Macular Degeneration from Color Fundus Photography JF - Ophthalmology N2 - Purpose: Age-related macular degeneration (AMD) is a common threat to vision. While classification of disease stages is critical to understanding disease risk and progression, several systems based on color fundus photographs are known. Most of these require in-depth and time-consuming analysis of fundus images. Herein, we present an automated computer-based classification algorithm. Design: Algorithm development for AMD classification based on a large collection of color fundus images. Validation is performed on a cross-sectional, population-based study. Participants:
We included 120 656 manually graded color fundus images from 3654 Age-Related Eye Disease Study (AREDS) participants. AREDS participants were >55 years of age, and non-AMD sight-threatening diseases were excluded at recruitment. In addition, performance of our algorithm was evaluated in 5555 fundus images from the population-based Kooperative Gesundheitsforschung in der Region Augsburg (KORA; Cooperative Health Research in the Region of Augsburg) study. Methods: We defined 13 classes (9 AREDS steps, 3 late AMD stages, and 1 for ungradable images) and trained several convolutional deep learning architectures. An ensemble of network architectures improved prediction accuracy. An independent dataset was used to evaluate the performance of our algorithm in a population-based study. Main Outcome Measures: κ statistics and accuracy to evaluate the concordance between predicted and expert human grader classification. Results: A network ensemble of 6 different neural net architectures predicted the 13 classes in the AREDS test set with a quadratic weighted κ of 92% (95% confidence interval, 89%–92%) and an overall accuracy of 63.3%. In the independent KORA dataset, images wrongly classified as AMD were mainly the result of a macular reflex observed in young individuals. By restricting the KORA analysis to individuals >55 years of age and prior exclusion of other retinopathies, the weighted and unweighted κ increased to 50% and 63%, respectively. Importantly, the algorithm detected 84.2% of all fundus images with definite signs of early or late AMD. Overall, 94.3% of healthy fundus images were classified correctly. Conclusions: Our deep learning algorithm revealed a weighted κ outperforming human graders in the AREDS study and is suitable to classify AMD fundus images in other datasets using individuals >55 years of age. KW - Senile Makuladegeneration KW - Krankheitsverlauf KW - Mustererkennung KW - Maschinelles Lernen Y1 - 2018 U6 - https://doi.org/10.1016/j.ophtha.2018.02.037 N1 - Corresponding authors: Bernhard H. F. Weber, University of Regensburg, and Christoph Palm VL - 125 IS - 9 SP - 1410 EP - 1420 PB - Elsevier ER - TY - JOUR A1 - Palm, Christoph T1 - Color Texture Classification by Integrative Co-Occurrence Matrices JF - Pattern Recognition N2 - Integrative co-occurrence matrices are introduced as novel features for color texture classification. The extended co-occurrence notation allows the comparison between integrative and parallel color texture concepts. The information profit of the new matrices is shown quantitatively using the Kolmogorov distance and by extensive classification experiments on two datasets. Applying them to the RGB and the LUV color space, the combined color and intensity textures are studied and the existence of intensity-independent pure color patterns is demonstrated. The results are compared with two baselines: gray-scale texture analysis and color histogram analysis. The novel features improve the classification results by up to 20% and 32% for the first and second baseline, respectively. KW - Color texture KW - Co-occurrence matrix KW - Integrative features KW - Kolmogorov distance KW - Image classification Y1 - 2004 U6 - https://doi.org/10.1016/j.patcog.2003.09.010 VL - 37 IS - 5 SP - 965 EP - 976 ER - TY - JOUR A1 - Beyer, Thomas A1 - Weigert, Markus A1 - Quick, Harald H. A1 - Pietrzyk, Uwe A1 - Vogt, Florian A1 - Palm, Christoph A1 - Antoch, Gerald A1 - Müller, Stefan P.
A1 - Bockisch, Andreas T1 - MR-based attenuation correction for torso-PET/MR imaging BT - pitfalls in mapping MR to CT data JF - European Journal of Nuclear Medicine and Molecular Imaging N2 - Purpose: MR-based attenuation correction (AC) will become an integral part of combined PET/MR systems. Here, we propose a toolbox to validate MR-AC of clinical PET/MRI data sets. Methods: Torso scans of ten patients were acquired on a combined PET/CT and on a 1.5-T MRI system. MR-based attenuation data were derived from the CT following MR–CT image co-registration and subsequent histogram matching. PET images were reconstructed after CT- (PET/CT) and MR-based AC (PET/MRI). Lesion-to-background (L/B) ratios were estimated on PET/CT and PET/MRI. Results: MR–CT histogram matching leads to a mean voxel intensity difference in the CT- and MR-based attenuation images of 12% (max). Mean differences between PET/MRI and PET/CT were 19% (max). L/B ratios were similar except for the lung, where local misregistration and intensity transformation lead to a biased PET/MRI. Conclusion: Our toolbox can be used to study pitfalls in MR-AC. We found that co-registration accuracy and pixel value transformation determine the accuracy of PET/MRI. KW - PET/MRI KW - PET/CT KW - Attenuation correction KW - Kernspintomografie KW - Positronen-Emissions-Tomografie KW - Schwächung Y1 - 2008 U6 - https://doi.org/10.1007/s00259-008-0734-0 VL - 35 IS - 6 SP - 1142 EP - 1146 ER - TY - JOUR A1 - Becker, Johanna Sabine A1 - Matusch, Andreas A1 - Becker, Julia Susanne A1 - Wu, Bei A1 - Palm, Christoph A1 - Becker, Albert Johann A1 - Salber, Dagmar T1 - Mass spectrometric imaging (MSI) of metals using advanced BrainMet techniques for biomedical research JF - International Journal of Mass Spectrometry N2 - Mass spectrometric imaging (MSI) is a young, innovative analytical technique and combines different fields of advanced mass spectrometry and biomedical research with the aim to provide maps of elements and molecules, complexes or fragments. Especially essential metals such as zinc, copper, iron and manganese play a functional role in signaling, metabolism and homeostasis of the cell. Due to the high degree of spatial organization of metals in biological systems, their distribution analysis is of key interest in life sciences. We have developed analytical techniques termed BrainMet using laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) imaging to measure the distribution of trace metals in biological tissues for biomedical research and feasibility studies, including bioaccumulation and bioavailability studies, ecological risk assessment and toxicity studies in humans and other organisms. The analytical BrainMet techniques provide quantitative images of metal distributions in brain tissue slices which can be combined with other imaging modalities such as photomicrography of native or processed tissue (histochemistry, immunostaining) and autoradiography or with in vivo techniques such as positron emission tomography or magnetic resonance tomography. Prospective and instrumental developments will be discussed concerning the development of metalloprotein microscopy using a laser microdissection (LMD) apparatus for specific sample introduction into an inductively coupled plasma mass spectrometer (LMD-ICP-MS) or an application of the near-field effect in LA-ICP-MS (NF-LA-ICP-MS). These nano-scale mass spectrometric techniques provide improved spatial resolution down to the single cell level.
KW - Bioimaging KW - Brain tissue KW - Laser ablation inductively coupled plasma mass spectrometry KW - Laser microdissection inductively coupled plasma mass spectrometry KW - Metals KW - Metallomics KW - Nano-LA-ICP-MS KW - Tumour KW - Massenspektrometrie KW - Bildgebendes Verfahren KW - Metalle KW - Metallproteide KW - Gehirn Y1 - 2011 U6 - https://doi.org/10.1016/j.ijms.2011.01.015 VL - 307 IS - 1-3 SP - 3 EP - 15 PB - Elsevier ER - TY - CHAP A1 - Metzler, V. A1 - Aach, T. A1 - Palm, Christoph A1 - Lehmann, Thomas M. T1 - Texture Classification of Graylevel Images by Multiscale Cross-Co-Occurrence Matrices T2 - Proceedings 15th International Conference on Pattern Recognition (ICPR-2000) N2 - Local gray-level dependencies of natural images can be modelled by means of co-occurrence matrices containing joint probabilities of gray-level pairs. Texture, however, is a resolution-dependent phenomenon and hence, classification depends on the chosen scale. Since there is no optimal scale for all textures, we employ a multiscale approach that acquires textural features at several scales. Thus, linear and nonlinear scale-spaces are analyzed by multiscale co-occurrence matrices that describe the statistical behavior of a texture in scale-space. Classification is then performed on the basis of texture features taken from the individual scale with the highest discriminatory power. By considering cross-scale occurrences of gray-level pairs, the impact of filters on the feature is described and used for the classification of natural textures. This novel method was found to significantly improve the classification rates of the common co-occurrence matrix approach on standard textures. Y1 - 2000 U6 - https://doi.org/10.1109/ICPR.2000.906133 SP - 549 EP - 552 ER - TY - GEN A1 - Axer, Markus A1 - Axer, Hubertus A1 - Palm, Christoph A1 - Gräßel, David A1 - Zilles, Karl A1 - Pietrzyk, Uwe T1 - Visualization of Nerve Fibre Orientation in the Visual Cortex of the Human Brain by Means of Polarized Light T2 - Biomedizinische Technik KW - Sehrinde KW - Nervenfaser KW - Ausrichtung KW - Visualisierung KW - Polarisiertes Licht Y1 - 2007 VL - 52 IS - Suppl. SP - 1569048-041 ER - TY - JOUR A1 - Scheppach, Markus W. A1 - Rauber, David A1 - Stallhofer, Johannes A1 - Muzalyova, Anna A1 - Otten, Vera A1 - Manzeneder, Carolin A1 - Schwamberger, Tanja A1 - Wanzl, Julia A1 - Schlottmann, Jakob A1 - Tadic, Vidan A1 - Probst, Andreas A1 - Schnoy, Elisabeth A1 - Römmele, Christoph A1 - Fleischmann, Carola A1 - Meinikheim, Michael A1 - Miller, Silvia A1 - Märkl, Bruno A1 - Stallmach, Andreas A1 - Palm, Christoph A1 - Messmann, Helmut A1 - Ebigbo, Alanna T1 - Detection of duodenal villous atrophy on endoscopic images using a deep learning algorithm JF - Gastrointestinal Endoscopy N2 - Background and aims: Celiac disease with its endoscopic manifestation of villous atrophy is underdiagnosed worldwide. The application of artificial intelligence (AI) for the macroscopic detection of villous atrophy at routine esophagogastroduodenoscopy may improve diagnostic performance. Methods: A dataset of 858 endoscopic images of 182 patients with villous atrophy and 846 images from 323 patients with normal duodenal mucosa was collected and used to train a ResNet 18 deep learning model to detect villous atrophy. An external data set was used to test the algorithm, in addition to six fellows and four board-certified gastroenterologists. Fellows could consult the AI algorithm's result during the test.
From their consultation distribution, a stratification of test images into "easy" and "difficult" was performed and used for a stratified performance measurement. Results: External validation of the AI algorithm yielded values of 90%, 76%, and 84% for sensitivity, specificity, and accuracy, respectively. Fellows scored values of 63%, 72% and 67%, while the corresponding values in experts were 72%, 69% and 71%, respectively. AI consultation significantly improved all trainee performance statistics. While fellows and experts showed significantly lower performance for "difficult" images, the performance of the AI algorithm was stable. Conclusion: In this study, an AI algorithm outperformed endoscopy fellows and experts in the detection of villous atrophy on endoscopic still images. AI decision support significantly improved the performance of non-expert endoscopists. The stable performance on "difficult" images suggests a further positive add-on effect in challenging cases. KW - celiac disease KW - villous atrophy KW - endoscopy detection KW - artificial intelligence Y1 - 2023 U6 - https://doi.org/10.1016/j.gie.2023.01.006 PB - Elsevier ER - TY - JOUR A1 - Knoedler, Leonard A1 - Baecher, Helena A1 - Kauke-Navarro, Martin A1 - Prantl, Lukas A1 - Machens, Hans-Günther A1 - Scheuermann, Philipp A1 - Palm, Christoph A1 - Baumann, Raphael A1 - Kehrer, Andreas A1 - Panayi, Adriana C. A1 - Knoedler, Samuel T1 - Towards a Reliable and Rapid Automated Grading System in Facial Palsy Patients: Facial Palsy Surgery Meets Computer Science JF - Journal of Clinical Medicine N2 - Background: Reliable, time- and cost-effective, and clinician-friendly diagnostic tools are cornerstones in facial palsy (FP) patient management. Different automated FP grading systems have been developed, but revealed persisting downsides such as insufficient accuracy and cost-intensive hardware. We aimed to overcome these barriers and programmed an automated grading system for FP patients utilizing the House and Brackmann scale (HBS). Methods: Image datasets of 86 patients seen at the Department of Plastic, Hand, and Reconstructive Surgery at the University Hospital Regensburg, Germany, between June 2017 and May 2021, were used to train the neural network and evaluate its accuracy. Nine facial poses per patient were analyzed by the algorithm. Results: The algorithm showed an accuracy of 100%. Oversampling did not result in altered outcomes, while the direct form displayed superior accuracy levels when compared to the modular classification form (n = 86; 100% vs. 99%). The Early Fusion technique was linked to improved accuracy outcomes in comparison to the Late Fusion and sequential method (n = 86; 100% vs. 96% vs. 97%). Conclusions: Our automated FP grading system combines high-level accuracy with cost- and time-effectiveness. Our algorithm may accelerate the grading process in FP patients and facilitate the FP surgeon's workflow. Y1 - 2022 U6 - https://doi.org/10.3390/jcm11174998 VL - 11 IS - 17 PB - MDPI CY - Basel ER - TY - INPR A1 - Rueckert, Tobias A1 - Rueckert, Daniel A1 - Palm, Christoph T1 - Methods and datasets for segmentation of minimally invasive surgical instruments in endoscopic images and videos: A review of the state of the art N2 - In the field of computer- and robot-assisted minimally invasive surgery, enormous progress has been made in recent years based on the recognition of surgical instruments in endoscopic images.
Especially the determination of the position and type of the instruments is of great interest here. Current work involves both spatial and temporal information, with the idea that predicting the movement of surgical tools over time may improve the quality of the final segmentations. The provision of publicly available datasets has recently encouraged the development of new methods, mainly based on deep learning. In this review, we identify datasets used for method development and evaluation, as well as quantify their frequency of use in the literature. We further present an overview of the current state of research regarding the segmentation and tracking of minimally invasive surgical instruments in endoscopic images. The paper focuses on methods that work purely visually, without attached markers of any kind on the instruments, taking into account both single-frame segmentation approaches as well as those involving temporal information. A discussion of the reviewed literature is provided, highlighting existing shortcomings and emphasizing the available potential for future developments. The publications considered were identified through the platforms Google Scholar, Web of Science, and PubMed. The search terms used were "instrument segmentation", "instrument tracking", "surgical tool segmentation", and "surgical tool tracking", resulting in 408 articles published between 2015 and 2022, from which 109 were included using systematic selection criteria. Y1 - 2023 U6 - https://doi.org/10.48550/arXiv.2304.13014 ER - TY - GEN ED - Deserno, Thomas M. ED - Handels, Heinz ED - Maier, Andreas ED - Maier-Hein, Klaus ED - Palm, Christoph ED - Tolxdorff, Thomas T1 - Bildverarbeitung für die Medizin 2023 BT - Proceedings, German Workshop on Medical Image Computing, Braunschweig, July 2-4, 2023 N2 - For more than 25 years, the workshop "Bildverarbeitung für die Medizin" has been established as a successful event. For 2023, the goal is once again to present current research results and to deepen the dialogue between scientists, industry, and users. The contributions in this volume - many of them in English - cover all areas of medical image processing, in particular imaging and acquisition, segmentation and analysis, visualization and animation, computer-aided diagnosis, and image-guided therapy planning and therapy. Methods of machine learning, biomechanical modeling, and validation and quality assurance are employed. KW - Machine Learning KW - Medical Image Computing KW - Bildverarbeitung KW - Computerunterstützte Medizin KW - Bildgebendes Verfahren KW - Bildanalyse KW - Visualisierung KW - Deep Learning Y1 - 2023 SN - 978-3-658-41656-0 U6 - https://doi.org/10.1007/978-3-658-41657-7 SN - 1431-472X PB - Springer Vieweg CY - Wiesbaden ER - TY - CHAP A1 - Mendel, Robert A1 - Rauber, David A1 - Palm, Christoph T1 - Exploring the Effects of Contrastive Learning on Homogeneous Medical Image Data T2 - Bildverarbeitung für die Medizin 2023: Proceedings, German Workshop on Medical Image Computing, July 2–4, 2023, Braunschweig N2 - We investigate contrastive learning in a multi-task learning setting classifying and segmenting early Barrett's cancer. How can contrastive learning be applied in a domain with few classes and low inter-class and inter-sample variance, potentially enabling image retrieval or image attribution?
We introduce a data sampling strategy that mines per-lesion data for positive samples and keeps a queue of the recent projections as negative samples. We propose a masking strategy for the NT-Xent loss that keeps the negative set pure and removes samples from the same lesion. We show cohesion and uniqueness improvements of the proposed method in feature space. The introduction of the auxiliary objective does not affect the performance but adds the ability to indicate similarity between lesions. Therefore, the approach could enable downstream auto-documentation tasks on homogeneous medical image data. Y1 - 2023 U6 - https://doi.org/10.1007/978-3-658-41657-7 SP - 128 EP - 13 PB - Springer Vieweg CY - Wiesbaden ER - TY - GEN A1 - Zellmer, Stephan A1 - Rauber, David A1 - Probst, Andreas A1 - Weber, Tobias A1 - Nagl, Sandra A1 - Römmele, Christoph A1 - Schnoy, Elisabeth A1 - Palm, Christoph A1 - Messmann, Helmut A1 - Ebigbo, Alanna T1 - Verwendung künstlicher Intelligenz bei der Detektion der Papilla duodeni major T2 - Zeitschrift für Gastroenterologie N2 - Introduction: Endoscopic retrograde cholangiopancreatography (ERCP) is the gold standard in the diagnosis and treatment of diseases of the pancreatobiliary tract. However, it is technically very demanding and has a comparatively high complication rate. Aims: This feasibility study examines whether a deep learning algorithm can reliably detect the papilla and the ostium and could thus serve as a suitable aid for endoscopists with little experience, particularly in training settings. Methods: We considered a total of 606 image datasets from 65 patients. In these, both the major duodenal papilla and the ostium were segmented. A neural network was then trained using a deep learning algorithm, and a 5-fold cross-validation was performed. Results: In a 5-fold cross-validation on the 606 labeled datasets, an F1 score of 0.7908, a sensitivity of 0.7943 and a specificity of 0.9785 were achieved for the papilla class, and an F1 score of 0.5538, a sensitivity of 0.5094 and a specificity of 0.9970 for the ostium class (cf. [Tab. 1]). Averaged over both classes, the F1 score was 0.6673, the sensitivity 0.6519 and the specificity 0.9877 (cf. [Tab. 2]). Conclusion: In this feasibility study, the neural network was able to identify the major duodenal papilla with a high sensitivity and a very high specificity. In the detection of the ostium, the sensitivity was considerably lower. In the future, the neural network will be trained with more data and the algorithm will also be applied to videos. In the long term, a suitable aid for the ERCP could thus be established.
KW - Künstliche Intelligenz Y1 - 2023 UR - https://www.thieme-connect.de/products/ejournals/abstract/10.1055/s-0043-1772000 U6 - https://doi.org/10.1055/s-0043-1772000 VL - 61 IS - 08 PB - Thieme CY - Stuttgart ER - TY - JOUR A1 - Hammer, Simone A1 - Nunes, Danilo Weber A1 - Hammer, Michael A1 - Zeman, Florian A1 - Akers, Michael A1 - Götz, Andrea A1 - Balla, Annika A1 - Doppler, Michael Christian A1 - Fellner, Claudia A1 - Da Platz Batista Silva, Natascha A1 - Thurn, Sylvia A1 - Verloh, Niklas A1 - Stroszczynski, Christian A1 - Wohlgemuth, Walter Alexander A1 - Palm, Christoph A1 - Uller, Wibke T1 - Deep learning-based differentiation of peripheral high-flow and low-flow vascular malformations in T2-weighted short tau inversion recovery MRI JF - Clinical hemorheology and microcirculation N2 - BACKGROUND Differentiation of high-flow from low-flow vascular malformations (VMs) is crucial for the therapeutic management of this orphan disease. OBJECTIVE A convolutional neural network (CNN) was evaluated for the differentiation of peripheral vascular malformations (VMs) on T2-weighted short tau inversion recovery (STIR) MRI. METHODS 527 MRIs (386 low-flow and 141 high-flow VMs) were randomly divided into training, validation, and test sets for this single-center study. 1) The CNN's diagnostic performance was compared with that of two expert and four junior radiologists. 2) The influence of the CNN's prediction on the radiologists' performance and diagnostic certainty was evaluated. 3) The junior radiologists' performance after self-training was compared with that of the CNN. RESULTS Compared with the expert radiologists, the CNN achieved similar accuracy (92% vs. 97%, p = 0.11), sensitivity (80% vs. 93%, p = 0.16), and specificity (97% vs. 100%, p = 0.50). In comparison to the junior radiologists, the CNN had a higher specificity and accuracy (97% vs. 80%, p < 0.001; 92% vs. 77%, p < 0.001). CNN assistance had no significant influence on their diagnostic performance and certainty. After self-training, the junior radiologists' specificity and accuracy improved and were comparable to those of the CNN. CONCLUSIONS The diagnostic performance of the CNN for differentiating high-flow from low-flow VMs was comparable to that of expert radiologists. The CNN did not significantly improve the simulated daily practice of junior radiologists; self-training was more effective.
KW - magnetic resonance imaging KW - deep learning KW - Vascular malformation Y1 - 2024 U6 - https://doi.org/10.3233/CH-232071 SP - 1 EP - 15 PB - IOS Press ET - Pre-press ER - TY - GEN A1 - Rückert, Tobias A1 - Rieder, Maximilian A1 - Rauber, David A1 - Xiao, Michel A1 - Humolli, Eg A1 - Feussner, Hubertus A1 - Wilhelm, Dirk A1 - Palm, Christoph T1 - Augmenting instrument segmentation in video sequences of minimally invasive surgery by synthetic smoky frames T2 - International Journal of Computer Assisted Radiology and Surgery KW - Surgical instrument segmentation KW - smoke simulation KW - unpaired image-to-image translation KW - robot-assisted surgery Y1 - 2023 U6 - https://doi.org/10.1007/s11548-023-02878-2 VL - 18 IS - Suppl 1 SP - S54 EP - S56 PB - Springer Nature ER - TY - JOUR A1 - Kolev, Kalin A1 - Kirchgeßner, Norbert A1 - Houben, Sebastian A1 - Csiszár, Agnes A1 - Rubner, Wolfgang A1 - Palm, Christoph A1 - Eiben, Björn A1 - Merkel, Rudolf A1 - Cremers, Daniel T1 - A variational approach to vesicle membrane reconstruction from fluorescence imaging JF - Pattern Recognition N2 - Biological applications like vesicle membrane analysis involve the precise segmentation of 3D structures in noisy volumetric data, obtained by techniques like magnetic resonance imaging (MRI) or laser scanning microscopy (LSM). Dealing with such data is a challenging task and requires robust and accurate segmentation methods. In this article, we propose a novel energy model for 3D segmentation fusing various cues like regional intensity subdivision, edge alignment and orientation information. The uniqueness of the approach lies in the definition of a new anisotropic regularizer, which accounts for the unbalanced slicing of the measured volume data, and the generalization of an efficient numerical scheme for solving the arising minimization problem, based on linearization and fixed-point iteration. We show how the proposed energy model can be optimized globally by making use of recent continuous convex relaxation techniques. The accuracy and robustness of the presented approach are demonstrated by evaluating it on multiple real data sets and comparing it to alternative segmentation methods based on level sets. Although the proposed model is designed with focus on the particular application at hand, it is general enough to be applied to a variety of different segmentation tasks. KW - 3D segmentation KW - Convex optimization KW - Vesicle membrane analysis KW - Fluorescence imaging KW - Dreidimensionale Bildverarbeitung KW - Bildsegmentierung KW - Konvexe Optimierung Y1 - 2011 U6 - https://doi.org/10.1016/j.patcog.2011.04.019 VL - 44 IS - 12 SP - 2944 EP - 2958 PB - Elsevier ER - TY - CHAP A1 - Palm, Christoph ED - Byrne, Michael F. ED - Parsa, Nasim ED - Greenhill, Alexandra T. ED - Chahal, Daljeet ED - Ahmad, Omer ED - Bagci, Ulas T1 - History, Core Concepts, and Role of AI in Clinical Medicine T2 - AI in Clinical Medicine: A Practical Guide for Healthcare Professionals N2 - The field of AI is characterized by robust promises, astonishing successes, and remarkable breakthroughs. AI will play a major role in all domains of clinical medicine, but the role of AI in relation to the physician is not yet completely determined. The term artificial intelligence or AI is broad, and several different terms are used in this context that must be organized and demystified. This chapter will review the key concepts and methods of AI, and will introduce some of the different roles for AI in relation to the physician.
KW - artificial intelligence KW - healthcare Y1 - 2023 SN - 978-1-119-79064-8 U6 - https://doi.org/10.1002/9781119790686.ch5 SP - 49 EP - 55 PB - Wiley ET - 1. Aufl. ER - TY - JOUR A1 - Rückert, Tobias A1 - Rückert, Daniel A1 - Palm, Christoph T1 - Methods and datasets for segmentation of minimally invasive surgical instruments in endoscopic images and videos: A review of the state of the art JF - Computers in Biology and Medicine N2 - In the field of computer- and robot-assisted minimally invasive surgery, enormous progress has been made in recent years based on the recognition of surgical instruments in endoscopic images and videos. In particular, the determination of the position and type of instruments is of great interest. Current work involves both spatial and temporal information, with the idea that predicting the movement of surgical tools over time may improve the quality of the final segmentations. The provision of publicly available datasets has recently encouraged the development of new methods, mainly based on deep learning. In this review, we identify and characterize datasets used for method development and evaluation and quantify their frequency of use in the literature. We further present an overview of the current state of research regarding the segmentation and tracking of minimally invasive surgical instruments in endoscopic images and videos. The paper focuses on methods that work purely visually, without markers of any kind attached to the instruments, considering both single-frame semantic and instance segmentation approaches, as well as those that incorporate temporal information. The publications analyzed were identified through the platforms Google Scholar, Web of Science, and PubMed. The search terms used were “instrument segmentation”, “instrument tracking”, “surgical tool segmentation”, and “surgical tool tracking”, resulting in a total of 741 articles published between 01/2015 and 07/2023, of which 123 were included using systematic selection criteria. A discussion of the reviewed literature is provided, highlighting existing shortcomings and emphasizing the available potential for future developments. KW - Deep Learning KW - Minimal-invasive Chirurgie KW - Bildsegmentierung KW - Surgical instrument segmentation KW - Surgical instrument tracking KW - Spatio-temporal information KW - Endoscopic surgery KW - Robot-assisted surgery Y1 - 2024 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:898-opus4-69830 N1 - Corresponding author: Tobias Rückert N1 - Corrigendum unter: https://opus4.kobv.de/opus4-oth-regensburg/frontdoor/index/index/docId/7033 VL - 169 PB - Elsevier CY - Amsterdam ER - TY - JOUR A1 - Ruewe, Marc A1 - Eigenberger, Andreas A1 - Klein, Silvan A1 - von Riedheim, Antonia A1 - Gugg, Christine A1 - Prantl, Lukas A1 - Palm, Christoph A1 - Weiherer, Maximilian A1 - Zeman, Florian A1 - Anker, Alexandra T1 - Precise Monitoring of Returning Sensation in Digital Nerve Lesions by 3-D Imaging: A Proof-of-Concept Study JF - Plastic and Reconstructive Surgery N2 - Digital nerve lesions result in a loss of tactile sensation reflected by an anesthetic area (AA) at the radial or ulnar aspect of the respective digit. Yet, available tools to monitor the recovery of tactile sense have been criticized for their lack of validity. However, the precise quantification of AA dynamics by three-dimensional (3-D) imaging could serve as an accurate surrogate to monitor recovery following digital nerve repair. 
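The relative anesthetic area (AA) quantified in the Ruewe et al. entry is, in essence, a ratio of surface areas on a 3D model. The following is a minimal sketch of such a computation under stated assumptions: a triangle mesh and a hypothetical boolean per-face mask of the marked region. The study's actual Vectra-based pipeline is not published as code.

```python
# Illustrative only: relative AA as marked surface area over total digit area.
# Assumes the scan is available as a triangle mesh and that the marked region
# has already been mapped to a boolean numpy mask with one entry per face.
import trimesh

def relative_anesthetic_area(mesh_path, marked_faces):
    mesh = trimesh.load(mesh_path, force="mesh")
    face_areas = mesh.area_faces            # per-triangle surface areas
    return float(face_areas[marked_faces].sum() / face_areas.sum())
```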
For validation, AAs were marked on the digits of healthy volunteers to simulate the AA of an impaired cutaneous innervation. Three-dimensional models were composed from raw images that had been acquired with a 3-D camera (Vectra H2) to precisely quantify the relative AA for each digit (3-D models, n = 80). Operator properties varied regarding individual experience in 3-D imaging and image processing. Additionally, the concept was applied in a clinical case study. Images taken by experienced photographers were rated as having better quality (p < 0.001) and required less processing time (p = 0.020). Quantification of the relative AA was not altered significantly by the experience level of either the photographer (p = 0.425) or the image assembler (p = 0.749). The proposed concept allows precise and reliable surface quantification of digits and can be performed consistently without relevant distortion due to a lack of examiner experience. Routine 3-D imaging of the AA has great potential to provide visual evidence of the various returning states of sensation and to convert sensory nerve recovery into a metric variable with high responsiveness to temporal progress. KW - 3D imaging Y1 - 2023 U6 - https://doi.org/10.1097/PRS.0000000000010456 SN - 1529-4242 VL - 152 IS - 4 SP - 670e EP - 674e PB - Lippincott Williams & Wilkins CY - Philadelphia, Pa. ER - TY - GEN A1 - Scheppach, Markus A1 - Rauber, David A1 - Stallhofer, Johannes A1 - Muzalyova, Anna A1 - Otten, Vera A1 - Manzeneder, Carolin A1 - Schwamberger, Tanja A1 - Wanzl, Julia A1 - Schlottmann, Jakob A1 - Tadic, Vidan A1 - Probst, Andreas A1 - Schnoy, Elisabeth A1 - Römmele, Christoph A1 - Fleischmann, Carola A1 - Meinikheim, Michael A1 - Miller, Silvia A1 - Märkl, Bruno A1 - Palm, Christoph A1 - Messmann, Helmut A1 - Ebigbo, Alanna T1 - Performance comparison of a deep learning algorithm with endoscopists in the detection of duodenal villous atrophy (VA) T2 - Endoscopy N2 - Aims VA is an endoscopic finding of celiac disease (CD) that can easily be missed if the pretest probability is low. In this study, we aimed to develop an artificial intelligence (AI) algorithm for the detection of villous atrophy on endoscopic images. Methods 858 images from 182 patients with VA and 846 images from 323 patients with normal duodenal mucosa were used for training and internal validation of an AI algorithm (ResNet18). A separate dataset was used for external validation, as well as for determining the detection performance of experts, trainees, and trainees with AI support. According to the AI consultation distribution, images were stratified into “easy” and “difficult”. Results Internal validation showed a sensitivity, specificity, and accuracy of 82%, 85%, and 84%, respectively; external validation showed 90%, 76%, and 84%. The algorithm was significantly more sensitive and accurate than trainees, trainees with AI support, and experts in endoscopy. AI support in trainees was associated with significantly improved performance. While all endoscopists showed significantly lower detection rates for “difficult” images, the AI performance remained stable. Conclusions The algorithm outperformed trainees and experts in sensitivity and accuracy for VA detection. The significant improvement with AI support suggests a potential clinical benefit. The stable performance of the algorithm on “easy” and “difficult” test images may indicate an advantage in macroscopically challenging cases.
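The ResNet18 named in the Scheppach et al. abstract above points to a standard transfer-learning setup. A minimal sketch, assuming fine-tuning of a pretrained torchvision ResNet18 for the two classes; the optimizer and learning rate are illustrative choices, not taken from the paper.

```python
# Sketch: two-class fine-tuning (villous atrophy vs. normal duodenal mucosa).
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)  # replace the ImageNet head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, labels):
    """One optimization step on a batch of endoscopic images."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```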
Y1 - 2023 U6 - https://doi.org/10.1055/s-0043-1765421 VL - 55 IS - S02 PB - Thieme ER - TY - JOUR A1 - Weiherer, Maximilian A1 - Eigenberger, Andreas A1 - Egger, Bernhard A1 - Brébant, Vanessa A1 - Prantl, Lukas A1 - Palm, Christoph T1 - Learning the shape of female breasts: an open-access 3D statistical shape model of the female breast built from 110 breast scans JF - The Visual Computer N2 - We present the Regensburg Breast Shape Model (RBSM)—a 3D statistical shape model of the female breast built from 110 breast scans acquired in a standing position, and the first publicly available. Together with the model, a fully automated, pairwise surface registration pipeline used to establish dense correspondence among 3D breast scans is introduced. Our method is computationally efficient and requires only four landmarks to guide the registration process. A major challenge when modeling female breasts from surface-only 3D breast scans is the non-separability of breast and thorax. In order to weaken the strong coupling between breast and surrounding areas, we propose to minimize the variance outside the breast region as much as possible. To achieve this goal, a novel concept called breast probability masks (BPMs) is introduced. A BPM assigns probabilities to each point of a 3D breast scan, telling how likely it is that a particular point belongs to the breast area. During registration, we use BPMs to align the template to the target as accurately as possible inside the breast region and only roughly outside. This simple yet effective strategy significantly reduces the unwanted variance outside the breast region, leading to better statistical shape models in which breast shapes are quite well decoupled from the thorax. The RBSM is thus able to produce a variety of different breast shapes as independently as possible from the shape of the thorax. Our systematic experimental evaluation reveals a generalization ability of 0.17 mm and a specificity of 2.8 mm. To underline the expressiveness of the proposed model, we finally demonstrate in two showcase applications how the RBSM can be used for surgical outcome simulation and the prediction of a missing breast from the remaining one. Our model is available at https://www.rbsm.re-mic.de/. KW - Statistical shape model KW - Non-rigid surface registration KW - Breast imaging KW - Surgical outcome simulation KW - Breast reconstruction surgery Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:898-opus4-30506 N1 - Corresponding author: Christoph Palm N1 - Zugehörige arXiv-Publikation: https://opus4.kobv.de/opus4-oth-regensburg/frontdoor/index/index/docId/2023 VL - 39 IS - 4 SP - 1597 EP - 1616 PB - Springer Nature ER - TY - JOUR A1 - Mang, Andreas A1 - Schnabel, Julia A. A1 - Crum, William R. A1 - Modat, Marc A1 - Camara-Rey, Oscar A1 - Palm, Christoph A1 - Caseiras, Gisele Brasil A1 - Jäger, H. Rolf A1 - Ourselin, Sébastien A1 - Buzug, Thorsten M. A1 - Hawkes, David J. T1 - Consistency of parametric registration in serial MRI studies of brain tumor progression JF - International Journal of Computer Assisted Radiology and Surgery N2 - Object The consistency of parametric registration in multi-temporal magnetic resonance (MR) imaging studies was evaluated. Materials and methods Serial MRI scans of adult patients with a brain tumor (glioma) were aligned by parametric registration. The performance of low-order spatial alignment (6/9/12 degrees of freedom) of different 3D serial MR-weighted images is evaluated. 
A registration protocol for the alignment of all images to one reference coordinate system at baseline is presented. Registration results were evaluated for both multimodal intra-timepoint and monomodal multi-temporal registration. The latter case might present a challenge to automatic intensity-based registration algorithms due to ill-defined correspondences. The performance of our algorithm was assessed by testing the inverse registration consistency. Four different similarity measures were evaluated to assess consistency. Results Careful visual inspection suggests that images are well aligned, but their consistency may be imperfect. Sub-voxel inconsistency within the brain was found for all similarity measures used for parametric multi-temporal registration. T1-weighted images were most reliable for establishing spatial correspondence between different timepoints. Conclusions The parametric registration algorithm is feasible for use in this application. The sub-voxel resolution of the mean displacement error of the registration transformations demonstrates that the algorithm converges to an almost identical solution for forward and reverse registration. KW - Inverse registration consistency KW - Parametric serial MR image registration KW - Tumor disease progression KW - Kernspintomografie KW - Registrierung KW - Hirntumor Y1 - 2008 U6 - https://doi.org/10.1007/s11548-008-0234-5 VL - 3 IS - 3-4 SP - 201 EP - 211 ER - TY - CHAP A1 - Chang, Ching-Sheng A1 - Lin, Jin-Fa A1 - Lee, Ming-Ching A1 - Palm, Christoph ED - Tolxdorff, Thomas ED - Deserno, Thomas M. ED - Handels, Heinz ED - Maier, Andreas ED - Maier-Hein, Klaus H. ED - Palm, Christoph T1 - Semantic Lung Segmentation Using Convolutional Neural Networks T2 - Bildverarbeitung für die Medizin 2020. Algorithmen - Systeme - Anwendungen. Proceedings des Workshops vom 15. bis 17. März 2020 in Berlin N2 - Chest X-ray (CXR) images, as part of a non-invasive diagnosis method, are commonly used in today’s medical workflow. In traditional methods, physicians usually use their experience to interpret CXR images; however, there is large interobserver variance. Computer vision may be used as a standard for assisted diagnosis. In this study, we applied an encoder-decoder neural network architecture for automatic lung region detection. We compared a three-class approach (left lung, right lung, background) and a two-class approach (lung, background). The differentiation of left and right lungs as a direct result of semantic segmentation based on neural networks, rather than by post-processing a lung-background segmentation, is done here for the first time. Our evaluation was done on the NIH Chest X-ray dataset, from which 1736 images were extracted and manually annotated. We achieved 94.9% mIoU and 92% mIoU as segmentation quality measures for the two-class model and the three-class model, respectively. This result is very promising for the segmentation of lung regions with the simultaneous classification of left and right lungs in mind. KW - Neuronales Netz KW - Segmentierung KW - Brustkorb KW - Deep Learning KW - Encoder-Decoder Network KW - Chest X-Ray Y1 - 2020 SN - 978-3-658-29266-9 U6 - https://doi.org/10.1007/978-3-658-29267-6_17 SP - 75 EP - 80 PB - Springer Vieweg CY - Wiesbaden ER - TY - CHAP A1 - Weber, Joachim A1 - Brawanski, Alexander A1 - Palm, Christoph T1 - Parallelization of FSL-Fast segmentation of MRI brain data T2 - 58. Jahrestagung der Deutschen Gesellschaft für Medizinische Informatik, Biometrie und Epidemiologie e.V.
(GMDS 2013), Lübeck, 01.-05.09.2013 Y1 - 2013 U6 - https://doi.org/10.3205/13gmds261 N1 - Meeting Abstract IS - DocAbstr. 329 PB - German Medical Science GMS Publishing House CY - Düsseldorf ER - TY - JOUR A1 - Neuschaefer-Rube, C. A1 - Lehmann, Thomas M. A1 - Palm, Christoph A1 - Bredno, J. A1 - Klajman, S. A1 - Spitzer, Klaus T1 - 3D-Visualisierung glottaler Abduktionsbewegungen JF - Aktuelle phoniatrisch-pädaudiologische Aspekte Y1 - 2001 SN - 3-922766-76-5 VL - 2001/2002 IS - 9 SP - 58 EP - 61 PB - Median ER - TY - JOUR A1 - Palm, Christoph A1 - Dehnhardt, Markus A1 - Vieten, Andrea A1 - Pietrzyk, Uwe A1 - Bauer, Andreas A1 - Zilles, Karl T1 - 3D rat brain tumors JF - Naunyn-Schmiedebergs Archives of Pharmacology Y1 - 2005 VL - 371 IS - R103 ER - TY - GEN A1 - Scheppach, Markus W. A1 - Mendel, Robert A1 - Probst, Andreas A1 - Meinikheim, Michael A1 - Palm, Christoph A1 - Messmann, Helmut A1 - Ebigbo, Alanna T1 - Artificial Intelligence (AI) – assisted vessel and tissue recognition during third space endoscopy (Smart ESD) T2 - Zeitschrift für Gastroenterologie N2 - Clinical setting Third space procedures such as endoscopic submucosal dissection (ESD) and peroral endoscopic myotomy (POEM) are complex minimally invasive techniques with an elevated risk for operator-dependent adverse events such as bleeding and perforation. This risk arises from accidental dissection into the muscle layer or through submucosal blood vessels, as the submucosal cutting plane within the expanding resection site is not always apparent. Deep learning algorithms have shown considerable potential for the detection and characterization of gastrointestinal lesions. So-called AI clinical decision support solutions (AI-CDSS) are commercially available for polyp detection during colonoscopy. Until now, these computer programs have concentrated on diagnostics, whereas an AI-CDSS for interventional endoscopy has not yet been introduced. We aimed to develop an AI-CDSS ("Smart ESD") for real-time intra-procedural detection and delineation of blood vessels, tissue structures and endoscopic instruments during third-space endoscopic procedures. Characteristics of Smart ESD An AI-CDSS was developed that delineates blood vessels, tissue structures and endoscopic instruments during third-space endoscopy in real time. The output can be displayed as an overlay over the endoscopic image with different modes of visualization, such as a color-coded semitransparent area overlay or border tracing (demonstration video). In this way, the optimal layer for dissection can be visualized; depending on the applied technique (ESD or POEM), it lies closely above or directly at the muscle layer. Furthermore, relevant blood vessels (thickness > 1 mm) are delineated. Spatial proximity between the electrosurgical knife and a blood vessel triggers a warning signal. With this guidance system, inadvertent dissection through blood vessels could be averted. Technical specifications A DeepLabv3+ neural network architecture with KSAC and a 101-layer ResNeSt backbone was used for the development of Smart ESD. It was trained and validated with 2565 annotated still images from 27 full-length third-space endoscopic videos. The annotation classes were blood vessel, submucosal layer, muscle layer, electrosurgical knife, and endoscopic instrument shaft. A test on a separate dataset yielded an intersection over union (IoU) of 68%, a Dice score of 80%, and a pixel accuracy of 87%, demonstrating a high overlap between expert and AI segmentation.
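The Intersection over Union, Dice score, and pixel accuracy quoted in the technical specifications above can be computed directly from integer label maps. A minimal sketch, assuming class-wise averaging over the classes present in either map (the abstract does not state the exact averaging convention):

```python
# Sketch: overlap metrics between predicted and reference segmentations.
import numpy as np

def segmentation_metrics(pred, target, num_classes):
    """pred, target: integer label maps of identical shape."""
    ious, dices = [], []
    for c in range(num_classes):
        p, t = pred == c, target == c
        union = np.logical_or(p, t).sum()
        if union == 0:
            continue                       # class absent in both maps
        inter = np.logical_and(p, t).sum()
        ious.append(inter / union)
        dices.append(2 * inter / (p.sum() + t.sum()))
    pixel_acc = (pred == target).mean()
    return np.mean(ious), np.mean(dices), pixel_acc
```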
Further experiments on standardized video clips showed a mean vessel detection rate (VDR) of 85%, with values of 92%, 70%, and 95% for POEM, rectal ESD, and esophageal ESD, respectively. False-positive measurements occurred 0.75 times per minute. Seven out of nine vessels that caused intraprocedural bleeding were caught by the algorithm, as were both vessels that required hemostasis via hemostatic forceps. Future perspectives Smart ESD performed well for vessel and tissue detection and delineation on still images, as well as on video clips. During a live demonstration in the endoscopy suite, the clinical applicability of the innovation was examined. The lag time for processing of the live endoscopic image was too short to be visually detectable for the interventionist. Even though the algorithm could not be applied during actual dissection by the interventionist, Smart ESD appeared readily deployable during visual assessment by ESD experts. Therefore, we plan to conduct a clinical trial in order to obtain CE certification of the algorithm. This new technology may improve procedural safety and speed, as well as the training of modern minimally invasive endoscopic resection techniques. KW - Artificial Intelligence KW - Medical Image Computing KW - Endoscopy KW - Bildgebendes Verfahren KW - Medizin KW - Künstliche Intelligenz KW - Endoskopie Y1 - 2022 U6 - https://doi.org/10.1055/s-0042-1755110 VL - 60 IS - 08 PB - Georg Thieme Verlag CY - Stuttgart ER - TY - GEN A1 - Meinikheim, Michael A1 - Mendel, Robert A1 - Scheppach, Markus W. A1 - Probst, Andreas A1 - Prinz, Friederike A1 - Schwamberger, Tanja A1 - Schlottmann, Jakob A1 - Gölder, Stefan Karl A1 - Walter, Benjamin A1 - Steinbrück, Ingo A1 - Palm, Christoph A1 - Messmann, Helmut A1 - Ebigbo, Alanna T1 - INFLUENCE OF AN ARTIFICIAL INTELLIGENCE (AI) BASED DECISION SUPPORT SYSTEM (DSS) ON THE DIAGNOSTIC PERFORMANCE OF NON-EXPERTS IN BARRETT'S ESOPHAGUS RELATED NEOPLASIA (BERN) T2 - Endoscopy N2 - Aims Barrett's esophagus related neoplasia (BERN) is difficult to detect and characterize during endoscopy, even for expert endoscopists. We aimed to assess the add-on effect of an Artificial Intelligence (AI) algorithm (Barrett-Ampel) as a decision support system (DSS) for non-expert endoscopists in the evaluation of Barrett's esophagus (BE) and BERN. Methods Twelve videos with multimodal imaging (white light (WL), narrow-band imaging (NBI), and texture and color enhanced imaging (TXI)) of histologically confirmed BE and BERN were assessed by expert and non-expert endoscopists. For each video, endoscopists were asked to identify the area of BERN and decide on the biopsy spot. Videos were assessed by the AI algorithm, and regions of BERN were highlighted in real time by a transparent overlay. Finally, endoscopists were shown the AI videos and asked to either confirm or change their initial decision based on the AI support. Results Barrett-Ampel correctly identified all areas of BERN, irrespective of the imaging modality (WL, NBI, TXI), but misinterpreted two inflammatory lesions (accuracy = 75%). Expert endoscopists had a similar performance (accuracy = 70.8%), while non-experts had an accuracy of 58.3%. When AI was implemented as a DSS, non-expert endoscopists improved their diagnostic accuracy to 75%. Conclusions AI may have the potential to support non-expert endoscopists in the assessment of videos of BE and BERN. Limitations of this study include the low number of videos used.
Randomized clinical trials in a real-life setting should be performed to confirm these results. KW - Artificial Intelligence KW - Barrett's Esophagus KW - Speiseröhrenkrankheit KW - Künstliche Intelligenz KW - Diagnose Y1 - 2022 U6 - https://doi.org/10.1055/s-00000012 VL - 54 IS - S 01 SP - S39 PB - Thieme ER - TY - JOUR A1 - Meinikheim, Michael A1 - Mendel, Robert A1 - Palm, Christoph A1 - Probst, Andreas A1 - Muzalyova, Anna A1 - Scheppach, Markus Wolfgang A1 - Nagl, Sandra A1 - Schnoy, Elisabeth A1 - Römmele, Christoph A1 - Schulz, Dominik Andreas Helmut Otto A1 - Schlottmann, Jakob A1 - Prinz, Friederike A1 - Rauber, David A1 - Rückert, Tobias A1 - Matsumura, Tomoaki A1 - Fernández-Esparrach, Glòria A1 - Parsa, Nasim A1 - Byrne, Michael F A1 - Messmann, Helmut A1 - Ebigbo, Alanna T1 - Effect of AI on performance of endoscopists to detect Barrett neoplasia: A Randomized Tandem Trial JF - Endoscopy N2 - Background and study aims To evaluate the effect of an AI-based clinical decision support system (AI) on the performance and diagnostic confidence of endoscopists during the assessment of Barrett's esophagus (BE). Patients and Methods Ninety-six standardized endoscopy videos were assessed by 22 endoscopists from 12 different centers with varying degrees of BE experience. The assessment was randomized into two video sets: group A (review first without AI and second with AI) and group B (review first with AI and second without AI). Endoscopists were required to evaluate each video for the presence of Barrett's esophagus-related neoplasia (BERN) and then decide on a spot for a targeted biopsy. After the second assessment, they were allowed to change their clinical decision and confidence level. Results AI had a standalone sensitivity, specificity, and accuracy of 92.2%, 68.9%, and 81.6%, respectively. Without AI, BE experts had an overall sensitivity, specificity, and accuracy of 83.3%, 58.1%, and 71.5%, respectively. With AI, BE nonexperts showed a significant improvement in sensitivity and specificity when videos were assessed a second time with AI (sensitivity 69.7% (95% CI, 65.2%-74.2%) to 78.0% (95% CI, 74.0%-82.0%); specificity 67.3% (95% CI, 62.5%-72.2%) to 72.7% (95% CI, 68.2%-77.3%)). In addition, the diagnostic confidence of BE nonexperts improved significantly with AI. Conclusion BE nonexperts benefitted significantly from the additional AI support. BE experts and nonexperts remained below the standalone performance of AI, suggesting that there may be other factors influencing endoscopists to follow or discard AI advice. Y1 - 2024 U6 - https://doi.org/10.1055/a-2296-5696 SN - 0013-726X N1 - Accepted Manuscript PB - Georg Thieme Verlag ER - TY - JOUR A1 - Scheppach, Markus W. A1 - Mendel, Robert A1 - Probst, Andreas A1 - Meinikheim, Michael A1 - Palm, Christoph A1 - Messmann, Helmut A1 - Ebigbo, Alanna T1 - ARTIFICIAL INTELLIGENCE (AI) – ASSISTED VESSEL AND TISSUE RECOGNITION IN THIRD-SPACE ENDOSCOPY JF - Endoscopy N2 - Aims Third-space endoscopy procedures such as endoscopic submucosal dissection (ESD) and peroral endoscopic myotomy (POEM) are complex interventions with an elevated risk of operator-dependent adverse events, such as intra-procedural bleeding and perforation. We aimed to design an artificial intelligence clinical decision support solution (AI-CDSS, “Smart ESD”) for the detection and delineation of vessels, tissue structures, and instruments during third-space endoscopy procedures.
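The 95% confidence intervals quoted for sensitivity and specificity in the Meinikheim et al. entry above are binomial proportions. One standard way to derive such intervals is the Wilson score interval; the paper does not state which method was used, so the sketch below, including the counts, is purely illustrative.

```python
# Sketch: Wilson score interval for a proportion such as sensitivity.
import math

def wilson_ci(successes, n, z=1.96):
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# Hypothetical counts giving roughly the 69.7% sensitivity quoted above.
low, high = wilson_ci(304, 436)
```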
Methods Twelve full-length third-space endoscopy videos were extracted from the Augsburg University Hospital database. 1686 frames were annotated for the following categories: submucosal layer, blood vessels, electrosurgical knife, and endoscopic instrument. A DeepLabv3+ neural network with a 101-layer ResNet backbone was trained and validated internally. Finally, the ability of the AI system to detect visible vessels during ESD and POEM was determined on 24 separate video clips of 7 to 46 seconds duration, showing 33 predefined vessels. These video clips were also assessed by an expert in third-space endoscopy. Results Smart ESD showed a vessel detection rate (VDR) of 93.94%, while an average of 1.87 false-positive signals were recorded per minute. The VDR of the expert endoscopist was 90.1%, with no false-positive findings. On the internal validation dataset using still images, the AI system demonstrated an Intersection over Union (IoU), mean Dice score, and pixel accuracy of 63.47%, 76.18%, and 86.61%, respectively. Conclusions This is the first AI-CDSS aiming to mitigate operator-dependent limitations during third-space endoscopy. Further clinical trials are underway to better understand the role of AI in such procedures. KW - Artificial Intelligence KW - Third-Space Endoscopy KW - Smart ESD Y1 - 2022 U6 - https://doi.org/10.1055/s-0042-1745037 VL - 54 IS - S01 SP - S175 PB - Thieme ER - TY - JOUR A1 - Hartmann, Robin A1 - Nieberle, Felix A1 - Palm, Christoph A1 - Brébant, Vanessa A1 - Prantl, Lukas A1 - Kuehle, Reinald A1 - Reichert, Torsten E. A1 - Taxis, Juergen A1 - Ettl, Tobias T1 - Utility of Smartphone-based Three-dimensional Surface Imaging for Digital Facial Anthropometry JF - JPRAS Open N2 - Background The utilization of three-dimensional (3D) surface imaging for facial anthropometry is a significant asset for patients undergoing maxillofacial surgery. Notably, there have been recent advancements in smartphone technology that enable 3D surface imaging. In this study, anthropometric assessments of the face were performed using a smartphone and a sophisticated 3D surface imaging system. Methods Thirty healthy volunteers (15 females and 15 males) were included in the study. An iPhone 14 Pro (Apple Inc., USA) using the application 3D Scanner App (Laan Consulting Corp., USA) and the Vectra M5 (Canfield Scientific, USA) were employed to create 3D surface models. For each participant, 19 anthropometric measurements were conducted on the 3D surface models. Subsequently, the anthropometric measurements generated by the two approaches were compared. The statistical techniques employed included the paired t-test, the paired Wilcoxon signed-rank test, Bland–Altman analysis, and calculation of the intraclass correlation coefficient (ICC). Results All measurements showed excellent agreement between the smartphone-based and Vectra M5-based measurements (ICC between 0.85 and 0.97). Statistical analysis revealed no statistically significant differences in the central tendencies for 17 of the 19 linear measurements. Despite the excellent agreement found, Bland–Altman analysis revealed that the 95% limits of agreement between the two methods exceeded ±3 mm for the majority of measurements. Conclusion Digital facial anthropometry using smartphones can serve as a valuable supplementary tool for surgeons, enhancing their communication with patients.
However, the presented data suggest that digital facial anthropometry using smartphones may not yet be suitable for certain diagnostic purposes that require high accuracy. KW - Three-dimensional surface imaging KW - Stereophotogrammetry KW - Smartphone-based surface imaging KW - Digital anthropometry KW - Facial anthropometry Y1 - 2024 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:898-opus4-70348 VL - 39 SP - 330 EP - 343 PB - Elsevier ER - TY - JOUR A1 - Rueckert, Tobias A1 - Rueckert, Daniel A1 - Palm, Christoph T1 - Corrigendum to “Methods and datasets for segmentation of minimally invasive surgical instruments in endoscopic images and videos: A review of the state of the art” [Comput. Biol. Med. 169 (2024) 107929] JF - Computers in Biology and Medicine N2 - The authors regret that the SAR-RARP50 dataset is missing from the description of publicly available datasets presented in Chapter 4. Y1 - 2024 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:898-opus4-70337 N1 - Aufsatz unter: https://opus4.kobv.de/opus4-oth-regensburg/frontdoor/index/index/docId/6983 PB - Elsevier ER - TY - JOUR A1 - Römmele, Christoph A1 - Mendel, Robert A1 - Barrett, Caroline A1 - Kiesl, Hans A1 - Rauber, David A1 - Rückert, Tobias A1 - Kraus, Lisa A1 - Heinkele, Jakob A1 - Dhillon, Christine A1 - Grosser, Bianca A1 - Prinz, Friederike A1 - Wanzl, Julia A1 - Fleischmann, Carola A1 - Nagl, Sandra A1 - Schnoy, Elisabeth A1 - Schlottmann, Jakob A1 - Dellon, Evan S. A1 - Messmann, Helmut A1 - Palm, Christoph A1 - Ebigbo, Alanna T1 - An artificial intelligence algorithm is highly accurate for detecting endoscopic features of eosinophilic esophagitis JF - Scientific Reports N2 - The endoscopic features associated with eosinophilic esophagitis (EoE) may be missed during routine endoscopy. We aimed to develop and evaluate an Artificial Intelligence (AI) algorithm for detecting and quantifying the endoscopic features of EoE in white light images, supplemented by the EoE Endoscopic Reference Score (EREFS). An AI algorithm (AI-EoE) was constructed and trained to differentiate between EoE and normal esophagus using endoscopic white light images extracted from the database of the University Hospital Augsburg. In addition to binary classification, a second algorithm was trained with specific auxiliary branches for each EREFS feature (AI-EoE-EREFS). The AI algorithms were evaluated on an external dataset from the University of North Carolina, Chapel Hill (UNC), and compared with the performance of human endoscopists with varying levels of experience. The overall sensitivity, specificity, and accuracy of AI-EoE were 0.93 for all measures, while the AUC was 0.986. With additional auxiliary branches for the EREFS categories, the performance of the AI algorithm (AI-EoE-EREFS) improved to 0.96, 0.94, 0.95, and 0.992 for sensitivity, specificity, accuracy, and AUC, respectively. AI-EoE and AI-EoE-EREFS performed significantly better than endoscopy beginners and senior fellows on the same set of images. An AI algorithm can be trained to detect and quantify endoscopic features of EoE with excellent performance scores. The addition of the EREFS criteria improved the performance of the AI algorithm, which performed significantly better than endoscopists with a lower or medium experience level.
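The auxiliary EREFS branches described in the Römmele et al. entry above amount to a shared backbone with one main head and several auxiliary classification heads. A sketch of such an architecture, assuming a ResNet-50 backbone and one binary head per EREFS feature; the head layout and feature list are illustrative, not the authors' exact model.

```python
# Sketch: EoE classifier with auxiliary EREFS heads on a shared backbone.
import torch.nn as nn
from torchvision import models

class EoEWithEREFS(nn.Module):
    def __init__(self, erefs=("edema", "rings", "exudates", "furrows")):
        super().__init__()
        backbone = models.resnet50(weights=None)
        dim = backbone.fc.in_features
        backbone.fc = nn.Identity()         # expose pooled features
        self.backbone = backbone
        self.eoe_head = nn.Linear(dim, 2)   # EoE vs. normal esophagus
        self.aux_heads = nn.ModuleDict({k: nn.Linear(dim, 2) for k in erefs})

    def forward(self, x):
        feat = self.backbone(x)
        return self.eoe_head(feat), {k: h(feat) for k, h in self.aux_heads.items()}
```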
KW - Artificial Intelligence KW - Smart Endoscopy KW - eosinophilic esophagitis Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:898-opus4-46928 VL - 12 PB - Nature Portfolio CY - London ER - TY - GEN A1 - Römmele, Christoph A1 - Mendel, Robert A1 - Rauber, David A1 - Rückert, Tobias A1 - Byrne, Michael F. A1 - Palm, Christoph A1 - Messmann, Helmut A1 - Ebigbo, Alanna T1 - Endoscopic Diagnosis of Eosinophilic Esophagitis Using a Deep Learning Algorithm T2 - Endoscopy N2 - Aims Eosinophilic esophagitis (EoE) is easily missed during endoscopy, either because physicians are not familiar with its endoscopic features or because the morphologic changes are too subtle. In this preliminary paper, we present the first attempt to detect EoE in endoscopic white light (WL) images using a deep learning network (EoE-AI). Methods 401 WL images of eosinophilic esophagitis and 871 WL images of normal esophageal mucosa were evaluated. All images were assessed for the Endoscopic Reference Score (EREFS) (edema, rings, exudates, furrows, strictures); images with strictures were excluded. EoE was defined as the presence of at least 15 eosinophils per high power field on biopsy. A convolutional neural network based on the ResNet architecture with several five-fold cross-validation runs was used. Adding auxiliary EREFS-classification branches to the neural network allowed the inclusion of the scores as optimization criteria during training. EoE-AI was evaluated for sensitivity, specificity, and F1-score. In addition, two human endoscopists evaluated the images. Results EoE-AI showed a mean sensitivity, specificity, and F1 of 0.759, 0.976, and 0.834, respectively, averaged over the five distinct cross-validation runs. With the EREFS-augmented architecture, a mean sensitivity, specificity, and F1-score of 0.848, 0.945, and 0.861, respectively, were demonstrated. In comparison, the two human endoscopists had an average sensitivity, specificity, and F1-score of 0.718, 0.958, and 0.793. Conclusions To the best of our knowledge, this is the first application of deep learning to endoscopic images of EoE, which were also assessed after augmentation with the EREFS score. The next step is the evaluation of EoE-AI using an external dataset. We then plan to assess the EoE-AI tool on endoscopic videos, and also in real time. This preliminary work is encouraging regarding the ability of AI to enhance physician detection of EoE, and potentially to perform a true “optical biopsy”, but more work is needed. KW - Eosinophilic Esophagitis KW - Endoscopy KW - Deep Learning Y1 - 2021 U6 - https://doi.org/10.1055/s-0041-1724274 VL - 53 IS - S 01 PB - Georg Thieme Verlag CY - Stuttgart ER - TY - GEN A1 - Scheppach, Markus W. A1 - Rauber, David A1 - Mendel, Robert A1 - Palm, Christoph A1 - Byrne, Michael F. A1 - Messmann, Helmut A1 - Ebigbo, Alanna T1 - Detection Of Celiac Disease Using A Deep Learning Algorithm T2 - Endoscopy N2 - Aims Celiac disease (CD) is a complex condition caused by an autoimmune reaction to ingested gluten. Due to its polymorphic manifestation and subtle endoscopic presentation, the diagnosis is difficult, and thus the disorder is underreported. We aimed to use deep learning to identify celiac disease on endoscopic images of the small bowel. Methods Patients with small intestinal histology compatible with CD (MARSH classification I-III) were extracted retrospectively from the database of Augsburg University Hospital.
They were compared to patients with no clinical signs of CD and histologically normal small intestinal mucosa. In a first step, MARSH III and normal small intestinal mucosa were differentiated with the help of a deep learning algorithm. For this, the endoscopic white light images were divided into five equal-sized subsets; we avoided splitting the images of one patient across several subsets. A ResNet-50 model was trained with the images from four subsets and then validated with the remaining subset. This process was repeated for each subset, such that each subset was validated once. The sensitivity, specificity, and harmonic mean (F1) of the algorithm were determined. Results The algorithm showed values of 0.83, 0.88, and 0.84 for sensitivity, specificity, and F1, respectively. Further data showing a comparison between the detection rate of the AI model and that of experienced endoscopists will be available at the time of the upcoming conference. Conclusions We present the first clinical report on the use of a deep learning algorithm for the detection of celiac disease using endoscopic images. Further evaluation on an external dataset, as well as of the detection of CD in real time, will follow. However, this work at least suggests that AI can assist endoscopists in the endoscopic diagnosis of CD, and ultimately may be able to perform a true optical biopsy in real time. KW - Celiac Disease KW - Deep Learning Y1 - 2021 U6 - https://doi.org/10.1055/s-0041-1724970 N1 - Digital poster exhibition VL - 53 IS - S 01 PB - Georg Thieme Verlag CY - Stuttgart ER - TY - JOUR A1 - Ebigbo, Alanna A1 - Mendel, Robert A1 - Probst, Andreas A1 - Meinikheim, Michael A1 - Byrne, Michael F. A1 - Messmann, Helmut A1 - Palm, Christoph T1 - Multimodal imaging for detection and segmentation of Barrett’s esophagus-related neoplasia using artificial intelligence JF - Endoscopy N2 - The early diagnosis of cancer in Barrett’s esophagus is crucial for improving the prognosis. However, identifying Barrett’s esophagus-related neoplasia (BERN) is challenging, even for experts [1]. Four-quadrant biopsies may improve the detection of neoplasia, but they can be associated with sampling errors. The application of artificial intelligence (AI) to the assessment of Barrett’s esophagus could improve the diagnosis of BERN, and this has been demonstrated in both preclinical and clinical studies [2] [3]. In this video demonstration, we show the accurate detection and delineation of BERN in two patients ([Video 1]). In part 1, the AI system detects a mucosal cancer about 20 mm in size and accurately delineates the lesion in both white-light and narrow-band imaging. In part 2, a small island of BERN with high-grade dysplasia is detected and delineated in white-light, narrow-band, and texture and color enhancement imaging. The video shows the results using a transparent overlay of the mucosal cancer in real time as well as a full segmentation preview. Additionally, the optical flow allows for the assessment of endoscope movement, something which is inversely related to the reliability of the AI prediction. We demonstrate that multimodal imaging can be applied to the AI-assisted detection and segmentation of even small focal lesions in real time.
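The Ebigbo et al. entry above notes that optical flow is used to assess endoscope movement, which is inversely related to the reliability of the AI prediction. A hedged sketch of that idea using OpenCV's Farnebäck flow; the threshold and the gating rule are assumptions for illustration, not the published system.

```python
# Sketch: estimate camera motion between consecutive grayscale frames and
# gate the AI overlay when the mean flow magnitude suggests unreliable output.
import cv2
import numpy as np

def overlay_is_reliable(prev_gray, curr_gray, max_mean_flow=3.0):
    """prev_gray, curr_gray: single-channel uint8 frames of equal size."""
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, curr_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    magnitude = np.linalg.norm(flow, axis=2)  # per-pixel motion in pixels
    return float(magnitude.mean()) < max_mean_flow
```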
KW - Video KW - Artificial Intelligence KW - Multimodal Imaging Y1 - 2022 U6 - https://doi.org/10.1055/a-1704-7885 VL - 54 IS - 10 PB - Georg Thieme Verlag CY - Stuttgart ET - E-Video ER - TY - INPR A1 - Mendel, Robert A1 - Rueckert, Tobias A1 - Wilhelm, Dirk A1 - Rueckert, Daniel A1 - Palm, Christoph T1 - Motion-Corrected Moving Average: Including Post-Hoc Temporal Information for Improved Video Segmentation N2 - Real-time computational speed and a high degree of precision are requirements for computer-assisted interventions. Applying a segmentation network to a medical video processing task can introduce significant inter-frame prediction noise. Existing approaches can reduce inconsistencies by including temporal information but often impose requirements on the architecture or dataset. This paper proposes a method to include temporal information in any segmentation model and, thus, a technique to improve video segmentation performance without alterations during training or additional labeling. With Motion-Corrected Moving Average, we refine the exponential moving average between the current and previous predictions. Using optical flow to estimate the movement between consecutive frames, we can shift the prior term in the moving-average calculation to align with the geometry of the current frame. The optical flow calculation does not require the output of the model and can therefore be performed in parallel, leading to no significant runtime penalty for our approach. We evaluate our approach on two publicly available segmentation datasets and two proprietary endoscopic datasets and show improvements over a baseline approach. KW - Deep Learning KW - Video KW - Segmentation Y1 - 2024 U6 - https://doi.org/10.48550/arXiv.2403.03120 ER - TY - CHAP A1 - Mendel, Robert A1 - Ebigbo, Alanna A1 - Probst, Andreas A1 - Messmann, Helmut A1 - Palm, Christoph T1 - Barrett’s Esophagus Analysis Using Convolutional Neural Networks T2 - Bildverarbeitung für die Medizin 2017; Algorithmen - Systeme - Anwendungen. Proceedings des Workshops vom 12. bis 14. März 2017 in Heidelberg N2 - We propose an automatic approach for early detection of adenocarcinoma in the esophagus. High-definition endoscopic images (50 cancer, 50 Barrett) are partitioned into a dataset containing approximately equal amounts of patches showing cancerous and non-cancerous regions. A deep convolutional neural network is adapted to the data using a transfer learning approach. The final classification of an image is determined by at least one patch for which the probability of being a cancer patch exceeds a given threshold. The model was evaluated with leave-one-patient-out cross-validation. With a sensitivity and specificity of 0.94 and 0.88, respectively, our findings improve considerably on recently published results on the same image database. Furthermore, the visualization of the class probabilities of each individual patch indicates that our approach might be extensible to the segmentation domain. KW - Speiseröhrenkrebs KW - Diagnose KW - Maschinelles Lernen KW - Bilderkennung KW - Automatische Klassifikation Y1 - 2017 U6 - https://doi.org/10.1007/978-3-662-54345-0_23 SP - 80 EP - 85 PB - Springer CY - Berlin ER - TY - CHAP A1 - Rauber, David A1 - Mendel, Robert A1 - Scheppach, Markus W.
A1 - Ebigbo, Alanna A1 - Messmann, Helmut A1 - Palm, Christoph T1 - Analysis of Celiac Disease with Multimodal Deep Learning T2 - Bildverarbeitung für die Medizin 2022: Proceedings, German Workshop on Medical Image Computing, Heidelberg, June 26-28, 2022 N2 - Celiac disease is an autoimmune disorder caused by gluten that results in an inflammatory response of the small intestine. We investigated whether celiac disease can be detected from endoscopic images through a deep learning approach. The results show that additional clinical parameters can improve the classification accuracy. In this work, we distinguished between healthy tissue and Marsh III, according to the Marsh score system. We first trained a baseline network to classify endoscopic images of the small bowel into these two classes and then augmented the approach with a multimodality component that took the antibody status into account. KW - Deep Learning KW - Endoscopy Y1 - 2022 U6 - https://doi.org/10.1007/978-3-658-36932-3_25 SP - 115 EP - 120 PB - Springer Vieweg CY - Wiesbaden ER - TY - GEN A1 - Meinikheim, Michael A1 - Mendel, Robert A1 - Scheppach, Markus W. A1 - Probst, Andreas A1 - Prinz, Friederike A1 - Schwamberger, Tanja A1 - Schlottmann, Jakob A1 - Gölder, Stefan Karl A1 - Walter, Benjamin A1 - Steinbrück, Ingo A1 - Palm, Christoph A1 - Messmann, Helmut A1 - Ebigbo, Alanna T1 - Einsatz von künstlicher Intelligenz (KI) als Entscheidungsunterstützungssystem für nicht-Experten bei der Beurteilung von Barrett-Ösophagus assoziierten Neoplasien (BERN) T2 - Zeitschrift für Gastroenterologie N2 - Introduction The reliable detection and characterization of Barrett's esophagus-related neoplasia (BERN) is a challenge, even for experienced endoscopists. Aim The aim of this study was to evaluate the add-on effect of an artificial intelligence (AI) system (Barrett-Ampel) as a decision support system for endoscopists without expertise in the examination of BERN. Materials and methods Twelve videos in white light (WL), narrow-band imaging (NBI), and texture and color enhanced imaging (TXI) of histologically confirmed Barrett metaplasia or BERN were evaluated by experts and by examiners without Barrett expertise. The participants were asked to identify BERN appearing in the videos and, where applicable, to mark the optimal biopsy site. Our AI system was subjected to the same test, segmenting BERN in real time and differentiating it by color from the surrounding epithelium. The participants were then shown the videos with additional AI support and, based on this new information, were asked to re-evaluate their initial assessment. Results The Barrett-Ampel identified all BERN regardless of the imaging mode used (WL, NBI, TXI); two inflammatory lesions were misinterpreted (accuracy = 75%). While experts achieved comparable results (accuracy = 70.8%), endoscopists without Barrett expertise reached an accuracy of only 58.3%. When supported by our AI system, however, the non-experts achieved an accuracy of 75%. Conclusion Our AI system has the potential to act as a decision support system in the differentiation between Barrett metaplasia and BERN and thus to assist endoscopists without the corresponding expertise.
One limitation of this study is the small number of videos included. Randomized controlled clinical trials must be conducted to confirm these results. KW - Barrett-Ösophagus KW - Künstliche Intelligenz Y1 - 2022 U6 - https://doi.org/10.1055/s-0042-1745653 VL - 60 IS - 4 SP - 251 PB - Thieme CY - Stuttgart ER - TY - GEN A1 - Scheppach, Markus W. A1 - Mendel, Robert A1 - Probst, Andreas A1 - Meinikheim, Michael A1 - Palm, Christoph A1 - Messmann, Helmut A1 - Ebigbo, Alanna T1 - Intraprozedurale Strukturerkennung bei Third-Space Endoskopie mithilfe eines Deep-Learning Algorithmus T2 - Zeitschrift für Gastroenterologie N2 - Introduction Third-space interventions such as endoscopic submucosal dissection (ESD) and peroral endoscopic myotomy (POEM) are technically demanding and associated with an increased risk of intraprocedural complications such as bleeding or perforation. Modern computer programs that support diagnostic decisions by means of artificial intelligence (AI) are already being used successfully in endoscopy. The aim of the present work was to detect and segment relevant anatomical structures using a deep learning algorithm in order to increase the safety and applicability of ESD and POEM. Methods Twelve full-length video recordings of third-space endoscopies were extracted from the database of the University Hospital Augsburg. 1686 individual frames were annotated and segmented for the categories submucosa, blood vessel, dissection knife, and endoscopic instrument. With this dataset, a DeepLabv3+ neural network based on a ResNet with 101 layers was trained and validated internally using the parameters Intersection over Union (IoU), Dice score, and pixel accuracy. The vessel detection capability of the algorithm was evaluated on 24 video clips with a duration of 7 to 46 seconds containing 33 predefined vessels. This test was also used to determine the vessel detection rate of an expert in third-space endoscopy. Results The algorithm showed a vessel detection rate of 93.94%, with a mean rate of 1.87 false-positive signals per minute. The vessel detection rate of the expert was 90.1%, with no false-positive findings. In the internal validation on individual frames, an IoU of 63.47%, a mean Dice score of 76.18%, and a pixel accuracy of 86.61% were determined. Conclusion This is the first AI algorithm developed for use in therapeutic endoscopy. Preliminary results indicate vessel detection during the procedure comparable to that of experts. Further studies are needed to assess the performance of the algorithm in comparison with experts in more detail and to determine its potential clinical benefit. KW - Deep Learning KW - Third-Space Endoscopy Y1 - 2022 U6 - https://doi.org/10.1055/s-0042-1745652 VL - 60 IS - 04 PB - Thieme CY - Stuttgart ER - TY - GEN A1 - Ebigbo, Alanna A1 - Mendel, Robert A1 - Tziatzios, Georgios A1 - Probst, Andreas A1 - Palm, Christoph A1 - Messmann, Helmut T1 - Real-Time Diagnosis of an Early Barrett's Carcinoma using Artificial Intelligence (AI) - Video Case Demonstration T2 - Endoscopy N2 - Introduction We present a clinical case showing the real-time detection, characterization and delineation of an early Barrett's cancer using AI.
Patients and methods A 70-year-old patient with a long-segment Barrett's esophagus (C5M7) was assessed with an AI algorithm. Results The AI system detected a 10 mm focal lesion, and AI characterization predicted cancer with a probability of >90%. After ESD resection, histopathology showed mucosal adenocarcinoma (T1a (m), R0), confirming the AI diagnosis. Conclusion We demonstrate the real-time AI detection, characterization and delineation of a small and early mucosal Barrett's cancer. KW - Artificial Intelligence KW - Barrett's Carcinoma KW - Speiseröhrenkrebs KW - Künstliche Intelligenz KW - Diagnose Y1 - 2020 U6 - https://doi.org/10.1055/s-0040-1704075 VL - 52 IS - S 01 PB - Thieme ER - TY - INPR A1 - Allan, Max A1 - Kondo, Satoshi A1 - Bodenstedt, Sebastian A1 - Leger, Stefan A1 - Kadkhodamohammadi, Rahim A1 - Luengo, Imanol A1 - Fuentes, Felix A1 - Flouty, Evangello A1 - Mohammed, Ahmed A1 - Pedersen, Marius A1 - Kori, Avinash A1 - Alex, Varghese A1 - Krishnamurthi, Ganapathy A1 - Rauber, David A1 - Mendel, Robert A1 - Palm, Christoph A1 - Bano, Sophia A1 - Saibro, Guinther A1 - Shih, Chi-Sheng A1 - Chiang, Hsun-An A1 - Zhuang, Juntang A1 - Yang, Junlin A1 - Iglovikov, Vladimir A1 - Dobrenkii, Anton A1 - Reddiboina, Madhu A1 - Reddy, Anubhav A1 - Liu, Xingtong A1 - Gao, Cong A1 - Unberath, Mathias A1 - Kim, Myeonghyeon A1 - Kim, Chanho A1 - Kim, Chaewon A1 - Kim, Hyejin A1 - Lee, Gyeongmin A1 - Ullah, Ihsan A1 - Luna, Miguel A1 - Park, Sang Hyun A1 - Azizian, Mahdi A1 - Stoyanov, Danail A1 - Maier-Hein, Lena A1 - Speidel, Stefanie T1 - 2018 Robotic Scene Segmentation Challenge N2 - In 2015 we began a sub-challenge at the EndoVis workshop at MICCAI in Munich using endoscope images of ex vivo tissue with automatically generated annotations from robot forward kinematics and instrument CAD models. However, the limited background variation and simple motion rendered the dataset uninformative in learning about which techniques would be suitable for segmentation in real surgery. In 2017, at the same workshop in Quebec, we introduced the robotic instrument segmentation dataset, with 10 teams participating in the challenge to perform binary, articulating-parts, and type segmentation of da Vinci instruments. This challenge included realistic instrument motion and more complex porcine tissue as background and was widely addressed with modifications of U-Nets and other popular CNN architectures [1]. In 2018 we added to the complexity by introducing a set of anatomical objects and medical devices to the segmented classes. To avoid over-complicating the challenge, we continued with porcine data, which is dramatically simpler than human tissue due to the lack of fatty tissue occluding many organs. KW - Minimally invasive surgery KW - Robotic KW - Minimal-invasive Chirurgie KW - Robotik Y1 - 2020 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:898-opus4-50049 UR - https://arxiv.org/abs/2001.11190 ER - TY - GEN A1 - Meinikheim, Michael A1 - Mendel, Robert A1 - Probst, Andreas A1 - Scheppach, Markus W.
A1 - Nagl, Sandra A1 - Schnoy, Elisabeth A1 - Römmele, Christoph A1 - Prinz, Friederike A1 - Schlottmann, Jakob A1 - Messmann, Helmut A1 - Palm, Christoph A1 - Ebigbo, Alanna T1 - Einfluss von Künstlicher Intelligenz auf die Performance von niedergelassenen Gastroenterolog:innen bei der Beurteilung von Barrett-Ösophagus T2 - Zeitschrift für Gastroenterologie N2 - Introduction Differentiating between non-dysplastic Barrett's esophagus (NDBE) and Barrett's esophagus-related neoplasia (BERN) during endoscopic inspection requires considerable expertise. Early diagnosis is important for the prognosis of Barrett's carcinoma. In Germany, patients with Barrett's esophagus (BE) are usually surveilled in office-based practices. Aims The aim is to investigate the influence of an artificial intelligence (AI)-based clinical decision support system (CDSS) on the performance of office-based gastroenterologists (NG) in the evaluation of Barrett's esophagus (BE). Methods 96 unaltered high-resolution videos of cases of patients with histologically confirmed NDBE and BERN were collected prospectively. All included cases contained at least two of the following imaging modalities: HD white-light endoscopy, narrow-band imaging, or texture and color enhancement imaging. Six NG from six different practices were included as participants. The video cases were assigned by permuted block randomization to either group A or group B. In group A, participants first evaluated each case without AI and then with AI as CDSS. In group B, the evaluation was performed in the reverse order. The resulting subgroups were then presented in random order during the test. Results In this test, the AI system we developed (Barrett-Ampel) achieved a sensitivity of 92.2%, a specificity of 68.9%, and an accuracy of 81.3%. With the help of AI, the sensitivity of the NG improved significantly from 64.1% to 71.2% (p<0.001), and the accuracy from 66.3% to 70.8% (p=0.006). A significant improvement of these parameters was also seen when participants evaluated the cases without AI first (group A). However, when a case was evaluated with the help of AI first (group B), performance remained almost constant. Conclusion A well-performing AI system for the evaluation of BE was developed. NG improve in their evaluation of BE through the use of AI. KW - Barrett-Ösophagus KW - Künstliche Intelligenz Y1 - 2023 UR - https://www.thieme-connect.de/products/ejournals/abstract/10.1055/s-0043-1771711 U6 - https://doi.org/10.1055/s-0043-1771711 VL - 61 IS - 8 PB - Thieme CY - Stuttgart ER - TY - GEN A1 - Scheppach, Markus W. A1 - Mendel, Robert A1 - Probst, Andreas A1 - Nagl, Sandra A1 - Meinikheim, Michael A1 - Yip, Hon Chi A1 - Lau, Louis Ho Shing A1 - Chiu, Philip Wai Yan A1 - Palm, Christoph A1 - Messmann, Helmut A1 - Ebigbo, Alanna T1 - Effekt eines Künstliche Intelligenz (KI) – Algorithmus auf die Gefäßdetektion bei third space Endoskopien T2 - Zeitschrift für Gastroenterologie N2 - Introduction Third-space endoscopy procedures such as endoscopic submucosal dissection (ESD) and peroral endoscopic myotomy (POEM) are technically demanding and are accompanied by operator-dependent complications such as bleeding and perforation.
The reason for this is the unintentional transection of submucosal blood vessels without pre-emptive coagulation. Aims Given the successful use of AI in the detection of colon polyps, the research question of whether an AI algorithm could support intraprocedural vessel detection during ESD and POEM, and thereby prevent complications such as bleeding, appears worthwhile. Methods Submucosal blood vessels were annotated on 5470 individual frames from 59 third-space endoscopy videos. Together with a further 179,681 unannotated images, a DeepLabv3+ neural network was trained with the ECMT method for semi-supervised learning in order to detect blood vessels in real time. For the evaluation, a video test was created comprising 101 video clips with 200 predefined vessels, drawn from 15 procedures separate from the training dataset. The vessel detection rate, detection time, and detection duration, the latter defined as the percentage of a video's frames, relative to the gold standard, on which a defined vessel was detected, were recorded. Eight experienced endoscopists were tested for vessel detection with this video test, viewing one half of the videos natively and the other half after marking by the AI algorithm. Results The algorithm's mean Dice score for blood vessels was 68%. The mean vessel detection rate in the video test was 94% (96% for ESD; 74% for POEM). The median vessel detection time of the algorithm was 0.32 seconds (0.3 seconds for ESD; 0.62 seconds for POEM). The mean vessel detection duration was 59.1% (60.6% for ESD; 44.8% for POEM) of the gold standard. All endoscopists had a higher vessel detection rate with AI support than without it: 56.4% without AI versus 71.2% with AI (p<0.001). Conclusion AI support was associated with a statistically significantly higher vessel detection rate. The median vessel detection time of well under one second and a vessel detection duration of more than 50% of the gold standard were considered sufficient for clinical use. The AI algorithm should be tested for clinical relevance in prospective application studies. KW - Künstliche Intelligenz Y1 - 2023 U6 - https://doi.org/10.1055/s-0043-1771980 VL - 61 IS - 08 PB - Thieme CY - Stuttgart ER - TY - GEN A1 - Meinikheim, Michael A1 - Mendel, Robert A1 - Probst, Andreas A1 - Scheppach, Markus W. A1 - Messmann, Helmut A1 - Palm, Christoph A1 - Ebigbo, Alanna T1 - Barrett-Ampel T2 - Zeitschrift für Gastroenterologie N2 - Background Adenocarcinomas of the esophagus are still associated with a dismal prognosis (1). Although endoscopists are confronted with Barrett's esophagus as a precancerous condition, differentiating between Barrett's esophagus without dysplasia and associated neoplasia can be difficult, especially for non-experts. Existing biopsy protocols (e.g., the Seattle protocol) are often unreliable (2). An early diagnosis of adenocarcinoma, however, is of fundamental importance for the patient's prognosis. Research approach Against this background, we developed, in cooperation with the research laboratory "Regensburg Medical Image Computing (ReMIC)" at OTH Regensburg, an artificial intelligence (AI)-based clinical decision support system (CDSS).
The CDSS, based on a DeepLabv3+ neural network architecture, uses pattern recognition to differentiate Barrett's esophagus without dysplasia from Barrett's esophagus with dysplasia or neoplasia ("classification"). Averaged output probabilities are compared with a user-defined threshold. For predictions that exceed the threshold, we compute the contour of the region and its area. As soon as a predicted lesion exceeds a certain size in the input, we highlight it and its outline. A color-coded visualization thus enables the delineation of dysplasia or neoplasia from normal Barrett's epithelium ("segmentation"). In a study on images in white light (WL) and narrow-band imaging (NBI), we demonstrated a sensitivity of more than 90% and a specificity of more than 80% (3). In a next step, our AI algorithm differentiated Barrett's metaplasia from associated neoplasia on randomly captured images in real time with an accuracy of 89.9% (4). Subsequently, we extended our system so that the algorithm is now also able to analyze examination videos in WL, NBI, and texture and color enhancement imaging (TXI) in real time (5). We are currently conducting a randomized controlled study on unaltered examination videos in WL, NBI, and TXI. Outlook In order to refer patients with neoplasia arising from Barrett's metaplasia to high-volume centers as early as possible, our AI algorithm is intended above all to support endoscopists without extensive experience in the assessment of Barrett's esophagus in early cancer detection. KW - Barrett-Ösophagus KW - Adenokarzinom KW - Künstliche Intelligenz KW - Speiseröhrenkrebs KW - Diagnose Y1 - 2022 U6 - https://doi.org/10.1055/s-0042-1755109 VL - 60 IS - 08 PB - Georg Thieme Verlag CY - Stuttgart ER - TY - GEN A1 - Meinikheim, Michael A1 - Mendel, Robert A1 - Probst, Andreas A1 - Scheppach, Markus W. A1 - Messmann, Helmut A1 - Palm, Christoph A1 - Ebigbo, Alanna T1 - Optical Flow als Methode zur Qualitätssicherung KI-unterstützter Untersuchungen von Barrett-Ösophagus und Barrett-Ösophagus assoziierten Neoplasien T2 - Zeitschrift für Gastroenterologie N2 - Introduction Excessive motion in the image can reduce the performance of artificial intelligence (AI)-based clinical decision support systems (CDSS). Optical flow (OF) is a method for localizing and quantifying motion between consecutive frames. Aim The aim is to improve human-computer interaction (HCI) and to offer endoscopists who use our AI system "Barrett-Ampel" for support in the assessment of Barrett's esophagus (BE) real-time feedback on the current data quality. Methods For this purpose, unaltered videos in white light (WL), narrow-band imaging (NBI), and texture and color enhancement imaging (TXI) from eight endoscopic examinations of histologically confirmed BE and Barrett's esophagus-related neoplasia (BERN) were analyzed by our AI algorithm. The OF measures used to assess image quality were the mean magnitude and the entropy of the angle histogram.
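The two optical-flow quality measures named above (mean magnitude and entropy of the angle histogram) could be computed along the following lines. This is a sketch, not the study's code: the Farneback parameters, the histogram bin count, and the logarithm base of the entropy are assumptions, since the abstract fixes only the thresholds.

```python
import cv2
import numpy as np

MAG_THRESHOLD = 3.0      # mean-magnitude threshold stated in the abstract
ENTROPY_THRESHOLD = 9.0  # angle-histogram entropy threshold stated in the abstract

def flow_quality(prev_gray: np.ndarray, curr_gray: np.ndarray):
    """Mean flow magnitude and entropy of the flow-angle histogram."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    hist, _ = np.histogram(ang, bins=1024, range=(0, 2 * np.pi))
    p = hist[hist > 0] / hist.sum()
    entropy = float(-(p * np.log2(p)).sum())  # base-2 log is an assumption
    return float(mag.mean()), entropy

def is_low_quality(prev_gray: np.ndarray, curr_gray: np.ndarray) -> bool:
    """Flag a frame when the predefined thresholds are exceeded.

    The abstract leaves open whether one or both thresholds must be
    exceeded; requiring both is assumed here.
    """
    mag_mean, entropy = flow_quality(prev_gray, curr_gray)
    return mag_mean > MAG_THRESHOLD and entropy > ENTROPY_THRESHOLD
```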
Frames were automatically extracted when the predefined thresholds of 3.0 for the mean magnitude and 9.0 for the entropy of the angle histogram were exceeded. Experts first viewed the videos without AI support and rated whether confounding factors negatively affected the confidence with which a diagnosis could be made in the case at hand. They then reviewed the extracted frames. Results Uniform motion in one direction, for instance while advancing the endoscope, was reflected in an increased magnitude with essentially unchanged entropy. Chaotic motion, for example during irrigation, was associated with increased entropy. Overall, an unsteady endoscopic view, fluid, and excessive esophageal motility were associated with increased OF and matched the experts' opinion of the quality of the videos. The OF and the experts' subjective perception of the usability of the image sequences were directly proportional. When the predefined OF thresholds were exceeded, the associated image quality was, in 94% of cases, insufficient for a definitive interpretation even for experts. Conclusion OF has the potential to offer endoscopists real-time feedback on the quality of the data input and thus not only to improve HCI but also to enable the optimal performance of AI algorithms. KW - Optical Flow Y1 - 2022 U6 - https://doi.org/10.1055/s-0042-1754997 VL - 60 IS - 08 PB - Georg Thieme Verlag CY - Stuttgart ER - TY - GEN A1 - Meinikheim, Michael A1 - Mendel, Robert A1 - Probst, Andreas A1 - Scheppach, Markus W. A1 - Schnoy, Elisabeth A1 - Nagl, Sandra A1 - Römmele, Christoph A1 - Prinz, Friederike A1 - Schlottmann, Jakob A1 - Golger, Daniela A1 - Palm, Christoph A1 - Messmann, Helmut A1 - Ebigbo, Alanna T1 - AI-assisted detection and characterization of early Barrett's neoplasia: Results of an Interim analysis T2 - Endoscopy N2 - Aims Evaluation of the add-on effect an artificial intelligence (AI)-based clinical decision support system has on the performance of endoscopists with different degrees of expertise in the field of Barrett's esophagus (BE) and Barrett's esophagus-related neoplasia (BERN). Methods The support system is based on a multi-task deep learning model trained to solve a segmentation and several classification tasks. The training approach represents an extension of the ECMT semi-supervised learning algorithm. The complete system evaluates a decision tree between estimated motion, classification, segmentation, and temporal constraints, to decide when and how the prediction is highlighted to the observer. In our current study, ninety-six video cases of patients with BE and BERN were prospectively collected and assessed by Barrett's specialists and non-specialists. All video cases were evaluated twice – with and without AI assistance. The order of appearance, either with or without AI support, was assigned randomly. Participants were asked to detect and characterize regions of dysplasia or early neoplasia within the video sequences. Results Standalone sensitivity, specificity, and accuracy of the AI system were 92.16%, 68.89%, and 81.25%, respectively. Mean sensitivity, specificity, and accuracy of expert endoscopists without AI support were 83.33%, 58.20%, and 71.48%, respectively.
Gastroenterologists without Barrett's expertise but with AI support had comparable performance, with a mean sensitivity, specificity, and accuracy of 76.63%, 65.35%, and 71.36%, respectively. Conclusions Non-Barrett's experts with AI support performed similarly to experts in this video-based study. Y1 - 2023 U6 - https://doi.org/10.1055/s-0043-1765437 VL - 55 IS - S02 PB - Thieme ER - TY - GEN A1 - Scheppach, Markus W. A1 - Mendel, Robert A1 - Probst, Andreas A1 - Rauber, David A1 - Rueckert, Tobias A1 - Meinikheim, Michael A1 - Palm, Christoph A1 - Messmann, Helmut A1 - Ebigbo, Alanna T1 - Real-time detection and delineation of tissue during third-space endoscopy using artificial intelligence (AI) T2 - Endoscopy N2 - Aims AI has proven great potential in assisting endoscopists in diagnostics; however, its role in therapeutic endoscopy remains unclear. Endoscopic submucosal dissection (ESD) is a technically demanding intervention with a slow learning curve and relevant risks like bleeding and perforation. Therefore, we aimed to develop an algorithm for the real-time detection and delineation of relevant structures during third-space endoscopy. Methods 5470 still images from 59 full-length videos (47 ESD, 12 POEM) were annotated. 179681 additional unlabeled images were added to the training dataset. Consequently, a DeepLabv3+ neural network architecture was trained with the ECMT semi-supervised algorithm (under review elsewhere). Evaluation of vessel detection was performed on a dataset of 101 standardized video clips from 15 separate third-space endoscopy videos with 200 predefined blood vessels. Results Internal validation yielded an overall mean Dice score of 85% (68% for blood vessels, 86% for submucosal layer, 88% for muscle layer). On the video test data, the overall vessel detection rate (VDR) was 94% (96% for ESD, 74% for POEM). The median overall vessel detection time (VDT) was 0.32 sec (0.3 sec for ESD, 0.62 sec for POEM). Conclusions Evaluation of the developed algorithm on a video test dataset showed high VDR and quick VDT, especially for ESD. Further research will focus on a possible clinical benefit of the AI application for VDR and VDT during third-space endoscopy. KW - Speiseröhrenkrankheit KW - Künstliche Intelligenz KW - Artificial Intelligence Y1 - 2023 U6 - https://doi.org/10.1055/s-0043-1765128 VL - 55 IS - S02 SP - S53 EP - S54 PB - Thieme ER - TY - JOUR A1 - Ebigbo, Alanna A1 - Mendel, Robert A1 - Probst, Andreas A1 - Manzeneder, Johannes A1 - Souza Jr., Luis Antonio de A1 - Papa, João Paulo A1 - Palm, Christoph A1 - Messmann, Helmut T1 - Computer-aided diagnosis using deep learning in the evaluation of early oesophageal adenocarcinoma JF - Gut N2 - Computer-aided diagnosis using deep learning (CAD-DL) may be an instrument to improve endoscopic assessment of Barrett's oesophagus (BE) and early oesophageal adenocarcinoma (EAC). Based on still images from two databases, the diagnosis of EAC by CAD-DL reached sensitivities/specificities of 97%/88% (Augsburg data) and 92%/100% (Medical Image Computing and Computer-Assisted Intervention [MICCAI] data) for white light (WL) images and 94%/80% for narrow band images (NBI) (Augsburg data), respectively. Tumour margins delineated by experts in the images were detected satisfactorily with a Dice coefficient (D) of 0.72. This could be a first step towards CAD-DL for BE assessment. If developed further, it could become a useful adjunctive tool for patient management.
KW - Speiseröhrenkrebs KW - Diagnose KW - Computerunterstütztes Verfahren KW - Maschinelles Lernen Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:898-opus4-68 N1 - Corresponding authors: Alanna Ebigbo and Christoph Palm VL - 68 IS - 7 SP - 1143 EP - 1145 PB - British Society of Gastroenterology ER - TY - GEN A1 - Scheppach, Markus W. A1 - Mendel, Robert A1 - Rauber, David A1 - Probst, Andreas A1 - Nagl, Sandra A1 - Römmele, Christoph A1 - Meinikheim, Michael A1 - Palm, Christoph A1 - Messmann, Helmut A1 - Ebigbo, Alanna T1 - Artificial Intelligence (AI) improves endoscopists’ vessel detection during endoscopic submucosal dissection (ESD) T2 - Endoscopy N2 - Aims While AI has been successfully implemented in detecting and characterizing colonic polyps, its role in therapeutic endoscopy remains to be elucidated. Especially third-space endoscopy procedures like ESD and peroral endoscopic myotomy (POEM) pose a technical challenge and the risk of operator-dependent complications like intraprocedural bleeding and perforation. Therefore, we aimed at developing an AI algorithm for intraprocedural real-time vessel detection during ESD and POEM. Methods A training dataset consisting of 5470 annotated still images from 59 full-length videos (47 ESD, 12 POEM) and 179681 unlabeled images was used to train a DeepLabV3+ neural network with the ECMT semi-supervised learning method. Evaluation for vessel detection rate (VDR) and time (VDT) of 19 endoscopists with and without AI support was performed using a testing dataset of 101 standardized video clips with 200 predefined blood vessels. Endoscopists were stratified into trainees and experts in third-space endoscopy. Results The AI algorithm had a mean VDR of 93.5% and a median VDT of 0.32 seconds. AI support was associated with a statistically significant increase in VDR from 54.9% to 73.0% and from 59.0% to 74.1% for trainees and experts, respectively. VDT significantly decreased from 7.21 sec to 5.09 sec for trainees and from 6.10 sec to 5.38 sec for experts in the AI-support group. False-positive (FP) readings occurred in 4.5% of frames. FP structures were detected for a significantly shorter time than true positives (0.71 sec vs. 5.99 sec). Conclusions AI improved VDR and VDT of trainees and experts in third-space endoscopy and may reduce performance variability during training. Further research is needed to evaluate the clinical impact of this new technology. Y1 - 2024 U6 - https://doi.org/10.1055/s-0044-1782891 VL - 56 IS - S 02 SP - S93 PB - Thieme CY - Stuttgart ER - TY - CHAP A1 - Souza Jr., Luis Antonio de A1 - Passos, Leandro A. A1 - Mendel, Robert A1 - Ebigbo, Alanna A1 - Probst, Andreas A1 - Messmann, Helmut A1 - Palm, Christoph A1 - Papa, João Paulo T1 - Fine-tuning Generative Adversarial Networks using Metaheuristics BT - A Case Study on Barrett's Esophagus Identification T2 - Bildverarbeitung für die Medizin 2021. Proceedings, German Workshop on Medical Image Computing, Regensburg, March 7-9, 2021 N2 - Barrett's esophagus denotes a disorder in the digestive system that affects the esophagus' mucosal cells, causing reflux, and showing potential convergence to esophageal adenocarcinoma if not treated in initial stages. Thus, fast and reliable computer-aided diagnosis becomes considerably welcome. Nevertheless, such approaches usually suffer from imbalanced datasets, which can be addressed through Generative Adversarial Networks (GANs).
Such techniques generate realistic images based on observed samples, albeit at the cost of a proper selection of their hyperparameters. Many works have employed a class of nature-inspired algorithms called metaheuristics to tackle the problem considering distinct deep learning approaches. Therefore, this paper's main contribution is to introduce metaheuristic techniques to fine-tune GANs in the context of Barrett's esophagus identification, as well as to investigate the feasibility of generating high-quality synthetic images for early-cancer assisted identification. KW - Endoskopie KW - Computerunterstützte Medizin KW - Deep Learning Y1 - 2021 SN - 978-3-658-33197-9 U6 - https://doi.org/10.1007/978-3-658-33198-6_50 SP - 205 EP - 210 PB - Springer Vieweg CY - Wiesbaden ER - TY - JOUR A1 - Souza Jr., Luis Antonio de A1 - Mendel, Robert A1 - Strasser, Sophia A1 - Ebigbo, Alanna A1 - Probst, Andreas A1 - Messmann, Helmut A1 - Papa, João Paulo A1 - Palm, Christoph T1 - Convolutional Neural Networks for the evaluation of cancer in Barrett’s esophagus: Explainable AI to lighten up the black-box JF - Computers in Biology and Medicine N2 - Even though artificial intelligence and machine learning have demonstrated remarkable performances in medical image computing, their level of accountability and transparency must be provided in such evaluations. The reliability related to machine learning predictions must be explained and interpreted, especially if diagnosis support is addressed. For this task, the black-box nature of deep learning techniques must be lightened up to transfer its promising results into clinical practice. Hence, we aim to investigate the use of explainable artificial intelligence techniques to quantitatively highlight discriminative regions during the classification of early-cancerous tissues in Barrett's esophagus-diagnosed patients. Four Convolutional Neural Network models (AlexNet, SqueezeNet, ResNet50, and VGG16) were analyzed using five different interpretation techniques (saliency, guided backpropagation, integrated gradients, input × gradients, and DeepLIFT) to compare their agreement with experts' previous annotations of cancerous tissue. We could show that saliency attributes match best with the manual experts' delineations. Moreover, there is moderate to high correlation between the sensitivity of a model and the human-and-computer agreement. The results also highlighted that the higher the model's sensitivity, the stronger the correlation of human and computational segmentation agreement. We observed a relevant relation between computational learning and experts' insights, demonstrating how human knowledge may influence the correct computational learning. KW - Deep Learning KW - Künstliche Intelligenz KW - Computerunterstützte Medizin KW - Barrett's esophagus KW - Adenocarcinoma KW - Machine learning KW - Explainable artificial intelligence KW - Computer-aided diagnosis Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:898-opus4-20126 SN - 0010-4825 VL - 135 SP - 1 EP - 14 PB - Elsevier ER - TY - JOUR A1 - Ebigbo, Alanna A1 - Mendel, Robert A1 - Scheppach, Markus W. A1 - Probst, Andreas A1 - Shahidi, Neal A1 - Prinz, Friederike A1 - Fleischmann, Carola A1 - Römmele, Christoph A1 - Gölder, Stefan Karl A1 - Braun, Georg A1 - Rauber, David A1 - Rückert, Tobias A1 - Souza Jr., Luis Antonio de A1 - Papa, João Paulo A1 - Byrne, Michael F.
A1 - Palm, Christoph A1 - Messmann, Helmut T1 - Vessel and tissue recognition during third-space endoscopy using a deep learning algorithm JF - Gut N2 - In this study, we aimed to develop an artificial intelligence clinical decision support solution to mitigate operator-dependent limitations during complex endoscopic procedures such as endoscopic submucosal dissection and peroral endoscopic myotomy, for example, bleeding and perforation. A DeepLabv3-based model was trained to delineate vessels, tissue structures and instruments on endoscopic still images from such procedures. The mean cross-validated Intersection over Union and Dice Score were 63% and 76%, respectively. Applied to standardised video clips from third-space endoscopic procedures, the algorithm showed a mean vessel detection rate of 85% with a false-positive rate of 0.75/min. These performance statistics suggest a potential clinical benefit for procedure safety, time and also training. KW - Artificial Intelligence KW - Endoscopy KW - Medical Image Computing Y1 - 2022 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:898-opus4-54293 VL - 71 IS - 12 SP - 2388 EP - 2390 PB - BMJ CY - London ER - TY - GEN A1 - Ebigbo, Alanna A1 - Mendel, Robert A1 - Probst, Andreas A1 - Manzeneder, Johannes A1 - Souza Jr., Luis Antonio de A1 - Papa, João Paulo A1 - Palm, Christoph A1 - Messmann, Helmut T1 - Artificial Intelligence in Early Barrett's Cancer: The Segmentation Task T2 - Endoscopy N2 - Aims: The delineation of outer margins of early Barrett's cancer can be challenging even for experienced endoscopists. Artificial intelligence (AI) could assist endoscopists faced with this task. To date, there is very limited experience in this domain. In this study, we demonstrate the measure of overlap (Dice coefficient = D) between highly experienced Barrett endoscopists and an AI system in the delineation of cancer margins (segmentation task). Methods: An AI system with a deep convolutional neural network (CNN) was trained and tested on high-definition endoscopic images of early Barrett's cancer (n = 33) and normal Barrett's mucosa (n = 41). The reference standard for the segmentation task was the manual delineation of tumor margins by three highly experienced Barrett endoscopists. Training of the AI system included patch generation, patch augmentation and adjustment of the CNN weights. The segmentation then resulted from patch classification and thresholding of the class probabilities. Segmentation results were evaluated using the Dice coefficient (D). Results: The Dice coefficient (D), which can range between 0 (no overlap) and 1 (complete overlap), was computed only for images correctly classified by the AI system as cancerous. At a threshold of t = 0.5, a mean value of D = 0.72 was computed. Conclusions: AI with CNN performed reasonably well in the segmentation of the tumor region in Barrett's cancer, at least when compared with expert Barrett's endoscopists. AI holds a lot of promise as a tool for better visualization of tumor margins but may need further improvement and enhancement, especially in real-time settings.
KW - Speiseröhrenkrankheit KW - Maschinelles Lernen KW - Barrett's esophagus KW - Deep Learning KW - Segmentation Y1 - 2019 U6 - https://doi.org/10.1055/s-0039-1681187 VL - 51 IS - 04 SP - 6 PB - Georg Thieme Verlag CY - Stuttgart ER - TY - JOUR A1 - Ebigbo, Alanna A1 - Palm, Christoph A1 - Probst, Andreas A1 - Mendel, Robert A1 - Manzeneder, Johannes A1 - Prinz, Friederike A1 - Souza Jr., Luis Antonio de A1 - Papa, João Paulo A1 - Siersema, Peter A1 - Messmann, Helmut T1 - A technical review of artificial intelligence as applied to gastrointestinal endoscopy: clarifying the terminology JF - Endoscopy International Open N2 - The growing number of publications on the application of artificial intelligence (AI) in medicine underlines the enormous importance and potential of this emerging field of research. In gastrointestinal endoscopy, AI has been applied to all segments of the gastrointestinal tract, most importantly in the detection and characterization of colorectal polyps. However, AI research has also been published on the stomach and esophagus for both neoplastic and non-neoplastic disorders. The various technical as well as medical aspects of AI, however, remain confusing, especially for non-expert physicians. This physician-engineer co-authored review explains the basic technical aspects of AI and provides a comprehensive overview of recent publications on AI in gastrointestinal endoscopy. Finally, a basic insight is offered into understanding publications on AI in gastrointestinal endoscopy. KW - Diagnose KW - Maschinelles Lernen KW - Gastroenterologie KW - Künstliche Intelligenz KW - Barrett's esophagus KW - Deep learning Y1 - 2019 U6 - https://doi.org/10.1055/a-1010-5705 VL - 07 IS - 12 SP - 1616 EP - 1623 PB - Georg Thieme Verlag CY - Stuttgart ER - TY - JOUR A1 - Ebigbo, Alanna A1 - Mendel, Robert A1 - Probst, Andreas A1 - Manzeneder, Johannes A1 - Prinz, Friederike A1 - Souza Jr., Luis Antonio de A1 - Papa, João Paulo A1 - Palm, Christoph A1 - Messmann, Helmut T1 - Real-time use of artificial intelligence in the evaluation of cancer in Barrett’s oesophagus JF - Gut N2 - Based on previous work by our group with manual annotation of visible Barrett oesophagus (BE) cancer images, a real-time deep learning artificial intelligence (AI) system was developed. While an expert endoscopist conducts the endoscopic assessment of BE, our AI system captures random images from the real-time camera livestream and provides a global prediction (classification), as well as a dense prediction (segmentation) differentiating accurately between normal BE and early oesophageal adenocarcinoma (EAC). The AI system showed an accuracy of 89.9% on 14 cases with neoplastic BE. KW - Speiseröhrenkrankheit KW - Diagnose KW - Maschinelles Lernen KW - Barrett's esophagus KW - Deep learning KW - real-time Y1 - 2020 U6 - https://doi.org/10.1136/gutjnl-2019-319460 VL - 69 IS - 4 SP - 615 EP - 616 PB - BMJ CY - London ER - TY - JOUR A1 - Mendel, Robert A1 - Rauber, David A1 - Souza Jr., Luis Antonio de A1 - Papa, João Paulo A1 - Palm, Christoph T1 - Error-Correcting Mean-Teacher: Corrections instead of consistency-targets applied to semi-supervised medical image segmentation JF - Computers in Biology and Medicine N2 - Semantic segmentation is an essential task in medical imaging research. Many powerful deep-learning-based approaches can be employed for this problem, but they are dependent on the availability of an expansive labeled dataset.
In this work, we augment such supervised segmentation models to be suitable for learning from unlabeled data. Our semi-supervised approach, termed Error-Correcting Mean-Teacher, uses an exponential moving average model like the original Mean Teacher but introduces our new paradigm of error correction. The original segmentation network is augmented to handle this secondary correction task. Both tasks build upon the core feature extraction layers of the model. For the correction task, features detected in the input image are fused with features detected in the predicted segmentation and further processed with task-specific decoder layers. The combination of image and segmentation features allows the model to correct present mistakes in the given input pair. The correction task is trained jointly on the labeled data. On unlabeled data, the exponential moving average of the original network corrects the student's prediction. The combination of the student's prediction with the teacher's correction forms the basis for the semi-supervised update. We evaluate our method with the 2017 and 2018 Robotic Scene Segmentation data, the ISIC 2017 and the BraTS 2020 Challenges, a proprietary Endoscopic Submucosal Dissection dataset, Cityscapes, and Pascal VOC 2012. Additionally, we analyze the impact of the individual components and examine the behavior when the amount of labeled data varies, with experiments performed on two distinct segmentation architectures. Our method shows improvements in terms of the mean Intersection over Union over the supervised baseline and competing methods. Code is available at https://github.com/CloneRob/ECMT. KW - Semi-supervised Segmentation KW - Mean-Teacher KW - Pseudo-labels KW - Medical Imaging Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:898-opus4-57790 SN - 0010-4825 VL - 154 IS - March PB - Elsevier ER - TY - JOUR A1 - Ebigbo, Alanna A1 - Mendel, Robert A1 - Rückert, Tobias A1 - Schuster, Laurin A1 - Probst, Andreas A1 - Manzeneder, Johannes A1 - Prinz, Friederike A1 - Mende, Matthias A1 - Steinbrück, Ingo A1 - Faiss, Siegbert A1 - Rauber, David A1 - Souza Jr., Luis Antonio de A1 - Papa, João Paulo A1 - Deprez, Pierre A1 - Oyama, Tsuneo A1 - Takahashi, Akiko A1 - Seewald, Stefan A1 - Sharma, Prateek A1 - Byrne, Michael F. A1 - Palm, Christoph A1 - Messmann, Helmut T1 - Endoscopic prediction of submucosal invasion in Barrett’s cancer with the use of Artificial Intelligence: A pilot Study JF - Endoscopy N2 - Background and aims: The accurate differentiation between T1a and T1b Barrett's cancer has both therapeutic and prognostic implications but is challenging even for experienced physicians. We trained an Artificial Intelligence (AI) system on the basis of deep artificial neural networks (deep learning) to differentiate between T1a and T1b Barrett's cancer on white-light images. Methods: Endoscopic images from three tertiary care centres in Germany were collected retrospectively. A deep learning system was trained and tested using the principles of cross-validation. A total of 230 white-light endoscopic images (108 T1a and 122 T1b) were evaluated with the AI system. For comparison, the images were also classified by experts specialized in endoscopic diagnosis and treatment of Barrett's cancer. Results: The sensitivity, specificity, F1 and accuracy of the AI system in the differentiation between T1a and T1b cancer lesions were 0.77, 0.64, 0.73 and 0.71, respectively.
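The Error-Correcting Mean-Teacher record above hinges on an exponential moving average (EMA) of the segmentation network's weights. A minimal PyTorch-style sketch of such a teacher update follows; the momentum value is an assumption, and the function is illustrative rather than the released ECMT code:

```python
import torch

@torch.no_grad()
def update_teacher(teacher: torch.nn.Module, student: torch.nn.Module,
                   momentum: float = 0.999) -> None:
    """EMA update: teacher <- momentum * teacher + (1 - momentum) * student."""
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(momentum).add_(s_param, alpha=1.0 - momentum)
```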
There was no statistically significant difference between the performance of the AI system and that of human experts, with sensitivity, specificity, F1 and accuracy of 0.63, 0.78, 0.67 and 0.70, respectively. Conclusion: This pilot study demonstrates the first multicenter application of an AI-based system in the prediction of submucosal invasion in endoscopic images of Barrett's cancer. AI scored on par with international experts in the field, but more work is necessary to improve the system and apply it to video sequences and in a real-life setting. Nevertheless, the correct prediction of submucosal invasion in Barrett's cancer remains challenging for both experts and AI. KW - Maschinelles Lernen KW - Neuronales Netz KW - Speiseröhrenkrebs KW - Diagnose KW - Artificial Intelligence KW - Machine learning KW - Adenocarcinoma KW - Barrett’s cancer KW - submucosal invasion Y1 - 2021 U6 - https://doi.org/10.1055/a-1311-8570 VL - 53 IS - 09 SP - 878 EP - 883 PB - Thieme CY - Stuttgart ER - TY - JOUR A1 - Souza Jr., Luis Antonio de A1 - Pacheco, André G.C. A1 - Passos, Leandro A. A1 - Santana, Marcos C. S. A1 - Mendel, Robert A1 - Ebigbo, Alanna A1 - Probst, Andreas A1 - Messmann, Helmut A1 - Palm, Christoph A1 - Papa, João Paulo T1 - DeepCraftFuse: visual and deeply-learnable features work better together for esophageal cancer detection in patients with Barrett’s esophagus JF - Neural Computing and Applications N2 - Limitations in computer-assisted diagnosis include lack of labeled data and inability to model the relation between what experts see and what computers learn. Even though artificial intelligence and machine learning have demonstrated remarkable performances in medical image computing, their accountability and transparency level must be improved to transfer this success into clinical practice. The reliability of machine learning decisions must be explained and interpreted, especially for supporting the medical diagnosis. While deep learning techniques are broad so that unseen information might help learn patterns of interest, human insights to describe objects of interest help in decision-making. This paper proposes a novel approach, DeepCraftFuse, to address the challenge of combining information provided by deep networks with visual-based features to significantly enhance the correct identification of cancerous tissues in patients affected with Barrett's esophagus (BE). We demonstrate that DeepCraftFuse outperforms state-of-the-art techniques on private and public datasets, reaching results of around 95% when distinguishing patients affected by BE that are either positive or negative for esophageal cancer. KW - Deep Learning KW - Speiseröhrenkrebs KW - Adenocarcinom KW - Endobrachyösophagus KW - Diagnose KW - Maschinelles Lernen KW - Machine learning KW - Adenocarcinoma KW - Object detector KW - Barrett’s esophagus KW - Deep Learning Y1 - 2024 U6 - https://doi.org/10.1007/s00521-024-09615-z VL - 36 SP - 10445 EP - 10459 PB - Springer CY - London ER - TY - JOUR A1 - Souza Jr., Luis Antonio de A1 - Palm, Christoph A1 - Mendel, Robert A1 - Hook, Christian A1 - Ebigbo, Alanna A1 - Probst, Andreas A1 - Messmann, Helmut A1 - Weber, Silke A. T. A1 - Papa, João Paulo T1 - A survey on Barrett's esophagus analysis using machine learning JF - Computers in Biology and Medicine N2 - This work presents a systematic review concerning recent studies and technologies of machine learning for Barrett's esophagus (BE) diagnosis and treatment.
The use of artificial intelligence is a relatively new and promising way to evaluate such a disease. We compile works published in well-established databases, such as Science Direct, IEEEXplore, PubMed, Plos One, Multidisciplinary Digital Publishing Institute (MDPI), Association for Computing Machinery (ACM), Springer, and Hindawi Publishing Corporation. Each selected work has been analyzed to present its objective, methodology, and results. The progression of BE to dysplasia or adenocarcinoma shows a complex pattern that is difficult to detect during endoscopic surveillance. Therefore, it is valuable to assist its diagnosis and automatic identification using computer analysis. The evaluation of BE dysplasia can be performed through manual or automated segmentation using machine learning techniques. Finally, in this survey, we reviewed recent studies focused on the automatic detection of the neoplastic region for classification purposes using machine learning methods. KW - Speiseröhrenkrankheit KW - Diagnose KW - Mustererkennung KW - Maschinelles Lernen KW - Literaturbericht KW - Barrett's esophagus KW - Machine learning KW - Adenocarcinoma KW - Image processing KW - Pattern recognition KW - Computer-aided diagnosis Y1 - 2018 U6 - https://doi.org/10.1016/j.compbiomed.2018.03.014 VL - 96 SP - 203 EP - 213 PB - Elsevier ER - TY - JOUR A1 - Passos, Leandro A. A1 - Souza Jr., Luis Antonio de A1 - Mendel, Robert A1 - Ebigbo, Alanna A1 - Probst, Andreas A1 - Messmann, Helmut A1 - Palm, Christoph A1 - Papa, João Paulo T1 - Barrett's esophagus analysis using infinity Restricted Boltzmann Machines JF - Journal of Visual Communication and Image Representation N2 - The number of patients with Barrett's esophagus (BE) has increased in the last decades. Considering the severity of the disease and its evolution to adenocarcinoma, an early diagnosis of BE may provide a high probability of cancer remission. However, limitations regarding traditional methods of detection and management of BE demand alternative solutions. As such, computer-aided tools have recently been used to assist in this problem, but the challenge still persists. To manage the problem, we introduce the infinity Restricted Boltzmann Machines (iRBMs) to the task of automatic identification of Barrett's esophagus from endoscopic images of the lower esophagus. Moreover, since the iRBM requires a proper selection of its meta-parameters, we also present a discriminative iRBM fine-tuning using six meta-heuristic optimization techniques. We showed that iRBMs are suitable for the context since they provide competitive results, and the meta-heuristic techniques showed to be appropriate for such a task. KW - Speiseröhrenkrankheit KW - Diagnose KW - Boltzmann-Maschine KW - Barrett’s esophagus KW - Infinity Restricted Boltzmann Machines KW - Meta-heuristics KW - Deep learning KW - Metaheuristik KW - Maschinelles Lernen Y1 - 2019 U6 - https://doi.org/10.1016/j.jvcir.2019.01.043 VL - 59 SP - 475 EP - 485 PB - Elsevier ER - TY - CHAP A1 - Souza Jr., Luis Antonio de A1 - Hook, Christian A1 - Papa, João Paulo A1 - Palm, Christoph T1 - Barrett's Esophagus Analysis Using SURF Features T2 - Bildverarbeitung für die Medizin 2017; Algorithmen - Systeme - Anwendungen. Proceedings des Workshops vom 12. bis 14. März 2017 in Heidelberg N2 - The development of adenocarcinoma in Barrett's esophagus is difficult to detect by endoscopic surveillance of patients with signs of dysplasia.
Computer-assisted diagnosis of endoscopic images (CAD) could therefore be most helpful in the demarcation and classification of neoplastic lesions. In this study we tested the feasibility of a CAD method based on Speeded up Robust Feature Detection (SURF). A given database containing 100 images from 39 patients served as benchmark for feature-based classification models. Half of the images had previously been diagnosed by five clinical experts as "cancerous", the other half as "non-cancerous". Cancerous image regions had been visibly delineated (masked) by the clinicians. SURF features acquired from full images as well as from masked areas were utilized for the supervised training and testing of an SVM classifier. The predictive accuracy of the developed CAD system is illustrated by sensitivity and specificity values. Based on full-image matching, results of 0.78 (sensitivity) and 0.82 (specificity) were achieved, while the masked-region approach generated results of 0.90 and 0.95, respectively. KW - Speiseröhrenkrankheit KW - Diagnose KW - Maschinelles Sehen KW - Automatische Klassifikation Y1 - 2017 U6 - https://doi.org/10.1007/978-3-662-54345-0_34 SP - 141 EP - 146 PB - Springer CY - Berlin ER - TY - CHAP A1 - Mendel, Robert A1 - Souza Jr., Luis Antonio de A1 - Rauber, David A1 - Papa, João Paulo A1 - Palm, Christoph T1 - Semi-supervised Segmentation Based on Error-Correcting Supervision T2 - Computer vision - ECCV 2020: 16th European conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part XXIX N2 - Pixel-level classification is an essential part of computer vision. For learning from labeled data, many powerful deep learning models have been developed recently. In this work, we augment such supervised segmentation models by allowing them to learn from unlabeled data. Our semi-supervised approach, termed Error-Correcting Supervision, leverages a collaborative strategy. Apart from the supervised training on the labeled data, the segmentation network is judged by an additional network. The secondary correction network learns on the labeled data to optimally spot correct predictions, as well as to amend incorrect ones. As an auxiliary regularization term, the corrector directly influences the supervised training of the segmentation network. On unlabeled data, the output of the correction network is essential to create a proxy for the unknown truth. The corrector's output is combined with the segmentation network's prediction to form the new target. We propose a loss function that incorporates both the pseudo-labels as well as the predictive certainty of the correction network. Our approach can easily be added to supervised segmentation models. We show consistent improvements over a supervised baseline on experiments on both the Pascal VOC 2012 and the Cityscapes datasets with varying amounts of labeled data. KW - Semi-Supervised Learning KW - Machine Learning Y1 - 2020 SN - 978-3-030-58525-9 U6 - https://doi.org/10.1007/978-3-030-58526-6_9 SP - 141 EP - 157 PB - Springer CY - Cham ER - TY - CHAP A1 - Souza Jr., Luis Antonio de A1 - Ebigbo, Alanna A1 - Probst, Andreas A1 - Messmann, Helmut A1 - Papa, João Paulo A1 - Mendel, Robert A1 - Palm, Christoph T1 - Barrett's Esophagus Identification Using Color Co-occurrence Matrices T2 - 31st SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI), Parana, 2018 N2 - In this work, we propose the use of single-channel Color Co-occurrence Matrices for texture description of Barrett's Esophagus (BE) and adenocarcinoma images.
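A single-channel color co-occurrence descriptor of the kind named above can be illustrated with a gray-level co-occurrence matrix computed on one color channel. The following sketch uses scikit-image as a stand-in; the channel choice, offsets, and derived properties are assumptions rather than the authors' exact configuration:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def blue_channel_cooccurrence(image_rgb: np.ndarray) -> np.ndarray:
    """Texture features from a co-occurrence matrix of the blue channel.

    image_rgb: uint8 array of shape (H, W, 3), assumed RGB channel order.
    """
    blue = image_rgb[..., 2]
    glcm = graycomatrix(blue, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "correlation", "energy", "homogeneity")
    return np.concatenate([graycoprops(glcm, p).ravel() for p in props])
```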
Further classification using supervised learning techniques, such as the Optimum-Path Forest (OPF), Support Vector Machines with Radial Basis Function (SVM-RBF), and a Bayesian classifier, supports the context of automatic BE and adenocarcinoma diagnosis. We validated three approaches of classification based on patches, patients, and images in two datasets (MICCAI 2015 and Augsburg) using the color-and-texture descriptors and the machine learning techniques. Concerning the MICCAI 2015 dataset, the best results were obtained using the blue channel for the descriptors and the supervised OPF for classification purposes in the patch-based approach, with sensitivity of nearly 73% for positive adenocarcinoma identification and specificity close to 77% for BE (non-cancerous) patch classification. Regarding the Augsburg dataset, the most accurate results were also obtained using both the OPF classifier and the blue-channel descriptor for the feature extraction, with sensitivity close to 67% and specificity around 76%. Our work highlights new advances in the related research area and provides a promising technique that combines color and texture information, allied to three different approaches of dataset pre-processing aiming to configure robust scenarios for the classification step. KW - Barrett’s Esophagus KW - Co-occurrence Matrices KW - Machine learning KW - Texture Analysis Y1 - 2018 U6 - https://doi.org/10.1109/SIBGRAPI.2018.00028 SP - 166 EP - 173 ER - TY - CHAP A1 - Souza Jr., Luis Antonio de A1 - Afonso, Luis Claudio Sugi A1 - Palm, Christoph A1 - Papa, João Paulo T1 - Barrett's Esophagus Identification Using Optimum-Path Forest T2 - Proceedings of the 30th Conference on Graphics, Patterns and Images Tutorials (SIBGRAPI-T 2017), Niterói, Rio de Janeiro, Brazil, 2017, 17-20 October N2 - Computer-assisted analysis of endoscopic images can be helpful for the automatic diagnosis and classification of neoplastic lesions. Barrett's esophagus (BE) is a common reflux-related disorder that is not straightforward to detect by endoscopic surveillance and is thus susceptible to erroneous diagnosis, which can lead to cancer when not treated properly. In this work, we introduce the Optimum-Path Forest (OPF) classifier to the task of automatic identification of Barrett's esophagus, with promising results, outperforming the well-known Support Vector Machines (SVM) in the aforementioned context. We consider describing endoscopic images by means of feature extractors based on key point information, such as the Speeded up Robust Features (SURF) and the Scale-Invariant Feature Transform (SIFT), for further designing a bag-of-visual-words that is used to feed both OPF and SVM classifiers. The best results were obtained by means of the OPF classifier for both feature extractors, with values lying at 0.732 (SURF) - 0.735 (SIFT) for sensitivity, 0.782 (SURF) - 0.806 (SIFT) for specificity, and 0.738 (SURF) - 0.732 (SIFT) for accuracy.
KW - Speiseröhrenkrankheit KW - Diagnose KW - Maschinelles Lernen KW - Bilderkennung KW - Automatische Klassifikation Y1 - 2017 U6 - https://doi.org/10.1109/SIBGRAPI.2017.47 SP - 308 EP - 314 ER - TY - CHAP A1 - Maier, Johannes A1 - Haug, Sonja A1 - Huber, Michaela A1 - Katzky, Uwe A1 - Neumann, Sabine A1 - Perret, Jérôme A1 - Prinzen, Martin A1 - Weber, Karsten A1 - Wittenberg, Thomas A1 - Wöhl, Rebecca A1 - Scorna, Ulrike A1 - Palm, Christoph T1 - Development of a haptic and visual assisted training simulation concept for complex bone drilling in minimally invasive hand surgery T2 - CARS Conference, 5.10.-7.10.2017 Y1 - 2017 ER - TY - JOUR A1 - Meinikheim, Michael A1 - Mendel, Robert A1 - Palm, Christoph A1 - Probst, Andreas A1 - Muzalyova, Anna A1 - Scheppach, Markus W. A1 - Nagl, Sandra A1 - Schnoy, Elisabeth A1 - Römmele, Christoph A1 - Schulz, Dominik A. H. A1 - Schlottmann, Jakob A1 - Prinz, Friederike A1 - Rauber, David A1 - Rueckert, Tobias A1 - Matsumura, Tomoaki A1 - Fernández-Esparrach, Glòria A1 - Parsa, Nasim A1 - Byrne, Michael F. A1 - Messmann, Helmut A1 - Ebigbo, Alanna T1 - Influence of artificial intelligence on the diagnostic performance of endoscopists in the assessment of Barrett’s esophagus: a tandem randomized and video trial JF - Endoscopy N2 - Background This study evaluated the effect of an artificial intelligence (AI)-based clinical decision support system on the performance and diagnostic confidence of endoscopists in their assessment of Barrett’s esophagus (BE). Methods 96 standardized endoscopy videos were assessed by 22 endoscopists with varying degrees of BE experience from 12 centers. Assessment was randomized into two video sets: group A (review first without AI and second with AI) and group B (review first with AI and second without AI). Endoscopists were required to evaluate each video for the presence of Barrett’s esophagus-related neoplasia (BERN) and then decide on a spot for a targeted biopsy. After the second assessment, they were allowed to change their clinical decision and confidence level. Results AI had a stand-alone sensitivity, specificity, and accuracy of 92.2%, 68.9%, and 81.3%, respectively. Without AI, BE experts had an overall sensitivity, specificity, and accuracy of 83.3%, 58.1%, and 71.5%, respectively. With AI, BE nonexperts showed a significant improvement in sensitivity and specificity when videos were assessed a second time with AI (sensitivity 69.8% [95%CI 65.2%–74.2%] to 78.0% [95%CI 74.0%–82.0%]; specificity 67.3% [95%CI 62.5%–72.2%] to 72.7% [95%CI 68.2%–77.3%]). In addition, the diagnostic confidence of BE nonexperts improved significantly with AI. Conclusion BE nonexperts benefitted significantly from additional AI. BE experts and nonexperts remained significantly below the stand-alone performance of AI, suggesting that there may be other factors influencing endoscopists’ decisions to follow or discard AI advice. KW - Artificial Intelligence KW - Endoscopy KW - Medical Image Computing Y1 - 2024 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:898-opus4-72818 PB - Georg Thieme Verlag CY - Stuttgart ER - TY - CHAP A1 - Franz, Daniela A1 - Katzky, Uwe A1 - Neumann, S. A1 - Perret, Jerome A1 - Hofer, Mathias A1 - Huber, Michaela A1 - Schmitt-Rüth, Stephanie A1 - Haug, Sonja A1 - Weber, Karsten A1 - Prinzen, Martin A1 - Palm, Christoph A1 - Wittenberg, Thomas T1 - Haptisches Lernen für Cochlea Implantationen BT - Konzept - HaptiVisT Projekt T2 - 15. 
Jahrestagung der Deutschen Gesellschaft für Computer- und Roboterassistierte Chirurgie (CURAC2016), Tagungsband, 2016, Bern, 29.09. - 01.10. N2 - The implantation of a cochlear implant requires surgical access through the patient's temporal bone and tympanic cavity. The surgeon has a restricted view of the operating field, which moreover contains many structures at risk. Performing a cochlear implantation safely and without errors requires extensive theoretical and practical (partly in-service) training as well as many years of experience. Using real clinical CT/MRI data of the inner and middle ear and the interactive segmentation of the structures depicted in them (nerves, cochlea, ossicles, ...), the HaptiVisT project realizes a haptic-visual training system for the implantation of inner- and middle-ear implants, designed as a so-called "serious game" with immersive didactics. The evaluation of the demonstrator with regard to its suitability is carried out in parallel to the development process and in a results-oriented manner, in order to uncover possible technical or didactic errors before completion of the system. Three staggered evaluations focus on surgical, didactic, and haptic-ergonomic acceptance criteria. KW - Virtuelles Training KW - Haptisches Feedback KW - Gamification in der Medizin KW - Cochlea-Implantat KW - Operationstechnik KW - Simulation KW - Haptische Feedback-Technologie KW - Lernprogramm Y1 - 2016 UR - https://curac.org/images/advportfoliopro/images/CURAC2016/CURAC%202016%20Tagungsband.pdf SP - 21 EP - 26 ER - TY - GEN A1 - Ebigbo, Alanna A1 - Rauber, David A1 - Ayoub, Mousa A1 - Birzle, Lisa A1 - Matsumura, Tomoaki A1 - Probst, Andreas A1 - Steinbrück, Ingo A1 - Nagl, Sandra A1 - Römmele, Christoph A1 - Meinikheim, Michael A1 - Scheppach, Markus W. A1 - Palm, Christoph A1 - Messmann, Helmut T1 - Early Esophageal Cancer and the Generalizability of Artificial Intelligence T2 - Endoscopy N2 - Aims Artificial Intelligence (AI) systems in gastrointestinal endoscopy are narrow because they are trained to solve only one specific task. Unlike narrow AI, general AI systems may be able to solve multiple and unrelated tasks. We aimed to understand whether an AI system trained to detect, characterize, and segment early Barrett's neoplasia (Barrett's AI) is only capable of detecting this pathology or can also detect and segment other diseases like early squamous cell cancer (SCC). Methods 120 white-light (WL) and narrow-band endoscopic images (NBI) from 60 patients (1 WL and 1 NBI image per patient) were extracted from the endoscopic database of the University Hospital Augsburg. Images were annotated by three expert endoscopists with extensive experience in the diagnosis and endoscopic resection of early esophageal neoplasias. An AI system based on the DeepLabV3+ architecture dedicated to early Barrett's neoplasia was tested on these images. The AI system was neither trained with SCC images nor had it seen the test images prior to evaluation. The overlap between the three expert annotations ("expert agreement") was the ground truth for evaluating AI performance. Results Barrett's AI detected early SCC with a mean intersection over reference (IoR) of 92% when at least 1 pixel of the AI prediction overlapped with the expert agreement. When the threshold was increased to 5%, 10%, and 20% overlap with the expert agreement, the IoR was 88%, 85% and 82%, respectively.
The mean Intersection over Union (IoU), a metric describing the segmentation agreement between the AI prediction and the expert agreement, was 0.45. The mean expert IoU as a measure of agreement between the three experts was 0.60. Conclusions In the context of this pilot study, the predictions of SCC by an AI dedicated to Barrett's esophagus showed some overlap with the expert agreement. Therefore, features learned from Barrett's cancer-related training might also be helpful for SCC prediction. Our results allow different possible explanations. On the one hand, some Barrett's cancer features generalize toward the related task of assessing early SCC. On the other hand, the Barrett's AI is less specific to Barrett's cancer than a general predictor of pathological tissue. However, we expect to enhance the detection quality significantly by extending the training to SCC-specific data. The insights of this study open the way towards a transfer learning approach for more efficient training of AI to solve tasks in other domains. Y1 - 2024 U6 - https://doi.org/10.1055/s-0044-1783775 VL - 56 IS - S 02 SP - S428 PB - Thieme CY - Stuttgart ER - TY - CHAP A1 - Rueckert, Tobias A1 - Rieder, Maximilian A1 - Feussner, Hubertus A1 - Wilhelm, Dirk A1 - Rueckert, Daniel A1 - Palm, Christoph ED - Maier, Andreas ED - Deserno, Thomas M. ED - Handels, Heinz ED - Maier-Hein, Klaus ED - Palm, Christoph ED - Tolxdorff, Thomas T1 - Smoke Classification in Laparoscopic Cholecystectomy Videos Incorporating Spatio-temporal Information T2 - Bildverarbeitung für die Medizin 2024: Proceedings, German Workshop on Medical Image Computing, March 10-12, 2024, Erlangen N2 - Heavy smoke development represents an important challenge for operating physicians during laparoscopic procedures and can potentially affect the success of an intervention due to reduced visibility and orientation. Reliable and accurate recognition of smoke is therefore a prerequisite for the use of downstream systems such as automated smoke evacuation systems. Current approaches distinguish between non-smoked and smoked frames but often ignore the temporal context inherent in endoscopic video data. In this work, we therefore present a method that utilizes the pixel-wise displacement from randomly sampled images to the preceding frames determined using the optical flow algorithm by providing the transformed magnitude of the displacement as an additional input to the network. Further, we incorporate the temporal context at evaluation time by applying an exponential moving average on the estimated class probabilities of the model output to obtain more stable and robust results over time. We evaluate our method on two convolutional and one state-of-the-art transformer architecture and show improvements in the classification results over a baseline approach, regardless of the network used. Y1 - 2024 U6 - https://doi.org/10.1007/978-3-658-44037-4_78 SP - 298 EP - 303 PB - Springer CY - Wiesbaden ER - TY - GEN A1 - Zellmer, Stephan A1 - Rauber, David A1 - Probst, Andreas A1 - Weber, Tobias A1 - Braun, Georg A1 - Römmele, Christoph A1 - Nagl, Sandra A1 - Schnoy, Elisabeth A1 - Messmann, Helmut A1 - Ebigbo, Alanna A1 - Palm, Christoph T1 - Artificial intelligence as a tool in the detection of the papillary ostium during ERCP T2 - Endoscopy N2 - Aims Endoscopic retrograde cholangiopancreatography (ERCP) is the gold standard in the diagnosis as well as treatment of diseases of the pancreatobiliary tract.
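The temporal smoothing described in the smoke-classification record above (an exponential moving average over estimated class probabilities) can be sketched as follows; the smoothing factor is an assumption, as the abstract does not state it:

```python
import numpy as np

def ema_probabilities(prob_stream, alpha: float = 0.8):
    """Yield exponentially smoothed class probabilities per video frame.

    prob_stream yields one probability vector per frame; alpha is an
    assumed smoothing factor, not a value given in the abstract.
    """
    ema = None
    for probs in prob_stream:
        probs = np.asarray(probs, dtype=float)
        ema = probs if ema is None else alpha * ema + (1.0 - alpha) * probs
        yield ema
```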
However, it is technically complex and has a relatively high complication rate. In particular, cannulation of the papillary ostium remains challenging. The aim of this study is to examine whether a deep-learning algorithm can reliably detect the major duodenal papilla and, in particular, the papillary ostium, and could therefore be a valuable tool for inexperienced endoscopists, particularly in training situations. Methods We analyzed a total of 654 retrospectively collected images of 85 patients. Both the major duodenal papilla and the ostium were then segmented. Afterwards, a neural network was trained using a deep-learning algorithm. A 5-fold cross-validation was performed. Subsequently, we ran the algorithm on 5 prospectively collected videos of ERCPs. Results 5-fold cross-validation on the 654 labeled images resulted in an F1 score of 0.8007, a sensitivity of 0.8409, and a specificity of 0.9757 for the class papilla, and an F1 score of 0.5724, a sensitivity of 0.5456, and a specificity of 0.9966 for the class ostium. Averaged across both classes (papilla and ostium), the F1 score was 0.6866, the sensitivity 0.6933, and the specificity 0.9861. In 100% of cases, the AI-detected localization of the papillary ostium in the prospectively collected videos corresponded to the localization of the cannulation performed by the endoscopist. Conclusions In the present study, the neural network was able to identify the major duodenal papilla with high sensitivity and high specificity. In detecting the papillary ostium, the sensitivity was notably lower. However, when used on videos, the AI was able to identify the location of the subsequent cannulation with 100% accuracy. In the future, the neural network will be trained with more data. Thus, a suitable tool for ERCP could be established, especially for training situations. Y1 - 2024 U6 - https://doi.org/10.1055/s-0044-1783138 VL - 56 IS - S 02 SP - S198 PB - Thieme CY - Stuttgart ER - TY - GEN A1 - Scheppach, Markus W. A1 - Weber Nunes, Danilo A1 - Arizi, X. A1 - Rauber, David A1 - Probst, Andreas A1 - Nagl, Sandra A1 - Römmele, Christoph A1 - Meinikheim, Michael A1 - Palm, Christoph A1 - Messmann, Helmut A1 - Ebigbo, Alanna T1 - Procedural phase recognition in endoscopic submucosal dissection (ESD) using artificial intelligence (AI) T2 - Endoscopy N2 - Aims Recent evidence suggests the possibility of intraprocedural phase recognition by AI algorithms in surgical operations as well as in endoscopic interventions such as peroral endoscopic myotomy and endoscopic submucosal dissection (ESD). Precise measurement of the intraprocedural phase distribution may deepen the understanding of the procedure. Furthermore, real-time quality assessment as well as automation of reporting may become possible. Therefore, we aimed to develop an AI algorithm for intraprocedural phase recognition during ESD. Methods A training dataset of 364,385 single frames from 9 full-length ESD videos was compiled. Each frame was assigned to exactly one procedural phase: scope manipulation, marking, injection, application of electrical current, or bleeding. This training dataset was used to train a Video Swin transformer to recognize the phases. Temporal information was included via logarithmic frame sampling. Validation was performed using two separate ESD videos with 29,801 single frames.
Results The validation yielded sensitivities of 97.81%, 97.83%, 95.53%, 85.01%, and 87.55% for scope manipulation, marking, injection, application of electrical current, and bleeding, respectively. Specificities of 77.78%, 90.91%, 95.91%, 93.65%, and 84.76% were measured for the same classes. Conclusions The developed algorithm was able to classify full-length ESD videos on a frame-by-frame basis into the predefined classes with high sensitivities and specificities. Future research will aim at developing quality metrics based on single-operator phase distribution. Y1 - 2024 U6 - https://doi.org/10.1055/s-0044-1783804 VL - 56 IS - S 02 SP - S439 PB - Thieme CY - Stuttgart ER - TY - GEN ED - Maier, Andreas ED - Deserno, Thomas M. ED - Handels, Heinz ED - Maier-Hein, Klaus ED - Palm, Christoph ED - Tolxdorff, Thomas T1 - Bildverarbeitung für die Medizin 2024 BT - Proceedings, German Workshop on Medical Image Computing, Erlangen, March 10-12, 2024 N2 - For more than 25 years, the workshop "Bildverarbeitung für die Medizin" has been established as a successful event. In 2024, its aim is once again to present current research results and to deepen the dialogue between scientists, industry, and users. The contributions in this volume - many of them in English - cover all areas of medical image processing, in particular imaging and acquisition, segmentation and analysis, visualization and animation, computer-aided diagnosis, and image-based therapy planning and therapy. Methods from machine learning, biomechanical modeling, and validation and quality assurance are employed. KW - Image Processing KW - Computer-Assisted Medicine KW - Imaging Technique KW - Image Analysis KW - Deep Learning Y1 - 2024 SN - 978-3-658-44037-4 U6 - https://doi.org/10.1007/978-3-658-44037-4 SN - 1431-472X PB - Springer CY - Wiesbaden ER - TY - CHAP A1 - Gutbrod, Max A1 - Geisler, Benedikt A1 - Rauber, David A1 - Palm, Christoph ED - Maier, Andreas ED - Deserno, Thomas M. ED - Handels, Heinz ED - Maier-Hein, Klaus ED - Palm, Christoph ED - Tolxdorff, Thomas T1 - Data Augmentation for Images of Chronic Foot Wounds T2 - Bildverarbeitung für die Medizin 2024: Proceedings, German Workshop on Medical Image Computing, March 10-12, 2024, Erlangen N2 - Training data for neural networks is often scarce in the medical domain, which results in models that struggle to generalize and consequently show poor performance on unseen datasets. Generally, adding augmentation methods to the training pipeline considerably enhances a model's performance. Using the dataset of the Foot Ulcer Segmentation Challenge, we analyze two additional augmentation methods in the domain of chronic foot wounds - local warping of wound edges along with projection and blurring of shapes inside wounds. Our experiments show that improvements in the Dice similarity coefficient and Normalized Surface Distance metrics depend on a sensible selection of those augmentation methods. Y1 - 2024 U6 - https://doi.org/10.1007/978-3-658-44037-4_71 SP - 261 EP - 266 PB - Springer CY - Wiesbaden ER - TY - JOUR A1 - Souza Jr., Luis Antonio de A1 - Passos, Leandro A. A1 - Santana, Marcos Cleison S.
A1 - Mendel, Robert A1 - Rauber, David A1 - Ebigbo, Alanna A1 - Probst, Andreas A1 - Messmann, Helmut A1 - Papa, João Paulo A1 - Palm, Christoph T1 - Layer-selective deep representation to improve esophageal cancer classification JF - Medical & Biological Engineering & Computing N2 - Even though artificial intelligence and machine learning have demonstrated remarkable performance in medical image computing, their accountability and transparency must be improved to transfer this success into clinical practice. The reliability of machine learning decisions must be explained and interpreted, especially when supporting medical diagnosis. For this task, the black-box nature of deep learning techniques must somehow be illuminated to clarify their promising results. Hence, we aim to investigate the impact of the ResNet-50 deep convolutional design on Barrett's esophagus and adenocarcinoma classification. To this end, and with the aim of proposing a two-step learning technique, the output of each convolutional layer composing the ResNet-50 architecture was trained and classified to identify the layers with the greatest impact within the architecture. We showed that local information and high-dimensional features are essential to improving classification for our task. Moreover, we observed a significant improvement when the most discriminative layers were given greater weight in the training and classification of ResNet-50 for Barrett's esophagus and adenocarcinoma classification, demonstrating that both human knowledge and computational processing may influence the correct learning of such a problem. KW - Multistep training KW - Barrett's esophagus detection KW - Convolutional neural networks KW - Deep learning Y1 - 2024 U6 - https://doi.org/10.1007/s11517-024-03142-8 PB - Springer Nature CY - Heidelberg ER -