TY - CHAP A1 - Palm, Christoph A1 - Scholl, Ingrid A1 - Lehmann, Thomas M. A1 - Spitzer, Klaus ED - Lehmann, Thomas M. ED - Scholl, Ingrid ED - Spitzer, Klaus T1 - Trennung von diffuser und spiegelnder Reflexion in Farbbildern des Larynx zur Untersuchung von Farb- und Formmerkmalen der Stimmlippen T2 - Bildverarbeitung für die Medizin. Algorithmen, Systeme, Anwendungen. Proceedings des Aachener Workshops am 8. und 9. November 1996 N2 - To support the diagnosis of laryngeal diseases, a color and shape analysis of the vocal cords is to be performed. This contribution presents a method for separating the specular and diffuse reflection components in color images of the larynx. The color of the diffuse component corresponds to the illumination-independent object color, while its weighting factors serve as input for shape-from-shading methods for surface reconstruction. KW - Laryngoskopie KW - Farbbild KW - Reflexion Y1 - 1996 UR - https://scholar.google.de/citations?user=nc0XkcMAAAAJ&hl=fa#d=gs_md_cita-d&u=%2Fcitations%3Fview_op%3Dview_citation%26hl%3Dfa%26user%3Dnc0XkcMAAAAJ%26citation_for_view%3Dnc0XkcMAAAAJ%3AqjMakFHDy7sC%26tzom%3D-120 SP - 229 EP - 234 PB - Verlag der Augustinus-Buchhandlung CY - Aachen ER - TY - CHAP A1 - Palm, Christoph A1 - Lehmann, Thomas M. A1 - Bredno, J. A1 - Neuschaefer-Rube, C. A1 - Klajman, S. A1 - Spitzer, Klaus T1 - Automated Analysis of Stroboscopic Image Sequences by Vibration Profiles T2 - Advances in Quantitative Laryngoscopy, Voice and Speech Research, Procs. 5th International Workshop N2 - A method for automated segmentation of vocal cords in stroboscopic video sequences is presented. In contrast to earlier approaches, the inner and outer contours of the vocal cords are independently delineated. Automatic segmentation of the low-contrast images is carried out by connecting the shape constraint of a point distribution model to a multi-channel region-based balloon model. This enables us to robustly compute a vibration profile that is used as a new diagnostic tool to visualize several vibration parameters in a single graphic. The vibration profiles are studied in two cases: one physiological vibration and one functional pathology. KW - Vibration Profile KW - Stroboscopic Images KW - Contour Detection KW - Balloon Model KW - Point Distribution Model Y1 - 2001 UR - https://www.researchgate.net/publication/242439073_Automated_Analysis_of_Stroboscopic_Image_Sequences_by_Vibration_Profiles ER - TY - CHAP A1 - Palm, Christoph T1 - Fusion of Serial 2D Section Images and MRI Reference BT - an Overview T2 - Workshop Innovative Verarbeitung bioelektrischer und biomagnetischer Signale (bbs2014), Berlin, 10.04.2014 N2 - High-resolution serial 2D section images, resulting from innovative imaging methods, become even more valuable if they are fused with in vivo volumes. Achieving this goal would restore the 3D context of the sections, correct the deformations, and eliminate the artefacts. However, registration in this field faces major challenges and is not solved in general. On the other hand, several approaches have been introduced that deal with at least some of these difficulties. Here, a brief overview of the topic is given and some of the solutions are presented. This overview does not claim to be a complete review, but may serve as a starting point for those interested in this field.
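Aside: to make the dichromatic separation in the first record above (Palm et al., 1996) concrete, a minimal Python sketch of how a single RGB pixel can be decomposed into diffuse and specular weights once the two color vectors are known. The color vectors and the pixel are invented; the original work estimates the object color from the image itself via clustering, which is not shown here.

    import numpy as np

    # Dichromatic reflection model: I = m_d * c_d + m_s * c_s, where
    # c_d is the diffuse (object) color and c_s the specular (illuminant) color.
    # Both vectors are assumed known for this sketch.
    c_d = np.array([0.8, 0.3, 0.3])  # hypothetical reddish tissue color
    c_s = np.array([1.0, 1.0, 1.0])  # hypothetical white light source

    def decompose(pixel_rgb):
        """Least-squares estimate of the weights m_d and m_s for one pixel."""
        A = np.stack([c_d, c_s], axis=1)               # 3x2 color basis
        m, *_ = np.linalg.lstsq(A, pixel_rgb, rcond=None)
        return m                                       # [m_d, m_s]

    pixel = 0.6 * c_d + 0.2 * c_s                      # synthetic mixed pixel
    m_d, m_s = decompose(pixel)
    print(m_d * c_d)  # diffuse part, i.e., the illumination-independent color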
KW - Kernspintomografie KW - Optimierung KW - Magnetic Resonance Imaging KW - MRI KW - Literaturbericht Y1 - 2014 U6 - https://doi.org/10.13140/RG.2.1.1358.3449 ER - TY - GEN A1 - Weigert, Markus A1 - Beyer, Thomas A1 - Quick, Harald H. A1 - Pietrzyk, Uwe A1 - Palm, Christoph A1 - Müller, Stefan P. T1 - Generation of a MRI reference data set for the validation of automatic, non-rigid image co-registration algorithms T2 - Nuklearmedizin KW - Kernspintomografie KW - Referenzdaten KW - Registrierung KW - Algorithmus Y1 - 2007 VL - 46 IS - 2 SP - A116 ER - TY - GEN A1 - Weigert, Markus A1 - Palm, Christoph A1 - Quick, Harald H. A1 - Müller, Stefan P. A1 - Pietrzyk, Uwe A1 - Beyer, Thomas T1 - Template for MR-based attenuation correction for whole-body PET/MR imaging T2 - Nuklearmedizin KW - Kernspintomografie KW - Positronen-Emissions-Tomografie KW - Bildgebendes Verfahren KW - Schwächung Y1 - 2007 VL - 46 IS - 2 SP - A115 ER - TY - GEN A1 - Palm, Christoph A1 - Crum, William R. A1 - Pietrzyk, Uwe A1 - Hawkes, David J. T1 - Application of Fluid and Elastic Registration Methods to Histological Rat Brain Sections T2 - Biomedizinische Technik KW - Registrierung KW - Gehirn KW - Schnittdarstellung Y1 - 2007 VL - 52 IS - Suppl. ER - TY - JOUR A1 - Palm, Christoph A1 - Lehmann, Thomas M. T1 - Classification of Color Textures by Gabor Filtering JF - Machine GRAPHICS & VISION Y1 - 2002 VL - 11 IS - 2/3 SP - 195 EP - 219 ER - TY - JOUR A1 - Maier, Johannes A1 - Weiherer, Maximilian A1 - Huber, Michaela A1 - Palm, Christoph T1 - Optically tracked and 3D printed haptic phantom hand for surgical training system JF - Quantitative Imaging in Medicine and Surgery N2 - Background: For surgical fixation of bone fractures of the human hand, so-called Kirschner-wires (K-wires) are drilled through bone fragments. Because these minimally invasive drilling procedures are performed without a view of risk structures like vessels and nerves, thorough training of young surgeons is necessary. For the development of a virtual reality (VR) based training system, a three-dimensional (3D) printed phantom hand is required. To ensure intuitive operation, this phantom hand has to be realistic both in its position relative to the drill and in its haptic features. The softest 3D printing material available on the market, however, is too hard to imitate human soft tissue. Therefore, a support-material (SUP) filled metamaterial is used to soften the raw material. Realistic haptic features are important to palpate protrusions of the bone to determine the drilling starting point and angle. Optical real-time tracking is used to transfer position and rotation to the training system. Methods: A metamaterial already developed in previous work is further improved by use of a new unit cell. Thus, the amount of SUP within the volume can be increased and the tissue is softened further. In addition, the human anatomy is transferred to the entire hand model. A subcutaneous fat layer and penetration of air through pores into the volume simulate the shiftability of skin layers. For optical tracking, a rotationally symmetric marker attached to the phantom hand, with a corresponding reference marker, is developed. In order to ensure trouble-free position transmission, various types of marker point applications are tested. Results: Several cuboid and forearm sample prints led to a final 30-centimeter-long hand model. The whole haptic phantom could be printed faultlessly within about 17 hours.
The metamaterial consisting of the new unit cell results in an increased SUP share of 4.32%. As validated in an expert surgeon study, this, in combination with a displacement of the uppermost skin layer, allows good palpability of the bones. Tracking of the hand marker in dodecahedron design works trouble-free in conjunction with a reference marker attached to the worktop of the training system. Conclusions: In this work, an optically tracked and haptically correct phantom hand was developed using dual-material 3D printing, which can be easily integrated into a surgical training system. KW - Handchirurgie KW - 3D-Druck KW - Lernprogramm KW - Zielverfolgung KW - HaptiVisT KW - Dual-material 3D printing KW - hand surgery training KW - metamaterial KW - tissue imitating phantom hand Y1 - 2020 U6 - https://doi.org/10.21037/qims.2019.12.03 N1 - Corresponding author: Christoph Palm VL - 10 IS - 02 SP - 340 EP - 455 PB - AME Publishing Company CY - Hong Kong, China ER - TY - GEN A1 - Bauer, Dagmar A1 - Stoffels, Gabriele A1 - Pauleit, Dirk A1 - Palm, Christoph A1 - Hamacher, Kurt A1 - Coenen, Heinz H. A1 - Langen, Karl T1 - Uptake of F-18-fluoroethyl-L-tyrosine and H-3-L-methionine in focal cortical ischemia T2 - The Journal of Nuclear Medicine N2 - Objectives: C-11-methionine (MET) is particularly useful in brain tumor diagnosis, but unspecific uptake, e.g. in cerebral ischemia, has been reported (1). The F-18-labeled amino acid O-(2-[F-18]fluoroethyl)-L-tyrosine (FET) shows a clinical potential similar to that of MET in brain tumor diagnosis but is applicable on a wider clinical scale. The aim of this study was to evaluate the uptake of FET and H-3-MET in focal cortical ischemia in rats by dual tracer autoradiography. Methods: Focal cortical ischemia was induced in 12 Fisher CDF rats using the photothrombosis model (PT). One day (n=3), two days (n=5) and seven days (n=4) after induction of the lesion, FET and H-3-MET were injected intravenously. One hour after tracer injection the animals were killed, and the brains were removed immediately and frozen in 2-methylbutane at -50°C. Brains were cut in coronal sections (thickness: 20 µm) and exposed first to H-3 insensitive photoimager plates to measure the FET distribution. After decay of F-18, the distribution of H-3-MET was determined. The autoradiograms were evaluated by regions of interest (ROIs) placed on areas with increased tracer uptake in the PT and the contralateral brain. Lesion-to-brain ratios (L/B) were calculated by dividing the mean uptake in the lesion by that in the brain. Based on previous studies in gliomas, an L/B ratio > 1.6 was considered pathological for FET. Results: Variably increased uptake of both tracers was observed in the PT and its demarcation zone at all stages after PT. The cut-off level of 1.6 for FET was exceeded in 9/12 animals. One day after PT the L/B ratios were 2.0 ± 0.6 for FET vs. 2.1 ± 1.0 for MET (mean ± SD); two days after lesion 2.2 ± 0.7 for FET vs. 2.7 ± 1.0 for MET and 7 days after lesion 2.4 ± 0.4 for FET vs. 2.4 ± 0.1 for MET. In single cases, discrepancies in the uptake pattern of FET and MET were observed. Conclusions: FET, like MET, may exhibit significant uptake in infarcted areas or their immediate vicinity, which has to be considered in the differential diagnosis of unknown brain lesions. The discrepancies in the uptake pattern of FET and MET in some cases indicate either differences in the transport mechanisms of both amino acids or a different affinity for certain cellular components.
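Aside: the lesion-to-brain ratio in the Bauer et al. record above is simple arithmetic; a short sketch with invented ROI means (only the FET cutoff of 1.6 is taken from the abstract).

    import numpy as np

    # Invented mean ROI uptake values (arbitrary units) from autoradiograms.
    lesion_roi = np.array([4.1, 3.8, 4.4])      # ROIs in the lesion
    background_roi = np.array([2.0, 1.9, 2.1])  # contralateral brain ROIs

    lb_ratio = lesion_roi.mean() / background_roi.mean()
    pathological = lb_ratio > 1.6               # cutoff from prior glioma studies
    print(f"L/B = {lb_ratio:.2f}, pathological for FET: {pathological}")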
Y1 - 2006 UR - http://jnm.snmjournals.org/content/47/suppl_1/284P.3 VL - 47 IS - Suppl. 1 SP - 284P ER - TY - JOUR A1 - Hutterer, Markus A1 - Hattingen, Elke A1 - Palm, Christoph A1 - Proescholdt, Martin Andreas A1 - Hau, Peter T1 - Current standards and new concepts in MRI and PET response assessment of antiangiogenic therapies in high-grade glioma patients JF - Neuro-Oncology N2 - Despite multimodal treatment, the prognosis of high-grade gliomas is grim. As tumor growth is critically dependent on new blood vessel formation, antiangiogenic treatment approaches offer an innovative treatment strategy. Bevacizumab, a humanized monoclonal antibody, has been in the spotlight of antiangiogenic approaches for several years. Currently, MRI including contrast-enhanced T1-weighted and T2/fluid-attenuated inversion recovery (FLAIR) images is routinely used to evaluate antiangiogenic treatment response (Response Assessment in Neuro-Oncology criteria). However, by restoring the blood–brain barrier, bevacizumab may reduce T1 contrast enhancement and T2/FLAIR hyperintensity, thereby obscuring the imaging-based detection of progression. The aim of this review is to highlight the recent role of imaging biomarkers from MR and PET imaging on measurement of disease progression and treatment effectiveness in antiangiogenic therapies. Based on the reviewed studies, multimodal imaging combining standard MRI with new physiological MRI techniques and metabolic PET imaging, in particular amino acid tracers, may have the ability to detect antiangiogenic drug susceptibility or resistance prior to morphological changes. As advances occur in the development of therapies that target specific biochemical or molecular pathways and alter tumor physiology in potentially predictable ways, the validation of physiological and metabolic imaging biomarkers will become increasingly important in the near future. KW - High-grade glioma KW - Antiangiogenic treatment KW - MRI KW - PET KW - Multimodal response assessment KW - Gliom KW - Antiangiogenese KW - Bildgebendes Verfahren KW - Biomarker Y1 - 2015 U6 - https://doi.org/10.1093/neuonc/nou322 VL - 17 IS - 6 SP - 784 EP - 800 ER - TY - CHAP A1 - Palm, Christoph A1 - Lehmann, Thomas M. A1 - Spitzer, Klaus T1 - Bestimmung der Lichtquellenfarbe bei der Endoskopie mikrotexturierter Oberflächen des Kehlkopfes T2 - 5. Workshop Farbbildverarbeitung, Ilmenau, 1999 N2 - To support the diagnosis of vocal cord diseases, objective parameters describing the movement, color and shape of the vocal cords are developed and clinically evaluated within the research project Quantitative Digital Laryngoscopy. While the movement analysis provides insight into functional voice disorders, the parameters of the color and shape analysis describe morphological changes of the vocal cord tissue. This contribution presents the methods and results obtained so far for the movement and color analysis. The movement analysis was carried out with an extended contour model (snakes). Owing to the modified contour model, the contours of the vocal cords could be detected automatically and reliably over the entire image sequence. Measuring the contours yields new quantitative parameters for the assessment of laryngoscopic vocal cord recordings. To determine the color properties of the vocal cords, the object color was computed from the RGB image, independently of the color of the light source, using clustering methods and quarter-circle analysis.
With this color analysis, the color of the light source could be determined and the illumination-independent color image computed. The quantification of vocal cord redness is, for example, a decisive criterion for the diagnosis of acute laryngitis. KW - Konturverfolgung KW - Snakes KW - Dichromatisches Reflexionsmodell KW - Farbkonstanz KW - Laryngoskopie Y1 - 1999 UR - http://www.germancolorgroup.de/html/Vortr_99_pdf/01_Palm.pdf SP - 3 EP - 10 ER - TY - CHAP A1 - Zehner, Alexander A1 - Szalo, Alexander Eduard A1 - Palm, Christoph T1 - GraphMIC: Easy Prototyping of Medical Image Computing Applications T2 - Interactive Medical Image Computing (IMIC), Workshop at the Medical Image Computing and Computer Assisted Interventions (MICCAI 2015), 2015, Munich N2 - GraphMIC is a cross-platform image processing application utilizing the libraries ITK and OpenCV. The abstract structure of image processing pipelines is visually represented by user interface components based on modern QtQuick technology and allows users to focus on the arrangement and parameterization of operations rather than implementing the equivalent functionality natively in C++. The application's central goal is to improve and simplify the typical workflow by providing various high-level features and functions like multithreading, image sequence processing and advanced error handling. A built-in Python interpreter allows the creation of custom nodes, where user-defined algorithms can be integrated to extend the basic functionality. An embedded 2D/3D visualizer gives feedback on the resulting image of an operation or of the whole pipeline. User inputs like seed points, contours or regions are forwarded to the processing pipeline as parameters to offer semi-automatic image computing. We report the main concept of the application and introduce several features and their implementation. Finally, the current state of development as well as future perspectives of GraphMIC are discussed. KW - Bildverarbeitung KW - Medizin Y1 - 2015 U6 - https://doi.org/10.13140/RG.2.1.3718.4725 N1 - Open-Access-Publikation SP - 395 EP - 400 ER - TY - CHAP A1 - Palm, Christoph A1 - Fischer, B. A1 - Lehmann, Thomas M. A1 - Spitzer, Klaus T1 - Hierarchische Wasserscheiden-Transformation zur Lippensegmentierung in Farbbildern T2 - Bildverarbeitung für die Medizin 2000 N2 - A hierarchical, color-based watershed transformation is presented for solving complex segmentation problems. Small modifications regarding the choice of seed points and the flooding process result in significant improvements of the segmentation. The method was applied to lip detection in color image sequences, which are evaluated automatically for the quantitative description of articulatory movements. Experiments with 245 images from 6 sequences showed an error rate of 13%. KW - Hierarchische Wasserscheiden-Transformation KW - Segmentierung der Lippen KW - Bewegungsanalyse KW - Farbbildverarbeitung Y1 - 2000 U6 - https://doi.org/10.1007/978-3-642-59757-2_20 SP - 106 EP - 110 PB - Springer CY - Berlin ER - TY - CHAP A1 - Palm, Christoph A1 - Lehmann, Thomas M. A1 - Spitzer, Klaus T1 - Color Texture Analysis of Moving Vocal Cords Using Approaches from Statistics and Signal Theory T2 - Advances in Quantitative Laryngoscopy, Voice and Speech Research, Procs. 4th International Workshop, Friedrich Schiller University, Jena N2 - Textural features are applied for the detection of morphological pathologies of vocal cords.
Co-occurrence matrices as statistical features are presented, as well as filter bank analysis by Gabor filters. Both methods are extended to handle color images. Their robustness against camera movement and vibration of the vocal cords is evaluated. Classification results for three in vivo sequences range between 94.4% and 98.9%. The classification errors decrease if color features are used instead of grayscale features, for both the statistical and the Fourier features. KW - Color Texture KW - Gabor Filter KW - Cooccurrence Matrix KW - Image Processing Y1 - 2000 SP - 49 EP - 56 ER - TY - CHAP A1 - Palm, Christoph A1 - Schanze, Thomas T1 - Biomedical Image and Signal Computing (BISC 2013) T2 - 58. Jahrestagung der Deutschen Gesellschaft für Medizinische Informatik, Biometrie und Epidemiologie e.V. (GMDS 2013), Lübeck, 01.-05.09.2013 Y1 - 2013 U6 - https://doi.org/10.3205/13gmds257 N1 - Meeting Abstract IS - DocAbstr. 324 PB - German Medical Science GMS Publishing House CY - Düsseldorf ER - TY - BOOK A1 - Palm, Christoph T1 - Integrative Auswertung von Farbe und Textur Y1 - 2003 UR - http://publications.rwth-aachen.de/record/58707/files/Palm_Christoph.pdf PB - Der Andere Verlag ER - TY - JOUR A1 - Graßmann, Felix A1 - Mengelkamp, Judith A1 - Brandl, Caroline A1 - Harsch, Sebastian A1 - Zimmermann, Martina E. A1 - Linkohr, Birgit A1 - Peters, Annette A1 - Heid, Iris M. A1 - Palm, Christoph A1 - Weber, Bernhard H. F. T1 - A Deep Learning Algorithm for Prediction of Age-Related Eye Disease Study Severity Scale for Age-Related Macular Degeneration from Color Fundus Photography JF - Ophthalmology N2 - Purpose: Age-related macular degeneration (AMD) is a common threat to vision. While classification of disease stages is critical to understanding disease risk and progression, several systems based on color fundus photographs are known. Most of these require in-depth and time-consuming analysis of fundus images. Herein, we present an automated computer-based classification algorithm. Design: Algorithm development for AMD classification based on a large collection of color fundus images. Validation is performed on a cross-sectional, population-based study. Participants: We included 120 656 manually graded color fundus images from 3654 Age-Related Eye Disease Study (AREDS) participants. AREDS participants were >55 years of age, and non-AMD sight-threatening diseases were excluded at recruitment. In addition, the performance of our algorithm was evaluated in 5555 fundus images from the population-based Kooperative Gesundheitsforschung in der Region Augsburg (KORA; Cooperative Health Research in the Region of Augsburg) study. Methods: We defined 13 classes (9 AREDS steps, 3 late AMD stages, and 1 for ungradable images) and trained several convolutional deep learning architectures. An ensemble of network architectures improved prediction accuracy. An independent dataset was used to evaluate the performance of our algorithm in a population-based study. Main Outcome Measures: κ statistics and accuracy to evaluate the concordance between predicted and expert human grader classification. Results: A network ensemble of 6 different neural net architectures predicted the 13 classes in the AREDS test set with a quadratic weighted κ of 92% (95% confidence interval, 89%–92%) and an overall accuracy of 63.3%. In the independent KORA dataset, images wrongly classified as AMD were mainly the result of a macular reflex observed in young individuals.
By restricting the KORA analysis to individuals >55 years of age and prior exclusion of other retinopathies, the weighted and unweighted κ increased to 50% and 63%, respectively. Importantly, the algorithm detected 84.2% of all fundus images with definite signs of early or late AMD. Overall, 94.3% of healthy fundus images were classified correctly. Conclusions: Our deep learning algorithm revealed a weighted κ outperforming human graders in the AREDS study and is suitable to classify AMD fundus images in other datasets using individuals >55 years of age. KW - Senile Makuladegeneration KW - Krankheitsverlauf KW - Mustererkennung KW - Maschinelles Lernen Y1 - 2018 U6 - https://doi.org/10.1016/j.ophtha.2018.02.037 N1 - Corresponding authors: Bernhard H. F. Weber, University of Regensburg, and Christoph Palm VL - 125 IS - 9 SP - 1410 EP - 1420 PB - Elsevier ER - TY - JOUR A1 - Palm, Christoph T1 - Color Texture Classification by Integrative Co-Occurrence Matrices JF - Pattern Recognition N2 - Integrative co-occurrence matrices are introduced as novel features for color texture classification. The extended co-occurrence notation allows the comparison between integrative and parallel color texture concepts. The information profit of the new matrices is shown quantitatively using the Kolmogorov distance and by extensive classification experiments on two datasets. Applying them to the RGB and the LUV color spaces, the combined color and intensity textures are studied and the existence of intensity-independent pure color patterns is demonstrated. The results are compared with two baselines: gray-scale texture analysis and color histogram analysis. The novel features improve the classification results by up to 20% and 32% over the first and second baseline, respectively. KW - Color texture KW - Co-occurrence matrix KW - Integrative features KW - Kolmogorov distance KW - Image classification Y1 - 2004 U6 - https://doi.org/10.1016/j.patcog.2003.09.010 VL - 37 IS - 5 SP - 965 EP - 976 ER - TY - JOUR A1 - Beyer, Thomas A1 - Weigert, Markus A1 - Quick, Harald H. A1 - Pietrzyk, Uwe A1 - Vogt, Florian A1 - Palm, Christoph A1 - Antoch, Gerald A1 - Müller, Stefan P. A1 - Bockisch, Andreas T1 - MR-based attenuation correction for torso-PET/MR imaging BT - pitfalls in mapping MR to CT data JF - European Journal of Nuclear Medicine and Molecular Imaging N2 - Purpose: MR-based attenuation correction (AC) will become an integral part of combined PET/MR systems. Here, we propose a toolbox to validate MR-AC of clinical PET/MRI data sets. Methods: Torso scans of ten patients were acquired on a combined PET/CT and on a 1.5-T MRI system. MR-based attenuation data were derived from the CT following MR–CT image co-registration and subsequent histogram matching. PET images were reconstructed after CT- (PET/CT) and MR-based AC (PET/MRI). Lesion-to-background (L/B) ratios were estimated on PET/CT and PET/MRI. Results: MR–CT histogram matching leads to a mean voxel intensity difference in the CT- and MR-based attenuation images of 12% (max). Mean differences between PET/MRI and PET/CT were 19% (max). L/B ratios were similar except for the lung, where local misregistration and intensity transformation lead to a biased PET/MRI. Conclusion: Our toolbox can be used to study pitfalls in MR-AC. We found that co-registration accuracy and pixel value transformation determine the accuracy of PET/MRI.
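Aside: the quadratic weighted κ reported in the Graßmann et al. record above can be computed as in the following sketch; the toy grades and the helper function are illustrative, not the study's evaluation code.

    import numpy as np

    def quadratic_weighted_kappa(a, b, n_classes):
        """Cohen's kappa with quadratic disagreement weights."""
        a, b = np.asarray(a), np.asarray(b)
        observed = np.zeros((n_classes, n_classes))
        for i, j in zip(a, b):
            observed[i, j] += 1
        observed /= len(a)
        expected = np.outer(np.bincount(a, minlength=n_classes),
                            np.bincount(b, minlength=n_classes)) / len(a) ** 2
        i, j = np.indices((n_classes, n_classes))
        weights = (i - j) ** 2 / (n_classes - 1) ** 2   # quadratic penalty
        return 1 - (weights * observed).sum() / (weights * expected).sum()

    human = [0, 2, 5, 12, 7, 3]   # invented grades on a 13-class scale
    model = [0, 3, 5, 11, 7, 4]
    print(quadratic_weighted_kappa(human, model, 13))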
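And for the integrative co-occurrence matrices of the Palm (2004) record: the underlying gray-level co-occurrence matrix fits in a few lines of numpy. The displacement and toy image are assumptions, and the paper's integrative color extension (pairing values across channels) is not shown.

    import numpy as np

    def cooccurrence(img, dx=1, dy=0, levels=4):
        """Joint probability of gray-level pairs at displacement (dx, dy)."""
        h, w = img.shape
        glcm = np.zeros((levels, levels))
        for y in range(h - dy):
            for x in range(w - dx):
                glcm[img[y, x], img[y + dy, x + dx]] += 1
        return glcm / glcm.sum()

    toy = np.array([[0, 0, 1, 1],
                    [0, 0, 1, 1],
                    [2, 2, 3, 3],
                    [2, 2, 3, 3]])
    P = cooccurrence(toy)
    contrast = sum(P[i, j] * (i - j) ** 2
                   for i in range(4) for j in range(4))  # classic Haralick feature
    print(P, contrast)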
KW - PET/MRI KW - PET/CT KW - Attenuation correction KW - Kernspintomografie KW - Positronen-Emissions-Tomografie KW - Schwächung Y1 - 2008 U6 - https://doi.org/10.1007/s00259-008-0734-0 VL - 35 IS - 6 SP - 1142 EP - 1146 ER - TY - JOUR A1 - Becker, Johanna Sabine A1 - Matusch, Andreas A1 - Becker, Julia Susanne A1 - Wu, Bei A1 - Palm, Christoph A1 - Becker, Albert Johann A1 - Salber, Dagmar T1 - Mass spectrometric imaging (MSI) of metals using advanced BrainMet techniques for biomedical research JF - International Journal of Mass Spectrometry N2 - Mass spectrometric imaging (MSI) is a young, innovative analytical technique that combines different fields of advanced mass spectrometry and biomedical research with the aim of providing maps of elements and molecules, complexes or fragments. Essential metals such as zinc, copper, iron and manganese in particular play a functional role in signaling, metabolism and homeostasis of the cell. Due to the high degree of spatial organization of metals in biological systems, their distribution analysis is of key interest in the life sciences. We have developed analytical techniques termed BrainMet using laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) imaging to measure the distribution of trace metals in biological tissues for biomedical research and feasibility studies—including bioaccumulation and bioavailability studies, ecological risk assessment and toxicity studies in humans and other organisms. The analytical BrainMet techniques provide quantitative images of metal distributions in brain tissue slices, which can be combined with other imaging modalities such as photomicrography of native or processed tissue (histochemistry, immunostaining) and autoradiography, or with in vivo techniques such as positron emission tomography or magnetic resonance tomography. Prospective and instrumental developments will be discussed concerning the development of metalloprotein microscopy using a laser microdissection (LMD) apparatus for specific sample introduction into an inductively coupled plasma mass spectrometer (LMD-ICP-MS) or an application of the near-field effect in LA-ICP-MS (NF-LA-ICP-MS). These nano-scale mass spectrometric techniques provide improved spatial resolution down to the single cell level. KW - Bioimaging KW - Brain tissue KW - Laser ablation inductively coupled plasma mass spectrometry KW - Laser microdissection inductively coupled plasma mass spectrometry KW - Metals KW - Metallomics KW - Nano-LA-ICP-MS KW - Tumour KW - Massenspektrometrie KW - Bildgebendes Verfahren KW - Metalle KW - Metallproteide KW - Gehirn Y1 - 2011 U6 - https://doi.org/10.1016/j.ijms.2011.01.015 VL - 307 IS - 1-3 SP - 3 EP - 15 PB - Elsevier ER - TY - CHAP A1 - Metzler, V. A1 - Aach, T. A1 - Palm, Christoph A1 - Lehmann, Thomas M. T1 - Texture Classification of Graylevel Images by Multiscale Cross-Co-Occurrence Matrices T2 - Proceedings 15th International Conference on Pattern Recognition (ICPR-2000) N2 - Local gray-level dependencies of natural images can be modelled by means of co-occurrence matrices containing joint probabilities of gray-level pairs. Texture, however, is a resolution-dependent phenomenon, and hence classification depends on the chosen scale. Since there is no optimal scale for all textures, we employ a multiscale approach that acquires textural features at several scales.
Thus, linear and nonlinear scale-spaces are analyzed by multiscale co-occurrence matrices that describe the statistical behavior of a texture in scale-space. Classification is then performed on the basis of texture features taken from the individual scale with the highest discriminatory power. By considering cross-scale occurrences of gray-level pairs, the impact of filters on the feature is described and used for classification of natural textures. This novel method was found to significantly improve classification rates of the common co-occurrence matrix approach on standard textures. Y1 - 2000 U6 - https://doi.org/10.1109/ICPR.2000.906133 SP - 549 EP - 552 ER - TY - GEN A1 - Axer, Markus A1 - Axer, Hubertus A1 - Palm, Christoph A1 - Gräßel, David A1 - Zilles, Karl A1 - Pietrzyk, Uwe T1 - Visualization of Nerve Fibre Orientation in the Visual Cortex of the Human Brain by Means of Polarized Light T2 - Biomedizinische Technik KW - Sehrinde KW - Nervenfaser KW - Ausrichtung KW - Visualisierung KW - Polarisiertes Licht Y1 - 2007 VL - 52 IS - Suppl. SP - 1569048-041 ER - TY - CHAP A1 - Maier, Johannes A1 - Haug, Sonja A1 - Huber, Michaela A1 - Katzky, Uwe A1 - Neumann, Sabine A1 - Perret, Jérôme A1 - Prinzen, Martin A1 - Weber, Karsten A1 - Wittenberg, Thomas A1 - Wöhl, Rebecca A1 - Scorna, Ulrike A1 - Palm, Christoph T1 - Development of a haptic and visual assisted training simulation concept for complex bone drilling in minimally invasive hand surgery T2 - CARS Conference, 5.10.-7.10.2017 Y1 - 2017 ER - TY - JOUR A1 - Scheppach, Markus W. A1 - Rauber, David A1 - Stallhofer, Johannes A1 - Muzalyova, Anna A1 - Otten, Vera A1 - Manzeneder, Carolin A1 - Schwamberger, Tanja A1 - Wanzl, Julia A1 - Schlottmann, Jakob A1 - Tadic, Vidan A1 - Probst, Andreas A1 - Schnoy, Elisabeth A1 - Römmele, Christoph A1 - Fleischmann, Carola A1 - Meinikheim, Michael A1 - Miller, Silvia A1 - Märkl, Bruno A1 - Stallmach, Andreas A1 - Palm, Christoph A1 - Messmann, Helmut A1 - Ebigbo, Alanna T1 - Detection of duodenal villous atrophy on endoscopic images using a deep learning algorithm JF - Gastrointestinal Endoscopy N2 - Background and aims: Celiac disease with its endoscopic manifestation of villous atrophy is underdiagnosed worldwide. The application of artificial intelligence (AI) for the macroscopic detection of villous atrophy at routine esophagogastroduodenoscopy may improve diagnostic performance. Methods: A dataset of 858 endoscopic images of 182 patients with villous atrophy and 846 images from 323 patients with normal duodenal mucosa was collected and used to train a ResNet 18 deep learning model to detect villous atrophy. An external data set was used to test the algorithm, in addition to six fellows and four board-certified gastroenterologists. Fellows could consult the AI algorithm's result during the test. From their consultation distribution, test images were stratified into "easy" and "difficult", and performance was measured separately for each class. Results: External validation of the AI algorithm yielded values of 90%, 76%, and 84% for sensitivity, specificity, and accuracy, respectively. Fellows scored values of 63%, 72% and 67%, while the corresponding values in experts were 72%, 69% and 71%, respectively. AI consultation significantly improved all trainee performance statistics. While fellows and experts showed significantly lower performance for "difficult" images, the performance of the AI algorithm was stable.
Conclusion: In this study, an AI algorithm outperformed endoscopy fellows and experts in the detection of villous atrophy on endoscopic still images. AI decision support significantly improved the performance of non-expert endoscopists. The stable performance on "difficult" images suggests a further positive add-on effect in challenging cases. KW - celiac disease KW - villous atrophy KW - endoscopy detection KW - artificial intelligence Y1 - 2023 U6 - https://doi.org/10.1016/j.gie.2023.01.006 PB - Elsevier ER - TY - JOUR A1 - Knoedler, Leonard A1 - Baecher, Helena A1 - Kauke-Navarro, Martin A1 - Prantl, Lukas A1 - Machens, Hans-Günther A1 - Scheuermann, Philipp A1 - Palm, Christoph A1 - Baumann, Raphael A1 - Kehrer, Andreas A1 - Panayi, Adriana C. A1 - Knoedler, Samuel T1 - Towards a Reliable and Rapid Automated Grading System in Facial Palsy Patients: Facial Palsy Surgery Meets Computer Science JF - Journal of Clinical Medicine N2 - Background: Reliable, time- and cost-effective, and clinician-friendly diagnostic tools are cornerstones in facial palsy (FP) patient management. Different automated FP grading systems have been developed, but they revealed persisting downsides such as insufficient accuracy and cost-intensive hardware. We aimed to overcome these barriers and programmed an automated grading system for FP patients utilizing the House and Brackmann scale (HBS). Methods: Image datasets of 86 patients seen at the Department of Plastic, Hand, and Reconstructive Surgery at the University Hospital Regensburg, Germany, between June 2017 and May 2021, were used to train the neural network and evaluate its accuracy. Nine facial poses per patient were analyzed by the algorithm. Results: The algorithm showed an accuracy of 100%. Oversampling did not result in altered outcomes, while the direct form displayed superior accuracy levels when compared to the modular classification form (n = 86; 100% vs. 99%). The Early Fusion technique was linked to improved accuracy outcomes in comparison to the Late Fusion and sequential method (n = 86; 100% vs. 96% vs. 97%). Conclusions: Our automated FP grading system combines high-level accuracy with cost- and time-effectiveness. Our algorithm may accelerate the grading process in FP patients and facilitate the FP surgeon's workflow. Y1 - 2022 U6 - https://doi.org/10.3390/jcm11174998 VL - 11 IS - 17 PB - MDPI CY - Basel ER - TY - INPR A1 - Rueckert, Tobias A1 - Rueckert, Daniel A1 - Palm, Christoph T1 - Methods and datasets for segmentation of minimally invasive surgical instruments in endoscopic images and videos: A review of the state of the art N2 - In the field of computer- and robot-assisted minimally invasive surgery, enormous progress has been made in recent years based on the recognition of surgical instruments in endoscopic images. Especially the determination of the position and type of the instruments is of great interest here. Current work involves both spatial and temporal information, with the idea that predicting the movement of surgical tools over time may improve the quality of the final segmentations. The provision of publicly available datasets has recently encouraged the development of new methods, mainly based on deep learning. In this review, we identify the datasets used for method development and evaluation and quantify their frequency of use in the literature. We further present an overview of the current state of research regarding the segmentation and tracking of minimally invasive surgical instruments in endoscopic images.
The paper focuses on methods that work purely visually, without markers of any kind attached to the instruments, taking into account both single-frame segmentation approaches and those involving temporal information. A discussion of the reviewed literature is provided, highlighting existing shortcomings and emphasizing the available potential for future developments. The publications considered were identified through the platforms Google Scholar, Web of Science, and PubMed. The search terms used were "instrument segmentation", "instrument tracking", "surgical tool segmentation", and "surgical tool tracking", resulting in 408 articles published between 2015 and 2022, of which 109 were included using systematic selection criteria. Y1 - 2023 U6 - https://doi.org/10.48550/arXiv.2304.13014 ER - TY - CHAP A1 - Mendel, Robert A1 - Rauber, David A1 - Palm, Christoph T1 - Exploring the Effects of Contrastive Learning on Homogeneous Medical Image Data T2 - Bildverarbeitung für die Medizin 2023: Proceedings, German Workshop on Medical Image Computing, July 2–4, 2023, Braunschweig N2 - We investigate contrastive learning in a multi-task learning setting classifying and segmenting early Barrett's cancer. How can contrastive learning be applied in a domain with few classes and low inter-class and inter-sample variance, potentially enabling image retrieval or image attribution? We introduce a data sampling strategy that mines per-lesion data for positive samples and keeps a queue of the recent projections as negative samples. We propose a masking strategy for the NT-Xent loss that keeps the negative set pure and removes samples from the same lesion. We show cohesion and uniqueness improvements of the proposed method in feature space. The introduction of the auxiliary objective does not affect the performance but adds the ability to indicate similarity between lesions. Therefore, the approach could enable downstream auto-documentation tasks on homogeneous medical image data. Y1 - 2023 U6 - https://doi.org/10.1007/978-3-658-41657-7 SP - 128 EP - 13 PB - Springer Vieweg CY - Wiesbaden ER - TY - GEN A1 - Zellmer, Stephan A1 - Rauber, David A1 - Probst, Andreas A1 - Weber, Tobias A1 - Nagl, Sandra A1 - Römmele, Christoph A1 - Schnoy, Elisabeth A1 - Palm, Christoph A1 - Messmann, Helmut A1 - Ebigbo, Alanna T1 - Verwendung künstlicher Intelligenz bei der Detektion der Papilla duodeni major T2 - Zeitschrift für Gastroenterologie N2 - Introduction: Endoscopic retrograde cholangiopancreatography (ERCP) is the gold standard for the diagnosis and treatment of diseases of the pancreatobiliary tract. However, it is technically very demanding and has a comparatively high complication rate. Aims: This feasibility study examines whether a deep learning algorithm can reliably detect the papilla and the ostium and could thus serve as a suitable assistance tool, particularly in training situations, for endoscopists with little experience. Methods: We considered a total of 606 image datasets from 65 patients, in which both the papilla duodeni major and the ostium were segmented. A neural network was then trained using a deep learning algorithm, and a 5-fold cross-validation was performed.
Results: In the 5-fold cross-validation on the 606 labeled datasets, an F1 score of 0.7908, a sensitivity of 0.7943 and a specificity of 0.9785 were achieved for the papilla class, and an F1 score of 0.5538, a sensitivity of 0.5094 and a specificity of 0.9970 for the ostium class (cf. [Tab. 1]). Averaged over both classes (papilla and ostium), the F1 score was 0.6673, the sensitivity 0.6519 and the specificity 0.9877 (cf. [Tab. 2]). Conclusion: In this feasibility study, the neural network was able to identify the papilla duodeni major with high sensitivity and very high specificity. For the detection of the ostium, the sensitivity was considerably lower. In the future, the neural network will be trained with more data, and we plan to apply the algorithm to videos as well. In the long term, a suitable assistance tool for ERCP could thus be established. KW - Künstliche Intelligenz Y1 - 2023 UR - https://www.thieme-connect.de/products/ejournals/abstract/10.1055/s-0043-1772000 U6 - https://doi.org/10.1055/s-0043-1772000 VL - 61 IS - 08 PB - Thieme CY - Stuttgart ER - TY - JOUR A1 - Hammer, Simone A1 - Nunes, Danilo Weber A1 - Hammer, Michael A1 - Zeman, Florian A1 - Akers, Michael A1 - Götz, Andrea A1 - Balla, Annika A1 - Doppler, Michael Christian A1 - Fellner, Claudia A1 - Da Platz Batista Silva, Natascha A1 - Thurn, Sylvia A1 - Verloh, Niklas A1 - Stroszczynski, Christian A1 - Wohlgemuth, Walter Alexander A1 - Palm, Christoph A1 - Uller, Wibke T1 - Deep learning-based differentiation of peripheral high-flow and low-flow vascular malformations in T2-weighted short tau inversion recovery MRI JF - Clinical Hemorheology and Microcirculation N2 - BACKGROUND: Differentiation of high-flow from low-flow vascular malformations (VMs) is crucial for the therapeutic management of this orphan disease. OBJECTIVE: A convolutional neural network (CNN) was evaluated for the differentiation of peripheral VMs on T2-weighted short tau inversion recovery (STIR) MRI. METHODS: 527 MRIs (386 low-flow and 141 high-flow VMs) were randomly divided into training, validation and test sets for this single-center study. 1) Results of the CNN's diagnostic performance were compared with those of two expert and four junior radiologists. 2) The influence of the CNN's prediction on the radiologists' performance and diagnostic certainty was evaluated. 3) The junior radiologists' performance after self-training was compared with that of the CNN. RESULTS: Compared with the expert radiologists, the CNN achieved similar accuracy (92% vs. 97%, p = 0.11), sensitivity (80% vs. 93%, p = 0.16) and specificity (97% vs. 100%, p = 0.50). In comparison to the junior radiologists, the CNN had a higher specificity and accuracy (97% vs. 80%, p < 0.001; 92% vs. 77%, p < 0.001). CNN assistance had no significant influence on their diagnostic performance and certainty. After self-training, the junior radiologists' specificity and accuracy improved and were comparable to those of the CNN. CONCLUSIONS: The diagnostic performance of the CNN for differentiating high-flow from low-flow VMs was comparable to that of expert radiologists. The CNN did not significantly improve the simulated daily practice of junior radiologists; self-training was more effective.
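Aside: the sensitivity, specificity, and accuracy figures quoted in the Hammer et al. record above follow directly from the confusion counts; a minimal sketch with invented counts (high-flow taken as the positive class, which is an assumption).

    # Invented confusion counts for high-flow (positive) vs. low-flow VMs.
    tp, fn = 28, 7   # high-flow correctly / incorrectly classified
    tn, fp = 75, 2   # low-flow correctly / incorrectly classified

    sensitivity = tp / (tp + fn)                # 0.80
    specificity = tn / (tn + fp)                # ~0.97
    accuracy = (tp + tn) / (tp + fn + tn + fp)  # ~0.92
    print(f"sens={sensitivity:.2f} spec={specificity:.2f} acc={accuracy:.2f}")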
KW - magnetic resonance imaging KW - deep learning KW - Vascular malformation Y1 - 2024 U6 - https://doi.org/10.3233/CH-232071 SP - 1 EP - 15 PB - IOS Press ET - Pre-press ER - TY - GEN A1 - Rückert, Tobias A1 - Rieder, Maximilian A1 - Rauber, David A1 - Xiao, Michel A1 - Humolli, Eg A1 - Feussner, Hubertus A1 - Wilhelm, Dirk A1 - Palm, Christoph T1 - Augmenting instrument segmentation in video sequences of minimally invasive surgery by synthetic smoky frames T2 - International Journal of Computer Assisted Radiology and Surgery KW - Surgical instrument segmentation KW - smoke simulation KW - unpaired image-to-image translation KW - robot-assisted surgery Y1 - 2023 U6 - https://doi.org/10.1007/s11548-023-02878-2 VL - 18 IS - Suppl 1 SP - S54 EP - S56 PB - Springer Nature ER - TY - JOUR A1 - Kolev, Kalin A1 - Kirchgeßner, Norbert A1 - Houben, Sebastian A1 - Csiszár, Agnes A1 - Rubner, Wolfgang A1 - Palm, Christoph A1 - Eiben, Björn A1 - Merkel, Rudolf A1 - Cremers, Daniel T1 - A variational approach to vesicle membrane reconstruction from fluorescence imaging JF - Pattern Recognition N2 - Biological applications like vesicle membrane analysis involve the precise segmentation of 3D structures in noisy volumetric data, obtained by techniques like magnetic resonance imaging (MRI) or laser scanning microscopy (LSM). Dealing with such data is a challenging task and requires robust and accurate segmentation methods. In this article, we propose a novel energy model for 3D segmentation fusing various cues like regional intensity subdivision, edge alignment and orientation information. The uniqueness of the approach lies in the definition of a new anisotropic regularizer, which accounts for the unbalanced slicing of the measured volume data, and in the generalization of an efficient numerical scheme for solving the arising minimization problem, based on linearization and fixed-point iteration. We show how the proposed energy model can be optimized globally by making use of recent continuous convex relaxation techniques. The accuracy and robustness of the presented approach are demonstrated by evaluating it on multiple real data sets and comparing it to alternative segmentation methods based on level sets. Although the proposed model is designed with a focus on the particular application at hand, it is general enough to be applied to a variety of different segmentation tasks. KW - 3D segmentation KW - Convex optimization KW - Vesicle membrane analysis KW - Fluorescence imaging KW - Dreidimensionale Bildverarbeitung KW - Bildsegmentierung KW - Konvexe Optimierung Y1 - 2011 U6 - https://doi.org/10.1016/j.patcog.2011.04.019 VL - 44 IS - 12 SP - 2944 EP - 2958 PB - Elsevier ER - TY - CHAP A1 - Palm, Christoph ED - Byrne, Michael F. ED - Parsa, Nasim ED - Greenhill, Alexandra T. ED - Chahal, Daljeet ED - Ahmad, Omer ED - Bagci, Ulas T1 - History, Core Concepts, and Role of AI in Clinical Medicine T2 - AI in Clinical Medicine: A Practical Guide for Healthcare Professionals N2 - The field of AI is characterized by robust promises, astonishing successes, and remarkable breakthroughs. AI will play a major role in all domains of clinical medicine, but the role of AI in relation to the physician is not yet completely determined. The term artificial intelligence, or AI, is broad, and several different terms are used in this context that must be organized and demystified. This chapter will review the key concepts and methods of AI and will introduce some of the different roles for AI in relation to the physician.
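Aside: the convexly relaxed segmentation energy behind the Kolev et al. record above can be written, in its generic form, as

    \min_{u\colon\Omega\to[0,1]} \; E(u)
      = \int_\Omega g(x)\,\lvert \nabla u(x)\rvert_{A(x)}\,\mathrm{d}x
      \;+\; \lambda \int_\Omega f(x)\,u(x)\,\mathrm{d}x,

where f encodes the regional intensity cue (e.g., a log-likelihood ratio), g the edge cue, and the anisotropic norm indexed by A(x) can compensate the unbalanced slicing; thresholding the global minimizer at any level in (0,1) yields a global optimum of the binary problem. This is the standard relaxation sketched from the abstract, not necessarily the paper's exact model.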
KW - artificial intelligence KW - healthcare Y1 - 2023 SN - 978-1-119-79064-8 U6 - https://doi.org/10.1002/9781119790686.ch5 SP - 49 EP - 55 PB - Wiley ET - 1. Aufl. ER - TY - JOUR A1 - Ruewe, Marc A1 - Eigenberger, Andreas A1 - Klein, Silvan A1 - von Riedheim, Antonia A1 - Gugg, Christine A1 - Prantl, Lukas A1 - Palm, Christoph A1 - Weiherer, Maximilian A1 - Zeman, Florian A1 - Anker, Alexandra T1 - Precise Monitoring of Returning Sensation in Digital Nerve Lesions by 3-D Imaging: A Proof-of-Concept Study JF - Plastic and Reconstructive Surgery N2 - Digital nerve lesions result in a loss of tactile sensation reflected by an anesthetic area (AA) at the radial or ulnar aspect of the respective digit. Yet, available tools to monitor the recovery of the tactile sense have been criticized for their lack of validity. However, the precise quantification of AA dynamics by three-dimensional (3-D) imaging could serve as an accurate surrogate to monitor recovery following digital nerve repair. For validation, AAs were marked on the digits of healthy volunteers to simulate the AA of an impaired cutaneous innervation. Three-dimensional models were composed from raw images that had been acquired with a 3-D camera (Vectra H2) to precisely quantify the relative AA for each digit (3-D models, n = 80). Operator properties varied regarding individual experience in 3-D imaging and image processing. Additionally, the concept was applied in a clinical case study. Images taken by experienced photographers were rated as having better quality (p < 0.001) and needed less processing time (p = 0.020). Quantification of the relative AA was not altered significantly by the experience level of either the photographer (p = 0.425) or the image assembler (p = 0.749). The proposed concept allows precise and reliable surface quantification of digits and can be performed consistently without relevant distortion by lack of examiner experience. Routine 3-D imaging of the AA has great potential to provide visual evidence of the various returning states of sensation and to convert sensory nerve recovery into a metric variable with high responsiveness to temporal progress. KW - 3D imaging Y1 - 2023 U6 - https://doi.org/10.1097/PRS.0000000000010456 SN - 1529-4242 VL - 152 IS - 4 SP - 670e EP - 674e PB - Lippincott Williams & Wilkins CY - Philadelphia, Pa. ER - TY - GEN A1 - Scheppach, Markus A1 - Rauber, David A1 - Stallhofer, Johannes A1 - Muzalyova, Anna A1 - Otten, Vera A1 - Manzeneder, Carolin A1 - Schwamberger, Tanja A1 - Wanzl, Julia A1 - Schlottmann, Jakob A1 - Tadic, Vidan A1 - Probst, Andreas A1 - Schnoy, Elisabeth A1 - Römmele, Christoph A1 - Fleischmann, Carola A1 - Meinikheim, Michael A1 - Miller, Silvia A1 - Märkl, Bruno A1 - Palm, Christoph A1 - Messmann, Helmut A1 - Ebigbo, Alanna T1 - Performance comparison of a deep learning algorithm with endoscopists in the detection of duodenal villous atrophy (VA) T2 - Endoscopy N2 - Aims: VA is an endoscopic finding of celiac disease (CD), which can easily be missed if the pretest probability is low. In this study, we aimed to develop an artificial intelligence (AI) algorithm for the detection of villous atrophy on endoscopic images. Methods: 858 images from 182 patients with VA and 846 images from 323 patients with normal duodenal mucosa were used for training and internal validation of an AI algorithm (ResNet18). A separate dataset was used for external validation, as well as for determining the detection performance of experts, trainees, and trainees with AI support.
According to the AI consultation distribution, images were stratified into "easy" and "difficult". Results: Internal validation showed 82%, 85% and 84% for sensitivity, specificity and accuracy, respectively. External validation showed 90%, 76% and 84%. The algorithm was significantly more sensitive and accurate than trainees, trainees with AI support and experts in endoscopy. AI support in trainees was associated with significantly improved performance. While all endoscopists showed significantly lower detection for "difficult" images, AI performance remained stable. Conclusions: The algorithm outperformed trainees and experts in sensitivity and accuracy for VA detection. The significant improvement with AI support suggests a potential clinical benefit. Stable performance of the algorithm in "easy" and "difficult" test images may indicate an advantage in macroscopically challenging cases. Y1 - 2023 U6 - https://doi.org/10.1055/s-0043-1765421 VL - 55 IS - S02 PB - Thieme ER - TY - JOUR A1 - Mang, Andreas A1 - Schnabel, Julia A. A1 - Crum, William R. A1 - Modat, Marc A1 - Camara-Rey, Oscar A1 - Palm, Christoph A1 - Caseiras, Gisele Brasil A1 - Jäger, H. Rolf A1 - Ourselin, Sébastien A1 - Buzug, Thorsten M. A1 - Hawkes, David J. T1 - Consistency of parametric registration in serial MRI studies of brain tumor progression JF - International Journal of Computer Assisted Radiology and Surgery N2 - Object: The consistency of parametric registration in multi-temporal magnetic resonance (MR) imaging studies was evaluated. Materials and methods: Serial MRI scans of adult patients with a brain tumor (glioma) were aligned by parametric registration. The performance of low-order spatial alignment (6/9/12 degrees of freedom) of differently weighted 3D serial MR images is evaluated. A registration protocol for the alignment of all images to one reference coordinate system at baseline is presented. Registration results were evaluated for both multimodal intra-timepoint and mono-modal multi-temporal registration. The latter case might present a challenge to automatic intensity-based registration algorithms due to ill-defined correspondences. The performance of our algorithm was assessed by testing the inverse registration consistency. Four different similarity measures were evaluated to assess consistency. Results: Careful visual inspection suggests that the images are well aligned, but their consistency may be imperfect. Sub-voxel inconsistency within the brain was found for all similarity measures used for parametric multi-temporal registration. T1-weighted images were most reliable for establishing spatial correspondence between different timepoints. Conclusions: The parametric registration algorithm is feasible for use in this application. The sub-voxel resolution mean displacement error of the registration transformations demonstrates that the algorithm converges to an almost identical solution for forward and reverse registration. KW - Inverse registration consistency KW - Parametric serial MR image registration KW - Tumor disease progression KW - Kernspintomografie KW - Registrierung KW - Hirntumor Y1 - 2008 U6 - https://doi.org/10.1007/s11548-008-0234-5 VL - 3 IS - 3-4 SP - 201 EP - 211 ER - TY - CHAP A1 - Chang, Ching-Sheng A1 - Lin, Jin-Fa A1 - Lee, Ming-Ching A1 - Palm, Christoph ED - Tolxdorff, Thomas ED - Deserno, Thomas M. ED - Handels, Heinz ED - Maier, Andreas ED - Maier-Hein, Klaus H. ED - Palm, Christoph T1 - Semantic Lung Segmentation Using Convolutional Neural Networks T2 - Bildverarbeitung für die Medizin 2020.
Algorithmen - Systeme - Anwendungen. Proceedings des Workshops vom 15. bis 17. März 2020 in Berlin N2 - Chest X-Ray (CXR) images as part of a non-invasive diagnosis method are commonly used in today's medical workflow. In traditional methods, physicians usually use their experience to interpret CXR images; however, there is large interobserver variance. Computer vision may be used as a standard for assisted diagnosis. In this study, we applied an encoder-decoder neural network architecture for automatic lung region detection. We compared a three-class approach (left lung, right lung, background) and a two-class approach (lung, background). The differentiation of left and right lung as a direct result of semantic segmentation based on neural nets, rather than as post-processing of a lung-background segmentation, is done here for the first time. Our evaluation was done on the NIH Chest X-ray dataset, from which 1736 images were extracted and manually annotated. We achieved 94.9% mIoU and 92% mIoU as segmentation quality measures for the two-class model and the three-class model, respectively. This result is very promising for the segmentation of lung regions with the simultaneous classification of left and right lung in mind. KW - Neuronales Netz KW - Segmentierung KW - Brustkorb KW - Deep Learning KW - Encoder-Decoder Network KW - Chest X-Ray Y1 - 2020 SN - 978-3-658-29266-9 U6 - https://doi.org/10.1007/978-3-658-29267-6_17 SP - 75 EP - 80 PB - Springer Vieweg CY - Wiesbaden ER - TY - CHAP A1 - Weber, Joachim A1 - Brawanski, Alexander A1 - Palm, Christoph T1 - Parallelization of FSL-Fast segmentation of MRI brain data T2 - 58. Jahrestagung der Deutschen Gesellschaft für Medizinische Informatik, Biometrie und Epidemiologie e.V. (GMDS 2013), Lübeck, 01.-05.09.2013 Y1 - 2013 U6 - https://doi.org/10.3205/13gmds261 N1 - Meeting Abstract IS - DocAbstr. 329 PB - German Medical Science GMS Publishing House CY - Düsseldorf ER - TY - JOUR A1 - Neuschaefer-Rube, C. A1 - Lehmann, Thomas M. A1 - Palm, Christoph A1 - Bredno, J. A1 - Klajman, S. A1 - Spitzer, Klaus T1 - 3D-Visualisierung glottaler Abduktionsbewegungen JF - Aktuelle phoniatrisch-pädaudiologische Aspekte Y1 - 2001 SN - 3-922766-76-5 VL - 2001/2002 IS - 9 SP - 58 EP - 61 PB - Median ER - TY - JOUR A1 - Palm, Christoph A1 - Dehnhardt, Markus A1 - Vieten, Andrea A1 - Pietrzyk, Uwe A1 - Bauer, Andreas A1 - Zilles, Karl T1 - 3D rat brain tumors JF - Naunyn-Schmiedebergs Archives of Pharmacology Y1 - 2005 VL - 371 IS - R103 ER - TY - GEN A1 - Scheppach, Markus W. A1 - Mendel, Robert A1 - Probst, Andreas A1 - Meinikheim, Michael A1 - Palm, Christoph A1 - Messmann, Helmut A1 - Ebigbo, Alanna T1 - Artificial Intelligence (AI) – assisted vessel and tissue recognition during third space endoscopy (Smart ESD) T2 - Zeitschrift für Gastroenterologie N2 - Clinical setting: Third space procedures such as endoscopic submucosal dissection (ESD) and peroral endoscopic myotomy (POEM) are complex minimally invasive techniques with an elevated risk for operator-dependent adverse events such as bleeding and perforation. This risk arises from accidental dissection into the muscle layer or through submucosal blood vessels, as the submucosal cutting plane within the expanding resection site is not always apparent. Deep learning algorithms have shown considerable potential for the detection and characterization of gastrointestinal lesions. So-called AI clinical decision support solutions (AI-CDSS) are commercially available for polyp detection during colonoscopy.
Until now, these computer programs have concentrated on diagnostics, whereas an AI-CDSS for interventional endoscopy has not yet been introduced. We aimed to develop an AI-CDSS ("Smart ESD") for real-time intra-procedural detection and delineation of blood vessels, tissue structures and endoscopic instruments during third-space endoscopic procedures. Characteristics of Smart ESD: An AI-CDSS was developed that delineates blood vessels, tissue structures and endoscopic instruments during third-space endoscopy in real-time. The output can be displayed as an overlay over the endoscopic image with different modes of visualization, such as a color-coded semitransparent area overlay or border tracing (demonstration video). In this way, the optimal layer for dissection, which lies just above or directly at the muscle layer depending on the applied technique (ESD or POEM), can be visualized. Furthermore, relevant blood vessels (thickness > 1 mm) are delineated. Spatial proximity between the electrosurgical knife and a blood vessel triggers a warning signal. With this guidance system, inadvertent dissection through blood vessels could be averted. Technical specifications: A DeepLabv3+ neural network architecture with KSAC and a 101-layer ResNeSt backbone was used for the development of Smart ESD. It was trained and validated with 2565 annotated still images from 27 full-length third-space endoscopic videos. The annotation classes were blood vessel, submucosal layer, muscle layer, electrosurgical knife and endoscopic instrument shaft. A test on a separate data set yielded an intersection over union (IoU) of 68%, a Dice score of 80% and a pixel accuracy of 87%, demonstrating a high overlap between expert and AI segmentation. Further experiments on standardized video clips showed a mean vessel detection rate (VDR) of 85%, with values of 92%, 70% and 95% for POEM, rectal ESD and esophageal ESD, respectively. False positive measurements occurred 0.75 times per minute. Seven out of nine vessels that caused intraprocedural bleeding were caught by the algorithm, as well as both vessels that required hemostasis via hemostatic forceps. Future perspectives: Smart ESD performed well for vessel and tissue detection and delineation on still images, as well as on video clips. During a live demonstration in the endoscopy suite, the clinical applicability of the innovation was examined. The lag time for processing of the live endoscopic image was too short to be visually detectable for the interventionist. Even though the algorithm could not be applied during actual dissection by the interventionist, Smart ESD appeared readily deployable during visual assessment by ESD experts. Therefore, we plan to conduct a clinical trial in order to obtain CE certification of the algorithm. This new technology may improve procedural safety and speed, as well as the training of modern minimally invasive endoscopic resection techniques. KW - Artificial Intelligence KW - Medical Image Computing KW - Endoscopy KW - Bildgebendes Verfahren KW - Medizin KW - Künstliche Intelligenz KW - Endoskopie Y1 - 2022 U6 - https://doi.org/10.1055/s-0042-1755110 VL - 60 IS - 08 PB - Georg Thieme Verlag CY - Stuttgart ER - TY - GEN A1 - Roser, D. A. A1 - Meinikheim, Michael A1 - Mendel, Robert A1 - Palm, Christoph A1 - Probst, Andreas A1 - Muzalyova, A. A1 - Scheppach, Markus W. A1 - Nagl, S. A1 - Schnoy, Elisabeth A1 - Römmele, Christoph A1 - Schulz, D. A1 - Schlottmann, Jakob A1 - Prinz, Friederike A1 - Rauber, David A1 - Rückert, Tobias A1 - Matsumura, T.
A1 - Fernandez-Esparrach, G. A1 - Parsa, N. A1 - Byrne, M. A1 - Messmann, Helmut A1 - Ebigbo, Alanna T1 - Human-Computer Interaction: Impact of Artificial Intelligence on the diagnostic confidence of endoscopists assessing videos of Barrett’s esophagus T2 - Endoscopy N2 - Aims Human-computer interactions (HCI) may have a relevant impact on the performance of Artificial Intelligence (AI). Studies show that although endoscopists assessing Barrett’s esophagus (BE) with AI improve their performance significantly, they do not reach the stand-alone performance of the AI. One aspect of HCI is the impact of AI on the degree of certainty and confidence displayed by the endoscopist. Indirectly, diagnostic confidence when using AI may be linked to trust in and acceptance of AI. In a BE video study, we aimed to understand the impact of AI on the diagnostic confidence of endoscopists and the possible correlation with diagnostic performance. Methods 22 endoscopists from 12 centers with varying levels of BE experience reviewed 96 standardized endoscopy videos. Endoscopists were categorized into experts and non-experts and randomized into two arms: Arm A assessed the videos first without AI and then with AI, while Arm B assessed the videos in the opposite order. Evaluators were tasked with identifying BE-related neoplasia and rating their confidence with and without AI on a scale from 0 to 9. Results The utilization of AI in Arm A (without AI first, with AI second) significantly elevated confidence levels for experts and non-experts (7.1 to 8.0 and 6.1 to 6.6, respectively). Only non-experts benefitted from AI, with a significant increase in accuracy (68.6% to 75.5%). Interestingly, while the confidence levels of experts without AI were higher than those of non-experts with AI, there was no significant difference in accuracy between these two groups (71.3% vs. 75.5%). In Arm B (with AI first, without AI second), experts and non-experts experienced a significant reduction in confidence (7.6 to 7.1 and 6.4 to 6.2, respectively), while maintaining consistent accuracy levels (71.8% to 71.8% and 67.5% to 67.1%, respectively). Conclusions AI significantly enhanced confidence levels for both expert and non-expert endoscopists. Endoscopists felt significantly more uncertain in their assessments without AI. Furthermore, experts with or without AI consistently displayed higher confidence levels than non-experts with AI, despite comparable outcomes. These findings underscore the possible role of AI in improving diagnostic confidence during endoscopic assessment. Y1 - 2024 U6 - https://doi.org/10.1055/s-0044-1782859 SN - 1438-8812 VL - 56 IS - S 02 SP - 79 PB - Georg Thieme Verlag ER - TY - GEN A1 - Meinikheim, Michael A1 - Mendel, Robert A1 - Scheppach, Markus W. A1 - Probst, Andreas A1 - Prinz, Friederike A1 - Schwamberger, Tanja A1 - Schlottmann, Jakob A1 - Gölder, Stefan Karl A1 - Walter, Benjamin A1 - Steinbrück, Ingo A1 - Palm, Christoph A1 - Messmann, Helmut A1 - Ebigbo, Alanna T1 - INFLUENCE OF AN ARTIFICIAL INTELLIGENCE (AI) BASED DECISION SUPPORT SYSTEM (DSS) ON THE DIAGNOSTIC PERFORMANCE OF NON-EXPERTS IN BARRETT'S ESOPHAGUS RELATED NEOPLASIA (BERN) T2 - Endoscopy N2 - Aims Barrett's esophagus-related neoplasia (BERN) is difficult to detect and characterize during endoscopy, even for expert endoscopists.
We aimed to assess the add-on effect of an Artificial Intelligence (AI) algorithm (Barrett-Ampel) as a decision support system (DSS) for non-expert endoscopists in the evaluation of Barrett’s esophagus (BE) and BERN. Methods Twelve videos with multimodal imaging (white light (WL), narrow-band imaging (NBI), and texture and color enhancement imaging (TXI)) of histologically confirmed BE and BERN were assessed by expert and non-expert endoscopists. For each video, endoscopists were asked to identify the area of BERN and decide on the biopsy spot. The videos were also assessed by the AI algorithm, and regions of BERN were highlighted in real time by a transparent overlay. Finally, endoscopists were shown the AI videos and asked to either confirm or change their initial decision based on the AI support. Results Barrett-Ampel correctly identified all areas of BERN, irrespective of the imaging modality (WL, NBI, TXI), but misinterpreted two inflammatory lesions (accuracy = 75%). Expert endoscopists had a similar performance (accuracy = 70.8%), while non-experts had an accuracy of 58.3%. When AI was implemented as a DSS, non-expert endoscopists improved their diagnostic accuracy to 75%. Conclusions AI may have the potential to support non-expert endoscopists in the assessment of videos of BE and BERN. Limitations of this study include the low number of videos used. Randomized clinical trials in a real-life setting should be performed to confirm these results. KW - Artificial Intelligence KW - Barrett's Esophagus KW - Speiseröhrenkrankheit KW - Künstliche Intelligenz KW - Diagnose Y1 - 2022 U6 - https://doi.org/10.1055/s-00000012 VL - 54 IS - S 01 SP - S39 PB - Thieme ER - TY - JOUR A1 - Meinikheim, Michael A1 - Mendel, Robert A1 - Palm, Christoph A1 - Probst, Andreas A1 - Muzalyova, Anna A1 - Scheppach, Markus Wolfgang A1 - Nagl, Sandra A1 - Schnoy, Elisabeth A1 - Römmele, Christoph A1 - Schulz, Dominik Andreas Helmut Otto A1 - Schlottmann, Jakob A1 - Prinz, Friederike A1 - Rauber, David A1 - Rückert, Tobias A1 - Matsumura, Tomoaki A1 - Fernández-Esparrach, Glòria A1 - Parsa, Nasim A1 - Byrne, Michael F A1 - Messmann, Helmut A1 - Ebigbo, Alanna T1 - Effect of AI on performance of endoscopists to detect Barrett neoplasia: A Randomized Tandem Trial JF - Endoscopy N2 - Background and study aims To evaluate the effect of an AI-based clinical decision support system (AI) on the performance and diagnostic confidence of endoscopists during the assessment of Barrett's esophagus (BE). Patients and Methods Ninety-six standardized endoscopy videos were assessed by 22 endoscopists from 12 different centers with varying degrees of BE experience. The assessment was randomized into two video sets: Group A (review first without AI and second with AI) and Group B (review first with AI and second without AI). Endoscopists were required to evaluate each video for the presence of Barrett's esophagus-related neoplasia (BERN) and then decide on a spot for a targeted biopsy. After the second assessment, they were allowed to change their clinical decision and confidence level. Results AI had a standalone sensitivity, specificity, and accuracy of 92.2%, 68.9%, and 81.6%, respectively. Without AI, BE experts had an overall sensitivity, specificity, and accuracy of 83.3%, 58.1%, and 71.5%, respectively.
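The sensitivity, specificity, and accuracy figures reported in this and the surrounding entries are plain confusion-matrix ratios. A minimal sketch (the counts are hypothetical, purely for illustration):

def diagnostic_metrics(tp, fp, tn, fn):
    # Standard ratios over true/false positives and negatives.
    sensitivity = tp / (tp + fn)                  # true-positive rate
    specificity = tn / (tn + fp)                  # true-negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Hypothetical counts, not taken from any of the cited studies:
print(diagnostic_metrics(tp=90, fp=28, tn=62, fn=12))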
With AI, BE nonexperts showed a significant improvement in sensitivity and specificity when videos were assessed a second time with AI (sensitivity 69.7% (95% CI, 65.2% - 74.2%) to 78.0% (95% CI, 74.0% - 82.0%); specificity 67.3% (95% CI, 62.5% - 72.2%) to 72.7% (95% CI, 68.2% - 77.3%)). In addition, the diagnostic confidence of BE nonexperts improved significantly with AI. Conclusion BE nonexperts benefitted significantly from the additional AI support. BE experts and nonexperts remained below the standalone performance of the AI, suggesting that there may be other factors influencing endoscopists to follow or discard AI advice. Y1 - 2024 U6 - https://doi.org/10.1055/a-2296-5696 SN - 0013-726X N1 - Accepted Manuscript PB - Georg Thieme Verlag ER - TY - JOUR A1 - Scheppach, Markus W. A1 - Mendel, Robert A1 - Probst, Andreas A1 - Meinikheim, Michael A1 - Palm, Christoph A1 - Messmann, Helmut A1 - Ebigbo, Alanna T1 - ARTIFICIAL INTELLIGENCE (AI) – ASSISTED VESSEL AND TISSUE RECOGNITION IN THIRD-SPACE ENDOSCOPY JF - Endoscopy N2 - Aims Third-space endoscopy procedures such as endoscopic submucosal dissection (ESD) and peroral endoscopic myotomy (POEM) are complex interventions with an elevated risk of operator-dependent adverse events, such as intra-procedural bleeding and perforation. We aimed to design an artificial intelligence clinical decision support solution (AI-CDSS, “Smart ESD”) for the detection and delineation of vessels, tissue structures, and instruments during third-space endoscopy procedures. Methods Twelve full-length third-space endoscopy videos were extracted from the Augsburg University Hospital database. A total of 1686 frames were annotated for the following categories: submucosal layer, blood vessels, electrosurgical knife, and endoscopic instrument. A DeepLabv3+ neural network with a 101-layer ResNet backbone was trained and validated internally. Finally, the ability of the AI system to detect visible vessels during ESD and POEM was determined on 24 separate video clips of 7 to 46 seconds duration, showing 33 predefined vessels. These video clips were also assessed by an expert in third-space endoscopy. Results Smart ESD showed a vessel detection rate (VDR) of 93.94%, while an average of 1.87 false positive signals were recorded per minute. The VDR of the expert endoscopist was 90.1%, with no false positive findings. On the internal validation data set using still images, the AI system demonstrated an intersection over union (IoU), mean Dice score, and pixel accuracy of 63.47%, 76.18%, and 86.61%, respectively. Conclusions This is the first AI-CDSS aiming to mitigate operator-dependent limitations during third-space endoscopy. Further clinical trials are underway to better understand the role of AI in such procedures. KW - Artificial Intelligence KW - Third-Space Endoscopy KW - Smart ESD Y1 - 2022 U6 - https://doi.org/10.1055/s-0042-1745037 VL - 54 IS - S01 SP - S175 PB - Thieme ER - TY - CHAP A1 - Rueckert, Tobias A1 - Rieder, Maximilian A1 - Feussner, Hubertus A1 - Wilhelm, Dirk A1 - Rueckert, Daniel A1 - Palm, Christoph ED - Maier, Andreas ED - Deserno, Thomas M.
ED - Handels, Heinz ED - Maier-Hein, Klaus ED - Palm, Christoph ED - Tolxdorff, Thomas T1 - Smoke Classification in Laparoscopic Cholecystectomy Videos Incorporating Spatio-temporal Information T2 - Bildverarbeitung für die Medizin 2024: Proceedings, German Workshop on Medical Image Computing, March 10-12, 2024, Erlangen N2 - Heavy smoke development represents an important challenge for operating physicians during laparoscopic procedures and can potentially affect the success of an intervention due to reduced visibility and orientation. Reliable and accurate recognition of smoke is therefore a prerequisite for the use of downstream systems such as automated smoke evacuation systems. Current approaches distinguish between smoke-free and smoke-affected frames but often ignore the temporal context inherent in endoscopic video data. In this work, we therefore present a method that utilizes the pixel-wise displacement between randomly sampled images and their preceding frames, determined using an optical flow algorithm, and provides the transformed magnitude of this displacement as an additional input to the network. Further, we incorporate the temporal context at evaluation time by applying an exponential moving average to the estimated class probabilities of the model output to obtain more stable and robust results over time. We evaluate our method on two convolution-based architectures and one state-of-the-art transformer architecture and show improvements in the classification results over a baseline approach, regardless of the network used. Y1 - 2024 U6 - https://doi.org/10.1007/978-3-658-44037-4_78 SP - 298 EP - 303 PB - Springer CY - Wiesbaden ER - TY - GEN A1 - Römmele, Christoph A1 - Mendel, Robert A1 - Rauber, David A1 - Rückert, Tobias A1 - Byrne, Michael F. A1 - Palm, Christoph A1 - Messmann, Helmut A1 - Ebigbo, Alanna T1 - Endoscopic Diagnosis of Eosinophilic Esophagitis Using a Deep Learning Algorithm T2 - Endoscopy N2 - Aims Eosinophilic esophagitis (EoE) is easily missed during endoscopy, either because physicians are not familiar with its endoscopic features or because the morphologic changes are too subtle. In this preliminary paper, we present the first attempt to detect EoE in endoscopic white light (WL) images using a deep learning network (EoE-AI). Methods A total of 401 WL images of eosinophilic esophagitis and 871 WL images of normal esophageal mucosa were evaluated. All images were assessed for the Endoscopic Reference Score (EREFS) (edema, rings, exudates, furrows, strictures). Images with strictures were excluded. EoE was defined as the presence of at least 15 eosinophils per high power field on biopsy. A convolutional neural network based on the ResNet architecture with several five-fold cross-validation runs was used. Adding auxiliary EREFS-classification branches to the neural network allowed the inclusion of the scores as optimization criteria during training. EoE-AI was evaluated for sensitivity, specificity, and F1-score. In addition, two human endoscopists evaluated the images. Results EoE-AI showed a mean sensitivity, specificity, and F1-score of 0.759, 0.976, and 0.834, respectively, averaged over the five distinct cross-validation runs. With the EREFS-augmented architecture, a mean sensitivity, specificity, and F1-score of 0.848, 0.945, and 0.861, respectively, could be demonstrated. In comparison, the two human endoscopists had an average sensitivity, specificity, and F1-score of 0.718, 0.958, and 0.793.
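The EREFS-augmented architecture described in this entry attaches auxiliary classification branches to a shared backbone so that the EREFS findings can act as additional optimization criteria. A minimal PyTorch sketch of that general idea (layer sizes, head count, and all names are assumptions, not the published EoE-AI model):

import torch
import torch.nn as nn
from torchvision.models import resnet50

class AuxiliaryHeadNet(nn.Module):
    # Shared ResNet backbone with one main head (EoE vs. normal) and
    # one auxiliary head per EREFS-style finding (hypothetical setup).
    def __init__(self, num_aux_heads=4):
        super().__init__()
        backbone = resnet50(weights=None)
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()  # expose the pooled features
        self.backbone = backbone
        self.main_head = nn.Linear(feat_dim, 2)
        self.aux_heads = nn.ModuleList(
            nn.Linear(feat_dim, 2) for _ in range(num_aux_heads)
        )

    def forward(self, x):
        features = self.backbone(x)
        return self.main_head(features), [h(features) for h in self.aux_heads]

model = AuxiliaryHeadNet()
main_logits, aux_logits = model(torch.randn(1, 3, 224, 224))

During training, the auxiliary logits would enter the total loss alongside the main objective, for example as a weighted sum of cross-entropy terms.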
Conclusions To the best of our knowledge, this is the first application of deep learning to endoscopic images of EoE, which were additionally assessed after augmentation with the EREFS score. The next step is the evaluation of EoE-AI using an external dataset. We then plan to assess the EoE-AI tool on endoscopic videos, and also in real time. This preliminary work is encouraging regarding the ability of AI to enhance physician detection of EoE, and potentially to do a true “optical biopsy,” but more work is needed. KW - Eosinophilic Esophagitis KW - Endoscopy KW - Deep Learning Y1 - 2021 U6 - https://doi.org/10.1055/s-0041-1724274 VL - 53 IS - S 01 PB - Georg Thieme Verlag CY - Stuttgart ER - TY - GEN A1 - Scheppach, Markus W. A1 - Rauber, David A1 - Mendel, Robert A1 - Palm, Christoph A1 - Byrne, Michael F. A1 - Messmann, Helmut A1 - Ebigbo, Alanna T1 - Detection Of Celiac Disease Using A Deep Learning Algorithm T2 - Endoscopy N2 - Aims Celiac disease (CD) is a complex condition caused by an autoimmune reaction to ingested gluten. Due to its polymorphic manifestation and subtle endoscopic presentation, the diagnosis is difficult and thus the disorder is underreported. We aimed to use deep learning to identify celiac disease on endoscopic images of the small bowel. Methods Patients with small intestinal histology compatible with CD (MARSH classification I-III) were extracted retrospectively from the database of Augsburg University Hospital. They were compared to patients with no clinical signs of CD and histologically normal small intestinal mucosa. In a first step, MARSH III and normal small intestinal mucosa were differentiated with the help of a deep learning algorithm. For this, the endoscopic white light images were divided into five equal-sized subsets. We avoided splitting the images of one patient into several subsets. A ResNet-50 model was trained with the images from four subsets and then validated with the remaining subset. This process was repeated for each subset, such that each subset was validated once. Sensitivity, specificity, and the harmonic mean (F1-score) were determined for the algorithm. Results The algorithm showed values of 0.83, 0.88, and 0.84 for sensitivity, specificity, and F1-score, respectively. Further data showing a comparison between the detection rate of the AI model and that of experienced endoscopists will be available at the time of the upcoming conference. Conclusions We present the first clinical report on the use of a deep learning algorithm for the detection of celiac disease using endoscopic images. Further evaluation on an external data set, as well as in the detection of CD in real time, will follow. However, this work at least suggests that AI can assist endoscopists in the endoscopic diagnosis of CD, and ultimately may be able to do a true optical biopsy in real time. KW - Celiac Disease KW - Deep Learning Y1 - 2021 U6 - https://doi.org/10.1055/s-0041-1724970 N1 - Digital poster exhibition VL - 53 IS - S 01 PB - Georg Thieme Verlag CY - Stuttgart ER - TY - JOUR A1 - Ebigbo, Alanna A1 - Mendel, Robert A1 - Probst, Andreas A1 - Meinikheim, Michael A1 - Byrne, Michael F. A1 - Messmann, Helmut A1 - Palm, Christoph T1 - Multimodal imaging for detection and segmentation of Barrett’s esophagus-related neoplasia using artificial intelligence JF - Endoscopy N2 - The early diagnosis of cancer in Barrett’s esophagus is crucial for improving the prognosis. However, identifying Barrett’s esophagus-related neoplasia (BERN) is challenging, even for experts [1].
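The patient-wise five-fold split described in the celiac disease entry above (no patient's images appear in more than one subset) corresponds to grouped cross-validation. A minimal scikit-learn sketch with hypothetical file names and patient IDs:

from sklearn.model_selection import GroupKFold

images = [f"frame_{i:03d}.png" for i in range(10)]  # hypothetical file names
patients = [0, 0, 1, 1, 2, 2, 3, 3, 4, 4]           # patient ID per image

# GroupKFold keeps all images of one patient on the same side of each split.
for train_idx, val_idx in GroupKFold(n_splits=5).split(images, groups=patients):
    print("validated patients:", sorted({patients[i] for i in val_idx}))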
Four-quadrant biopsies may improve the detection of neoplasia, but they can be associated with sampling errors. The application of artificial intelligence (AI) to the assessment of Barrett’s esophagus could improve the diagnosis of BERN, and this has been demonstrated in both preclinical and clinical studies [2] [3]. In this video demonstration, we show the accurate detection and delineation of BERN in two patients ([Video 1]). In part 1, the AI system detects a mucosal cancer about 20 mm in size and accurately delineates the lesion in both white-light and narrow-band imaging. In part 2, a small island of BERN with high-grade dysplasia is detected and delineated in white-light, narrow-band, and texture and color enhancement imaging. The video shows the results using a transparent overlay of the mucosal cancer in real time as well as a full segmentation preview. Additionally, the optical flow allows for the assessment of endoscope movement, which is inversely related to the reliability of the AI prediction. We demonstrate that multimodal imaging can be applied to the AI-assisted detection and segmentation of even small focal lesions in real time. KW - Video KW - Artificial Intelligence KW - Multimodal Imaging Y1 - 2022 U6 - https://doi.org/10.1055/a-1704-7885 VL - 54 IS - 10 PB - Georg Thieme Verlag CY - Stuttgart ET - E-Video ER - TY - INPR A1 - Mendel, Robert A1 - Rueckert, Tobias A1 - Wilhelm, Dirk A1 - Rueckert, Daniel A1 - Palm, Christoph T1 - Motion-Corrected Moving Average: Including Post-Hoc Temporal Information for Improved Video Segmentation N2 - Real-time computational speed and a high degree of precision are requirements for computer-assisted interventions. Applying a segmentation network to a medical video processing task can introduce significant inter-frame prediction noise. Existing approaches can reduce inconsistencies by including temporal information but often impose requirements on the architecture or dataset. This paper proposes a method to include temporal information in any segmentation model and, thus, a technique to improve video segmentation performance without alterations during training or additional labeling. With Motion-Corrected Moving Average, we refine the exponential moving average between the current and previous predictions. Using optical flow to estimate the movement between consecutive frames, we can shift the prior term in the moving-average calculation to align with the geometry of the current frame. The optical flow calculation does not require the output of the model and can therefore be performed in parallel, leading to no significant runtime penalty for our approach. We evaluate our approach on two publicly available segmentation datasets and two proprietary endoscopic datasets and show improvements over a baseline approach. KW - Deep Learning KW - Video KW - Segmentation Y1 - 2024 U6 - https://doi.org/10.48550/arXiv.2403.03120 ER - TY - CHAP A1 - Mendel, Robert A1 - Ebigbo, Alanna A1 - Probst, Andreas A1 - Messmann, Helmut A1 - Palm, Christoph T1 - Barrett’s Esophagus Analysis Using Convolutional Neural Networks T2 - Bildverarbeitung für die Medizin 2017; Algorithmen - Systeme - Anwendungen. Proceedings des Workshops vom 12. bis 14. März 2017 in Heidelberg N2 - We propose an automatic approach for early detection of adenocarcinoma in the esophagus. High-definition endoscopic images (50 cancer, 50 Barrett) are partitioned into a dataset containing approximately equal amounts of patches showing cancerous and non-cancerous regions.
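The Motion-Corrected Moving Average entry above combines an exponential moving average over per-frame predictions with an optical-flow shift that aligns the previous average to the current frame. A minimal OpenCV sketch of one update step under that reading (function name, parameters, and the smoothing factor are assumptions, not the published implementation):

import cv2
import numpy as np

def mcma_update(prev_avg, prev_gray, cur_gray, cur_prob, alpha=0.8):
    # prev_avg and cur_prob are float32 per-pixel class-probability maps.
    # Backward optical flow: for each pixel of the current frame, the
    # displacement to its corresponding position in the previous frame.
    flow = cv2.calcOpticalFlowFarneback(cur_gray, prev_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = cur_gray.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    # Warp the previous moving average into the geometry of the current frame.
    warped_avg = cv2.remap(prev_avg, map_x, map_y, cv2.INTER_LINEAR)
    # Blend warped history with the new prediction (exponential moving average).
    return alpha * warped_avg + (1 - alpha) * cur_prob

Because the flow is computed from the raw frames rather than the model output, it can indeed run in parallel with the segmentation network, as the abstract notes.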
A deep convolutional neural network is adapted to the data using a transfer learning approach. An image is finally classified as cancerous if, for at least one of its patches, the probability of being a cancer patch exceeds a given threshold. The model was evaluated with leave-one-patient-out cross-validation. With a sensitivity and specificity of 0.94 and 0.88, respectively, our findings improve recently published results on the same image database considerably. Furthermore, the visualization of the class probabilities of each individual patch indicates that our approach might be extensible to the segmentation domain. KW - Speiseröhrenkrebs KW - Diagnose KW - Maschinelles Lernen KW - Bilderkennung KW - Automatische Klassifikation Y1 - 2017 U6 - https://doi.org/10.1007/978-3-662-54345-0_23 SP - 80 EP - 85 PB - Springer CY - Berlin ER -
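The image-level decision rule in the last entry (an image counts as cancerous as soon as one patch probability exceeds the threshold) reduces to a maximum over the patch probabilities. A minimal sketch with hypothetical values:

import numpy as np

def classify_image(patch_probs, threshold=0.5):
    # The image is labeled cancerous if any patch exceeds the threshold,
    # i.e., if the maximum patch probability does.
    return bool(np.max(patch_probs) > threshold)

# Hypothetical per-patch cancer probabilities for one endoscopic image:
print(classify_image(np.array([0.12, 0.08, 0.63, 0.31])))  # True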