TY - CHAP
A1 - Weiherer, Maximilian
A1 - Zorn, Martin
A1 - Wittenberg, Thomas
A1 - Palm, Christoph
ED - Tolxdorff, Thomas
ED - Deserno, Thomas M.
ED - Handels, Heinz
ED - Maier, Andreas
ED - Maier-Hein, Klaus H.
ED - Palm, Christoph
T1 - Retrospective Color Shading Correction for Endoscopic Images
T2 - Bildverarbeitung für die Medizin 2020. Algorithmen - Systeme - Anwendungen. Proceedings des Workshops vom 15. bis 17. März 2020 in Berlin
N2 - In this paper, we address the problem of retrospective color shading correction. An extension of the established gray-level shading correction algorithm based on signal envelope (SE) estimation to color images is developed using principal color components. Compared to the probably most general shading correction algorithm based on entropy minimization, SE estimation does not need any computationally expensive optimization and thus can be implemented more efficiently. We tested our new shading correction scheme on artificial as well as real endoscopic images and observed promising results. Additionally, an in-depth analysis of the stop criterion used in the SE estimation algorithm is provided, leading to the conclusion that a fixed, user-defined threshold is generally not feasible. Thus, we present new ideas on how to develop a non-parametric version of the SE estimation algorithm using entropy.
KW - Endoskopie
KW - Bildgebendes Verfahren
KW - Farbenraum
KW - Graustufe
Y1 - 2020
SN - 978-3-658-29266-9
U6 - https://doi.org/10.1007/978-3-658-29267-6
SP - 14
EP - 19
PB - Springer Vieweg
CY - Wiesbaden
ER -

TY - JOUR
A1 - Ott, Tankred
A1 - Palm, Christoph
A1 - Vogt, Robert
A1 - Oberprieler, Christoph
T1 - GinJinn: An object-detection pipeline for automated feature extraction from herbarium specimens
JF - Applications in Plant Sciences
N2 - PREMISE: The generation of morphological data in evolutionary, taxonomic, and ecological studies of plants using herbarium material has traditionally been a labor-intensive task. Recent progress in machine learning using deep artificial neural networks (deep learning) for image classification and object detection has facilitated the establishment of a pipeline for the automatic recognition and extraction of relevant structures in images of herbarium specimens. METHODS AND RESULTS: We implemented an extendable pipeline based on state-of-the-art deep-learning object-detection methods to collect leaf images from herbarium specimens of two species of the genus Leucanthemum. Using 183 specimens as the training data set, our pipeline extracted one or more intact leaves in 95% of the 61 test images. CONCLUSIONS: We establish GinJinn as a deep-learning object-detection tool for the automatic recognition and extraction of individual leaves or other structures from herbarium specimens. Our pipeline offers greater flexibility and a lower entrance barrier than previous image-processing approaches based on hand-crafted features.
KW - Deep Learning
KW - herbarium specimens
KW - object detection
KW - visual recognition
KW - Objekterkennung
KW - Maschinelles Sehen
KW - Pflanzen
Y1 - 2020
U6 - https://doi.org/10.1002/aps3.11351
SN - 2168-0450
VL - 8
IS - 6
SP - e11351
PB - Wiley, Botanical Society of America
ER -

TY - CHAP
A1 - Mendel, Robert
A1 - De Souza Jr., Luis Antonio
A1 - Rauber, David
A1 - Papa, João Paulo
A1 - Palm, Christoph
T1 - Semi-supervised Segmentation Based on Error-Correcting Supervision
T2 - Computer vision - ECCV 2020: 16th European conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part XXIX
N2 - Pixel-level classification is an essential part of computer vision. For learning from labeled data, many powerful deep learning models have been developed recently. In this work, we augment such supervised segmentation models by allowing them to learn from unlabeled data. Our semi-supervised approach, termed Error-Correcting Supervision, leverages a collaborative strategy. Apart from the supervised training on the labeled data, the segmentation network is judged by an additional network. The secondary correction network learns on the labeled data to optimally spot correct predictions, as well as to amend incorrect ones. As an auxiliary regularization term, the corrector directly influences the supervised training of the segmentation network. On unlabeled data, the output of the correction network is essential to create a proxy for the unknown truth. The corrector’s output is combined with the segmentation network’s prediction to form the new target. We propose a loss function that incorporates both the pseudo-labels as well as the predictive certainty of the correction network. Our approach can easily be added to supervised segmentation models. We show consistent improvements over a supervised baseline in experiments on both the Pascal VOC 2012 and the Cityscapes datasets with varying amounts of labeled data.
KW - Semi-Supervised Learning
KW - Machine Learning
Y1 - 2020
SN - 978-3-030-58525-9
U6 - https://doi.org/10.1007/978-3-030-58526-6_9
SP - 141
EP - 157
PB - Springer
CY - Cham
ER -

TY - JOUR
A1 - Maier, Johannes
A1 - Weiherer, Maximilian
A1 - Huber, Michaela
A1 - Palm, Christoph
T1 - Optically tracked and 3D printed haptic phantom hand for surgical training system
JF - Quantitative Imaging in Medicine and Surgery
N2 - Background: For surgical fixation of bone fractures of the human hand, so-called Kirschner wires (K-wires) are drilled through bone fragments. Because these minimally invasive drilling procedures are performed without a view of risk structures like vessels and nerves, thorough training of young surgeons is necessary. For the development of a virtual reality (VR) based training system, a three-dimensional (3D) printed phantom hand is required. To ensure intuitive operation, this phantom hand has to be realistic both in its position relative to the driller and in its haptic features. The softest 3D printing material available on the market, however, is too hard to imitate human soft tissue. Therefore, a support-material (SUP) filled metamaterial is used to soften the raw material. Realistic haptic features are important to palpate protrusions of the bone to determine the drilling starting point and angle. Optical real-time tracking is used to transfer position and rotation to the training system. Methods: A metamaterial already developed in previous work is further improved by use of a new unit cell.
Thus, the amount of SUP within the volume can be increased and the tissue is softened further. In addition, the human anatomy is transferred to the entire hand model. A subcutaneous fat layer and penetration of air through pores into the volume simulate the shiftability of skin layers. For optical tracking, a rotationally symmetrical marker attached to the phantom hand with a corresponding reference marker is developed. In order to ensure trouble-free position transmission, various types of marker point applications are tested. Results: Several cuboid and forearm sample prints led to a final 30-centimeter-long hand model. The whole haptic phantom could be printed faultlessly within about 17 hours. The metamaterial consisting of the new unit cell results in an increased SUP share of 4.32%. Validated by an expert surgeon study, this allows, in combination with a displacement of the uppermost skin layer, good palpability of the bones. Tracking of the hand marker in dodecahedron design works trouble-free in conjunction with a reference marker attached to the worktop of the training system. Conclusions: In this work, an optically tracked and haptically correct phantom hand was developed using dual-material 3D printing, which can be easily integrated into a surgical training system.
KW - Handchirurgie
KW - 3D-Druck
KW - Lernprogramm
KW - Zielverfolgung
KW - HaptiVisT
KW - Dual-material 3D printing
KW - hand surgery training
KW - metamaterial
KW - tissue imitating phantom hand
Y1 - 2020
U6 - https://doi.org/10.21037/qims.2019.12.03
N1 - Corresponding author: Christoph Palm
VL - 10
IS - 02
SP - 340
EP - 455
PB - AME Publishing Company
CY - Hong Kong, China
ER -

TY - JOUR
A1 - Hartmann, Robin
A1 - Weiherer, Maximilian
A1 - Schiltz, Daniel
A1 - Seitz, Stephan
A1 - Lotter, Luisa
A1 - Anker, Alexandra
A1 - Palm, Christoph
A1 - Prantl, Lukas
A1 - Brébant, Vanessa
T1 - A Novel Method of Outcome Assessment in Breast Reconstruction Surgery: Comparison of Autologous and Alloplastic Techniques Using Three-Dimensional Surface Imaging
JF - Aesthetic Plastic Surgery
N2 - Background: Breast reconstruction is an important coping tool for patients undergoing a mastectomy. There are numerous surgical techniques in breast reconstruction surgery (BRS). Regardless of the technique used, creating a symmetric outcome is crucial for patients and plastic surgeons. Three-dimensional surface imaging enables surgeons and patients to assess the outcome’s symmetry in BRS. To discriminate between autologous and alloplastic techniques, we analyzed both techniques using objective optical computerized symmetry analysis. Software was developed that enables clinicians to assess optical breast symmetry using three-dimensional surface imaging. Methods: Twenty-seven patients who had undergone autologous (n = 12) or alloplastic (n = 15) BRS received three-dimensional surface imaging. Anthropomorphic data were collected digitally using semiautomatic and automatic measurements. Automatic measurements were taken using the newly developed software. To quantify symmetry, a Symmetry Index is proposed. Results: Statistical analysis revealed that there is no difference in the outcome symmetry between the two groups (t test for independent samples; p = 0.48, two-tailed). Conclusion: This study’s findings provide a foundation for qualitative symmetry assessment in BRS using automatized digital anthropometry. In the present trial, no difference in the outcomes’ optical symmetry was detected between autologous and alloplastic approaches.
KW - Breast reconstruction
KW - Breast symmetry
KW - Digital anthropometry
KW - Mammoplastik
KW - Dreidimensionale Bildverarbeitung
KW - Autogene Transplantation
KW - Alloplastik
Y1 - 2020
U6 - https://doi.org/10.1007/s00266-020-01749-4
VL - 44
SP - 1980
EP - 1987
PB - Springer
CY - Heidelberg
ER -

TY - GEN
A1 - Ebigbo, Alanna
A1 - Mendel, Robert
A1 - Tziatzios, Georgios
A1 - Probst, Andreas
A1 - Palm, Christoph
A1 - Messmann, Helmut
T1 - Real-Time Diagnosis of an Early Barrett's Carcinoma using Artificial Intelligence (AI) - Video Case Demonstration
T2 - Endoscopy
N2 - Introduction: We present a clinical case showing the real-time detection, characterization and delineation of an early Barrett’s cancer using AI. Patients and methods: A 70-year-old patient with a long-segment Barrett’s esophagus (C5M7) was assessed with an AI algorithm. Results: The AI system detected a 10 mm focal lesion, and AI characterization predicted cancer with a probability of >90%. After ESD resection, histopathology showed mucosal adenocarcinoma (T1a (m), R0), confirming the AI diagnosis. Conclusion: We demonstrate the real-time AI detection, characterization and delineation of a small and early mucosal Barrett’s cancer.
KW - Artificial Intelligence
KW - Barrett's Carcinoma
KW - Speiseröhrenkrebs
KW - Künstliche Intelligenz
KW - Diagnose
Y1 - 2020
U6 - https://doi.org/10.1055/s-0040-1704075
VL - 52
IS - S 01
PB - Thieme
ER -

TY - JOUR
A1 - Ebigbo, Alanna
A1 - Mendel, Robert
A1 - Probst, Andreas
A1 - Manzeneder, Johannes
A1 - Prinz, Friederike
A1 - De Souza Jr., Luis Antonio
A1 - Papa, João Paulo
A1 - Palm, Christoph
A1 - Messmann, Helmut
T1 - Real-time use of artificial intelligence in the evaluation of cancer in Barrett’s oesophagus
JF - Gut
N2 - Based on previous work by our group with manual annotation of visible Barrett oesophagus (BE) cancer images, a real-time deep learning artificial intelligence (AI) system was developed. While an expert endoscopist conducts the endoscopic assessment of BE, our AI system captures random images from the real-time camera livestream and provides a global prediction (classification) as well as a dense prediction (segmentation), differentiating accurately between normal BE and early oesophageal adenocarcinoma (EAC). The AI system showed an accuracy of 89.9% on 14 cases with neoplastic BE.
KW - Speiseröhrenkrankheit
KW - Diagnose
KW - Maschinelles Lernen
KW - Barrett's esophagus
KW - Deep learning
KW - real-time
Y1 - 2020
U6 - https://doi.org/10.1136/gutjnl-2019-319460
VL - 69
IS - 4
SP - 615
EP - 616
PB - BMJ
CY - London
ER -

TY - JOUR
A1 - De Souza Jr., Luis Antonio
A1 - Passos, Leandro A.
A1 - Mendel, Robert
A1 - Ebigbo, Alanna
A1 - Probst, Andreas
A1 - Messmann, Helmut
A1 - Palm, Christoph
A1 - Papa, João Paulo
T1 - Assisting Barrett's esophagus identification using endoscopic data augmentation based on Generative Adversarial Networks
JF - Computers in Biology and Medicine
N2 - Barrett's esophagus has seen a swift rise in the number of cases in recent years. Although traditional diagnosis methods have played a vital role in early-stage treatment, they are generally time- and resource-consuming. In this context, computer-aided approaches for automatic diagnosis emerged in the literature, since early detection is intrinsically related to remission probabilities. However, they still suffer from drawbacks because of the lack of available data for machine learning purposes, thus implying reduced recognition rates.
This work introduces Generative Adversarial Networks to generate high-quality endoscopic images, thereby identifying Barrett's esophagus and adenocarcinoma more precisely. Further, Convolutional Neural Networks are used for feature extraction and classification purposes. The proposed approach is validated over two datasets of endoscopic images, with the experiments conducted over the full and patch-split images. The application of Deep Convolutional Generative Adversarial Networks for the data augmentation step and LeNet-5 and AlexNet for the classification step allowed us to validate the proposed methodology over an extensive set of datasets (based on original and augmented sets), reaching 90% accuracy for the patch-based approach and 85% for the image-based approach. Both results are based on augmented datasets and are statistically different from the ones obtained on the original datasets of the same kind. Moreover, the impact of data augmentation was evaluated in the context of image description and classification, and the results obtained using synthetic images outperformed the ones over the original datasets, as well as other recent approaches from the literature. Such results suggest promising insights related to the importance of proper data for accurate classification concerning computer-assisted Barrett's esophagus and adenocarcinoma detection.
KW - Maschinelles Lernen
KW - Barrett's esophagus
KW - Machine learning
KW - Adenocarcinoma
KW - Generative adversarial networks
KW - Neuronales Netz
KW - Adenocarcinom
KW - Speiseröhrenkrebs
KW - Diagnose
Y1 - 2020
U6 - https://doi.org/10.1016/j.compbiomed.2020.104029
VL - 126
IS - November
PB - Elsevier
ER -

TY - CHAP
A1 - Chang, Ching-Sheng
A1 - Lin, Jin-Fa
A1 - Lee, Ming-Ching
A1 - Palm, Christoph
ED - Tolxdorff, Thomas
ED - Deserno, Thomas M.
ED - Handels, Heinz
ED - Maier, Andreas
ED - Maier-Hein, Klaus H.
ED - Palm, Christoph
T1 - Semantic Lung Segmentation Using Convolutional Neural Networks
T2 - Bildverarbeitung für die Medizin 2020. Algorithmen - Systeme - Anwendungen. Proceedings des Workshops vom 15. bis 17. März 2020 in Berlin
N2 - Chest X-Ray (CXR) images as part of a non-invasive diagnosis method are commonly used in today’s medical workflow. In traditional methods, physicians usually use their experience to interpret CXR images; however, there is a large inter-observer variance. Computer vision may be used as a standard for assisted diagnosis. In this study, we applied an encoder-decoder neural network architecture for automatic lung region detection. We compared a three-class approach (left lung, right lung, background) and a two-class approach (lung, background). The differentiation of left and right lungs as a direct result of a semantic segmentation based on neural nets, rather than of post-processing a lung-background segmentation, is done here for the first time. Our evaluation was done on the NIH Chest X-ray dataset, from which 1736 images were extracted and manually annotated. We achieved 94.9% mIoU and 92% mIoU as segmentation quality measures for the two-class model and the three-class model, respectively. This result is very promising for the segmentation of lung regions having the simultaneous classification of left and right lung in mind.
KW - Neuronales Netz
KW - Segmentierung
KW - Brustkorb
KW - Deep Learning
KW - Encoder-Decoder Network
KW - Chest X-Ray
Y1 - 2020
SN - 978-3-658-29266-9
U6 - https://doi.org/10.1007/978-3-658-29267-6_17
SP - 75
EP - 80
PB - Springer Vieweg
CY - Wiesbaden
ER -

TY - INPR
A1 - Allan, Max
A1 - Kondo, Satoshi
A1 - Bodenstedt, Sebastian
A1 - Leger, Stefan
A1 - Kadkhodamohammadi, Rahim
A1 - Luengo, Imanol
A1 - Fuentes, Felix
A1 - Flouty, Evangello
A1 - Mohammed, Ahmed
A1 - Pedersen, Marius
A1 - Kori, Avinash
A1 - Alex, Varghese
A1 - Krishnamurthi, Ganapathy
A1 - Rauber, David
A1 - Mendel, Robert
A1 - Palm, Christoph
A1 - Bano, Sophia
A1 - Saibro, Guinther
A1 - Shih, Chi-Sheng
A1 - Chiang, Hsun-An
A1 - Zhuang, Juntang
A1 - Yang, Junlin
A1 - Iglovikov, Vladimir
A1 - Dobrenkii, Anton
A1 - Reddiboina, Madhu
A1 - Reddy, Anubhav
A1 - Liu, Xingtong
A1 - Gao, Cong
A1 - Unberath, Mathias
A1 - Kim, Myeonghyeon
A1 - Kim, Chanho
A1 - Kim, Chaewon
A1 - Kim, Hyejin
A1 - Lee, Gyeongmin
A1 - Ullah, Ihsan
A1 - Luna, Miguel
A1 - Park, Sang Hyun
A1 - Azizian, Mahdi
A1 - Stoyanov, Danail
A1 - Maier-Hein, Lena
A1 - Speidel, Stefanie
T1 - 2018 Robotic Scene Segmentation Challenge
N2 - In 2015 we began a sub-challenge at the EndoVis workshop at MICCAI in Munich using endoscope images of ex-vivo tissue with automatically generated annotations from robot forward kinematics and instrument CAD models. However, the limited background variation and simple motion rendered the dataset uninformative in learning about which techniques would be suitable for segmentation in real surgery. In 2017, at the same workshop in Quebec, we introduced the robotic instrument segmentation dataset, with 10 teams participating in the challenge to perform binary, articulating-parts, and type segmentation of da Vinci instruments. This challenge included realistic instrument motion and more complex porcine tissue as background and was widely addressed with modifications of U-Nets and other popular CNN architectures [1]. In 2018 we added to the complexity by introducing a set of anatomical objects and medical devices to the segmented classes. To avoid over-complicating the challenge, we continued with porcine data, which is dramatically simpler than human tissue due to the lack of fatty tissue occluding many organs.
KW - Minimally invasive surgery
KW - Robotic
KW - Minimal-invasive Chirurgie
KW - Robotik
Y1 - 2020
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:898-opus4-50049
UR - https://arxiv.org/abs/2001.11190
ER -