TY - JOUR A1 - Pimentel, Pedro A1 - Szengel, Angelika A1 - Ehlke, Moritz A1 - Lamecker, Hans A1 - Zachow, Stefan A1 - Estacio, Laura A1 - Doenitz, Christian A1 - Ramm, Heiko ED - Li, Jianning ED - Egger, Jan T1 - Automated Virtual Reconstruction of Large Skull Defects using Statistical Shape Models and Generative Adversarial Networks BT - First Challenge, AutoImplant 2020, Held in Conjunction with MICCAI 2020, Lima, Peru, October 8, 2020, Proceedings JF - Towards the Automatization of Cranial Implant Design in Cranioplasty N2 - We present an automated method for extrapolating missing regions in label data of the skull in an anatomically plausible manner. The ultimate goal is to design patient-specific cranial implants for correcting large, arbitrarily shaped defects of the skull that can, for example, result from trauma of the head. Our approach utilizes a 3D statistical shape model (SSM) of the skull and a 2D generative adversarial network (GAN) that is trained in an unsupervised fashion from samples of healthy patients alone. By fitting the SSM to given input labels containing the skull defect, a first approximation of the healthy state of the patient is obtained. The GAN is then applied to further correct and smooth the output of the SSM in an anatomically plausible manner. Finally, the defect region is extracted using morphological operations and subtraction between the extrapolated healthy state of the patient and the defective input labels. The method is trained and evaluated based on data from the MICCAI 2020 AutoImplant challenge. It produces state-of-the-art results on regularly shaped cut-outs that were present in the training and testing data of the challenge. Furthermore, due to the unsupervised nature of the approach, the method generalizes well to previously unseen defects of varying shapes that were only present in the hidden test dataset. Y1 - 2020 U6 - https://doi.org/10.1007/978-3-030-64327-0_3 N1 - Best Paper Award VL - 12439 SP - 16 EP - 27 PB - Springer International Publishing ET - 1 ER - TY - JOUR A1 - Oeltze-Jaffra, Steffen A1 - Meuschke, Monique A1 - Neugebauer, Mathias A1 - Saalfeld, Sylvia A1 - Lawonn, Kai A1 - Janiga, Gabor A1 - Hege, Hans-Christian A1 - Zachow, Stefan A1 - Preim, Bernhard T1 - Generation and Visual Exploration of Medical Flow Data: Survey, Research Trends, and Future Challenges JF - Computer Graphics Forum N2 - Simulations and measurements of blood and air flow inside the human circulatory and respiratory system play an increasingly important role in personalized medicine for prevention, diagnosis, and treatment of diseases. This survey focuses on three main application areas. (1) Computational Fluid Dynamics (CFD) simulations of blood flow in cerebral aneurysms assist in predicting the outcome of this pathologic process and of therapeutic interventions. (2) CFD simulations of nasal airflow allow for investigating the effects of obstructions and deformities and provide therapy decision support. (3) 4D Phase-Contrast (4D PC) Magnetic Resonance Imaging (MRI) of aortic hemodynamics supports the diagnosis of various vascular and valve pathologies as well as their treatment. An investigation of the complex and often dynamic simulation and measurement data requires the coupling of sophisticated visualization, interaction, and data analysis techniques. In this paper, we survey the large body of work that has been conducted within this realm.
We extend previous surveys by incorporating nasal airflow, addressing the joint investigation of blood flow and vessel wall properties, and providing a more fine-granular taxonomy of the existing techniques. From the survey, we extract major research trends and identify open problems and future challenges. The survey is intended for researchers interested in medical flow but also, more generally, in the combined visualization of physiology and anatomy, the extraction of features from flow field data and feature-based visualization, the visual comparison of different simulation results, and the interactive visual analysis of the flow field and derived characteristics. Y1 - 2019 U6 - https://doi.org/10.1111/cgf.13394 VL - 38 IS - 1 SP - 87 EP - 125 PB - Wiley ER - TY - GEN A1 - Ambellan, Felix A1 - Lamecker, Hans A1 - von Tycowicz, Christoph A1 - Zachow, Stefan T1 - Statistical Shape Models - Understanding and Mastering Variation in Anatomy N2 - In our chapter we describe how to reconstruct three-dimensional anatomy from medical image data and how to build Statistical 3D Shape Models out of many such reconstructions, yielding a new kind of anatomy that not only allows quantitative analysis of anatomical variation but also a visual exploration and educational visualization. Future digital anatomy atlases will not only show a static (average) anatomy but also its normal or pathological variation in three or even four dimensions, hence illustrating growth and/or disease progression. Statistical Shape Models (SSMs) are geometric models that describe a collection of semantically similar objects in a very compact way. SSMs represent an average shape of many three-dimensional objects as well as their variation in shape. The creation of SSMs requires a correspondence mapping, which can be achieved e.g. by parameterization with a respective sampling. If a corresponding parameterization over all shapes can be established, variation between individual shape characteristics can be mathematically investigated. We will explain what Statistical Shape Models are and how they are constructed. Extensions of Statistical Shape Models will be motivated for articulated coupled structures. In addition to shape, the appearance of objects will also be integrated into the concept. Appearance is a visual feature independent of shape that depends on observers or imaging techniques. Typical appearances are for instance the color and intensity of a visual surface of an object under particular lighting conditions, or measurements of material properties with computed tomography (CT) or magnetic resonance imaging (MRI). A combination of (articulated) statistical shape models with statistical models of appearance leads to articulated Statistical Shape and Appearance Models (a-SSAMs). After giving various examples of SSMs for human organs, skeletal structures, faces, and bodies, we will briefly describe clinical applications where such models have been successfully employed. Statistical Shape Models are the foundation for the analysis of anatomical cohort data, where characteristic shapes are correlated to demographic or epidemiologic data. SSMs consisting of several thousand objects offer, in combination with statistical methods or machine learning techniques, the possibility to identify characteristic clusters, thus being the foundation for advanced diagnostic disease scoring.
T3 - ZIB-Report - 19-13 Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-72699 SN - 1438-0064 ER - TY - CHAP A1 - Ambellan, Felix A1 - Zachow, Stefan A1 - von Tycowicz, Christoph T1 - A Surface-Theoretic Approach for Statistical Shape Modeling T2 - Proc. Medical Image Computing and Computer Assisted Intervention (MICCAI), Part IV N2 - We present a novel approach for nonlinear statistical shape modeling that is invariant under Euclidean motion and thus alignment-free. By analyzing metric distortion and curvature of shapes as elements of Lie groups in a consistent Riemannian setting, we construct a framework that reliably handles large deformations. Due to the explicit character of Lie group operations, our non-Euclidean method is very efficient, allowing for fast and numerically robust processing. This facilitates Riemannian analysis of large shape populations accessible through longitudinal and multi-site imaging studies, providing increased statistical power. We evaluate the performance of our model w.r.t. shape-based classification of pathological malformations of the human knee and show that it outperforms the standard Euclidean as well as a recent nonlinear approach, especially in the presence of sparse training data. To provide insight into the model’s ability to capture natural biological shape variability, we carry out an analysis of specificity and generalization ability. Y1 - 2019 U6 - https://doi.org/10.1007/978-3-030-32251-9_3 VL - 11767 SP - 21 EP - 29 PB - Springer ER - TY - JOUR A1 - Hildebrandt, Thomas A1 - Bruening, Jan Joris A1 - Schmidt, Nora Laura A1 - Lamecker, Hans A1 - Heppt, Werner A1 - Zachow, Stefan A1 - Goubergrits, Leonid T1 - The Healthy Nasal Cavity - Characteristics of Morphology and Related Airflow Based on a Statistical Shape Model Viewed from a Surgeon’s Perspective JF - Facial Plastic Surgery N2 - Functional surgery on the nasal framework requires referential criteria to objectively assess nasal breathing for indication and follow-up. This motivated us to generate a mean geometry of the nasal cavity based on a statistical shape model. In this study, the authors could demonstrate that the introduced nasal cavity’s mean geometry features characteristics of the inner shape and airflow, which are commonly observed in symptom-free subjects. Therefore, the mean geometry might serve as a reference-like model when one considers qualitative aspects. However, to facilitate quantitative considerations and statistical inference, further research is necessary. Additionally, the authors were able to obtain details about the importance of the isthmus nasi and the inferior turbinate for the intranasal airstream. KW - statistical shape model KW - nasal cavity KW - nasal breathing KW - nasal airflow KW - isthmus nasi KW - inferior turbinate Y1 - 2019 U6 - https://doi.org/10.1055/s-0039-1677721 VL - 35 IS - 1 SP - 9 EP - 13 ER - TY - JOUR A1 - Hildebrandt, Thomas A1 - Bruening, Jan Joris A1 - Lamecker, Hans A1 - Zachow, Stefan A1 - Heppt, Werner A1 - Schmidt, Nora A1 - Goubergrits, Leonid T1 - Digital Analysis of Nasal Airflow Facilitating Decision Support in Rhinosurgery JF - Facial Plastic Surgery N2 - Successful functional surgery on the nasal framework requires reliable and comprehensive diagnosis. In this regard, the authors introduce a new methodology: Digital Analysis of Nasal Airflow (diANA). It is based on computational fluid dynamics, a statistical shape model of the healthy nasal cavity and rhinologic expertise.
diANA necessitates an anonymized tomographic dataset of the paranasal sinuses including the complete nasal cavity and, when available, clinical information. The principle of diANA is to compare the morphology and the respective airflow of an individual nose with those of a reference. This enables morphometric aberrations and consecutive flow field anomalies to be localized and quantified within a patient’s nasal cavity. Finally, an elaborated expert opinion with instructive visualizations is provided. Using diANA might support surgeons in decision-making, avoiding unnecessary surgery, and gaining more precision and target orientation for indicated operations. KW - nasal airflow simulation KW - nasal breathing KW - statistical shape model KW - diANA KW - nasal obstruction KW - rhinorespiratory homeostasis Y1 - 2019 U6 - https://doi.org/10.1055/s-0039-1677720 VL - 35 IS - 1 SP - 1 EP - 8 ER - TY - CHAP A1 - Ambellan, Felix A1 - Lamecker, Hans A1 - von Tycowicz, Christoph A1 - Zachow, Stefan ED - Rea, Paul M. T1 - Statistical Shape Models - Understanding and Mastering Variation in Anatomy T2 - Biomedical Visualisation N2 - In our chapter we describe how to reconstruct three-dimensional anatomy from medical image data and how to build Statistical 3D Shape Models out of many such reconstructions, yielding a new kind of anatomy that not only allows quantitative analysis of anatomical variation but also a visual exploration and educational visualization. Future digital anatomy atlases will not only show a static (average) anatomy but also its normal or pathological variation in three or even four dimensions, hence illustrating growth and/or disease progression. Statistical Shape Models (SSMs) are geometric models that describe a collection of semantically similar objects in a very compact way. SSMs represent an average shape of many three-dimensional objects as well as their variation in shape. The creation of SSMs requires a correspondence mapping, which can be achieved e.g. by parameterization with a respective sampling. If a corresponding parameterization over all shapes can be established, variation between individual shape characteristics can be mathematically investigated. We will explain what Statistical Shape Models are and how they are constructed. Extensions of Statistical Shape Models will be motivated for articulated coupled structures. In addition to shape, the appearance of objects will also be integrated into the concept. Appearance is a visual feature independent of shape that depends on observers or imaging techniques. Typical appearances are for instance the color and intensity of a visual surface of an object under particular lighting conditions, or measurements of material properties with computed tomography (CT) or magnetic resonance imaging (MRI). A combination of (articulated) statistical shape models with statistical models of appearance leads to articulated Statistical Shape and Appearance Models (a-SSAMs). After giving various examples of SSMs for human organs, skeletal structures, faces, and bodies, we will briefly describe clinical applications where such models have been successfully employed. Statistical Shape Models are the foundation for the analysis of anatomical cohort data, where characteristic shapes are correlated to demographic or epidemiologic data.
SSMs consisting of several thousand objects offer, in combination with statistical methods or machine learning techniques, the possibility to identify characteristic clusters, thus being the foundation for advanced diagnostic disease scoring. Y1 - 2019 SN - 978-3-030-19384-3 SN - 978-3-030-19385-0 U6 - https://doi.org/10.1007/978-3-030-19385-0_5 VL - 3 IS - 1156 SP - 67 EP - 84 PB - Springer Nature Switzerland AG ET - 1 ER - TY - CHAP A1 - Ambellan, Felix A1 - Tack, Alexander A1 - Ehlke, Moritz A1 - Zachow, Stefan T1 - Automated Segmentation of Knee Bone and Cartilage combining Statistical Shape Knowledge and Convolutional Neural Networks: Data from the Osteoarthritis Initiative T2 - Medical Imaging with Deep Learning N2 - We present a method for the automated segmentation of knee bones and cartilage from magnetic resonance imaging that combines a priori knowledge of anatomical shape with Convolutional Neural Networks (CNNs). The proposed approach incorporates 3D Statistical Shape Models (SSMs) as well as 2D and 3D CNNs to achieve a robust and accurate segmentation of even highly pathological knee structures. The method is evaluated on data of the MICCAI grand challenge "Segmentation of Knee Images 2010". For the first time, an accuracy equivalent to the inter-observer variability of human readers has been achieved in this challenge. Moreover, the quality of the proposed method is thoroughly assessed using various measures for 507 manual segmentations of bone and cartilage, and 88 additional manual segmentations of cartilage. Our method yields sub-voxel accuracy. In conclusion, combining anatomical knowledge using SSMs with localized classification via CNNs results in a state-of-the-art segmentation method. Y1 - 2018 ER - TY - GEN A1 - Sahu, Manish A1 - Dill, Sabrina A1 - Mukhopadyay, Anirban A1 - Zachow, Stefan T1 - Surgical Tool Presence Detection for Cataract Procedures N2 - This article outlines the submission to the CATARACTS challenge for automatic tool presence detection [1]. Our approach for this multi-label classification problem comprises labelset-based sampling, a CNN architecture and temporal smoothing as described in [3], which we call ZIB-Res-TS. T3 - ZIB-Report - 18-28 Y1 - 2017 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-69110 SN - 1438-0064 ER - TY - JOUR A1 - Al Hajj, Hassan A1 - Sahu, Manish A1 - Lamard, Mathieu A1 - Conze, Pierre-Henri A1 - Roychowdhury, Soumali A1 - Hu, Xiaowei A1 - Marsalkaite, Gabija A1 - Zisimopoulos, Odysseas A1 - Dedmari, Muneer Ahmad A1 - Zhao, Fenqiang A1 - Prellberg, Jonas A1 - Galdran, Adrian A1 - Araujo, Teresa A1 - Vo, Duc My A1 - Panda, Chandan A1 - Dahiya, Navdeep A1 - Kondo, Satoshi A1 - Bian, Zhengbing A1 - Bialopetravicius, Jonas A1 - Qiu, Chenghui A1 - Dill, Sabrina A1 - Mukhopadyay, Anirban A1 - Costa, Pedro A1 - Aresta, Guilherme A1 - Ramamurthy, Senthil A1 - Lee, Sang-Woong A1 - Campilho, Aurelio A1 - Zachow, Stefan A1 - Xia, Shunren A1 - Conjeti, Sailesh A1 - Armaitis, Jogundas A1 - Heng, Pheng-Ann A1 - Vahdat, Arash A1 - Cochener, Beatrice A1 - Quellec, Gwenole T1 - CATARACTS: Challenge on Automatic Tool Annotation for cataRACT Surgery JF - Medical Image Analysis N2 - Surgical tool detection is attracting increasing attention from the medical image analysis community. The goal generally is not to precisely locate tools in images, but rather to indicate which tools are being used by the surgeon at each instant.
The main motivation for annotating tool usage is to design efficient solutions for surgical workflow analysis, with potential applications in report generation, surgical training and even real-time decision support. Most existing tool annotation algorithms focus on laparoscopic surgeries. However, with 19 million interventions per year, the most common surgical procedure in the world is cataract surgery. The CATARACTS challenge was organized in 2017 to evaluate tool annotation algorithms in the specific context of cataract surgery. It relies on more than nine hours of videos, from 50 cataract surgeries, in which the presence of 21 surgical tools was manually annotated by two experts. With 14 participating teams, this challenge can be considered a success. As might be expected, the submitted solutions are based on deep learning. This paper thoroughly evaluates these solutions: in particular, the quality of their annotations is compared to that of human interpretations. Next, lessons learnt from the differential analysis of these solutions are discussed. We expect that they will guide the design of efficient surgery monitoring tools in the near future. Y1 - 2019 U6 - https://doi.org/10.1016/j.media.2018.11.008 N1 - Best paper award - Computer Graphics Night 2020 (TU Darmstadt) VL - 52 IS - 2 SP - 24 EP - 41 PB - Elsevier ER - TY - GEN A1 - Tack, Alexander A1 - Zachow, Stefan T1 - Accurate Automated Volumetry of Cartilage of the Knee using Convolutional Neural Networks: Data from the Osteoarthritis Initiative N2 - Volumetry of the cartilage of the knee, as needed for the assessment of knee osteoarthritis (KOA), is typically performed in a tedious and subjective process. We present an automated segmentation-based method for the quantification of cartilage volume by employing 3D Convolutional Neural Networks (CNNs). CNNs were trained in a supervised manner using magnetic resonance imaging data as well as cartilage volumetry readings given by clinical experts for 1378 subjects. It was shown that 3D CNNs can be employed for cartilage volumetry with an accuracy similar to expert volumetry readings. In the future, accurate automated cartilage volumetry might support both the diagnosis of KOA and the assessment of KOA progression via longitudinal analysis. T3 - ZIB-Report - 19-05 KW - Deep Learning KW - imaging biomarker KW - radiomics KW - cartilage morphometry KW - volume assessment Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-71439 SN - 1438-0064 ER - TY - GEN A1 - Ambellan, Felix A1 - Tack, Alexander A1 - Ehlke, Moritz A1 - Zachow, Stefan T1 - Automated Segmentation of Knee Bone and Cartilage combining Statistical Shape Knowledge and Convolutional Neural Networks: Data from the Osteoarthritis Initiative N2 - We present a method for the automated segmentation of knee bones and cartilage from magnetic resonance imaging (MRI) that combines a priori knowledge of anatomical shape with Convolutional Neural Networks (CNNs). The proposed approach incorporates 3D Statistical Shape Models (SSMs) as well as 2D and 3D CNNs to achieve a robust and accurate segmentation of even highly pathological knee structures. The shape models and neural networks employed are trained using data from the Osteoarthritis Initiative (OAI) and the MICCAI grand challenge "Segmentation of Knee Images 2010" (SKI10), respectively.
We evaluate our method on 40 validation and 50 submission datasets from the SKI10 challenge. For the first time, an accuracy equivalent to the inter-observer variability of human readers is achieved in this challenge. Moreover, the quality of the proposed method is thoroughly assessed using various measures for data from the OAI, i.e. 507 manual segmentations of bone and cartilage, and 88 additional manual segmentations of cartilage. Our method yields sub-voxel accuracy for both OAI datasets. We make the 507 manual segmentations as well as our experimental setup publicly available to further aid research in the field of medical image segmentation. In conclusion, combining localized classification via CNNs with statistical anatomical knowledge via SSMs results in a state-of-the-art segmentation method for knee bones and cartilage from MRI data. T3 - ZIB-Report - 19-06 Y1 - 2019 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-72704 SN - 1438-0064 N1 - Innovation Excellence Award 2020 ER - TY - JOUR A1 - Brüning, Jan A1 - Hildebrandt, Thomas A1 - Heppt, Werner A1 - Schmidt, Nora A1 - Lamecker, Hans A1 - Szengel, Angelika A1 - Amiridze, Natalja A1 - Ramm, Heiko A1 - Bindernagel, Matthias A1 - Zachow, Stefan A1 - Goubergrits, Leonid T1 - Characterization of the Airflow within an Average Geometry of the Healthy Human Nasal Cavity JF - Scientific Reports N2 - This study’s objective was the generation of a standardized geometry of the healthy nasal cavity. An average geometry of the healthy nasal cavity was generated using a statistical shape model based on 25 symptom-free subjects. Airflow within the average geometry and the 25 individual geometries was calculated using fluid simulations. Integral measures of the nasal resistance, wall shear stresses (WSS) and velocities were calculated as well as cross-sectional areas (CSA). Furthermore, individual WSS and static pressure distributions were mapped onto the average geometry. The average geometry featured an overall more regular shape that resulted in less resistance, reduced wall shear stresses and velocities compared to the median of the 25 geometries. Spatial distributions of WSS and pressure of the average geometry agreed well with the average distributions of all individual geometries. The minimal CSA of the average geometry was larger than the median of all individual geometries (83.4 vs. 74.7 mm²). The airflow observed within the average geometry of the healthy nasal cavity did not equal the average airflow of the individual geometries. While differences observed for integral measures were notable, the calculated values for the average geometry lay within the distributions of the individual parameters. Spatially resolved parameters differed less prominently. Y1 - 2020 UR - https://rdcu.be/b2irD U6 - https://doi.org/10.1038/s41598-020-60755-3 VL - 3755 IS - 10 ER - TY - JOUR A1 - Tack, Alexander A1 - Mukhopadhyay, Anirban A1 - Zachow, Stefan T1 - Knee Menisci Segmentation using Convolutional Neural Networks: Data from the Osteoarthritis Initiative JF - Osteoarthritis and Cartilage N2 - Objective: To present a novel method for automated segmentation of knee menisci from MRIs. To evaluate quantitative meniscal biomarkers for osteoarthritis (OA) estimated thereof. Method: A segmentation method employing convolutional neural networks in combination with statistical shape models was developed. Accuracy was evaluated on 88 manual segmentations.
Meniscal volume, tibial coverage, and meniscal extrusion were computed and tested for differences between groups of OA, joint space narrowing (JSN), and WOMAC pain. Correlation between computed meniscal extrusion and MOAKS experts' readings was evaluated for 600 subjects. Suitability of biomarkers for predicting incident radiographic OA from baseline to 24 months was tested on a group of 552 patients (184 incident OA, 386 controls) by performing conditional logistic regression. Results: Segmentation accuracy measured as Dice Similarity Coefficient was 83.8% for medial menisci (MM) and 88.9% for lateral menisci (LM) at baseline, and 83.1% and 88.3% at 12-month follow-up. Medial tibial coverage was significantly lower for arthritic cases compared to non-arthritic ones. Medial meniscal extrusion was significantly higher for arthritic knees. A moderate correlation between automatically computed medial meniscal extrusion and experts' readings was found (ρ=0.44). Mean medial meniscal extrusion was significantly greater for incident OA cases compared to controls (1.16±0.93 mm vs. 0.83±0.92 mm; p<0.05). Conclusion: Especially for medial menisci, an excellent segmentation accuracy was achieved. Our meniscal biomarkers were validated by comparison to experts' readings as well as analysis of differences w.r.t. groups of OA, JSN, and WOMAC pain. It was confirmed that medial meniscal extrusion is a predictor for incident OA. Y1 - 2018 U6 - https://doi.org/10.1016/j.joca.2018.02.907 VL - 26 IS - 5 SP - 680 EP - 688 ER - TY - GEN A1 - Tack, Alexander A1 - Mukhopadhyay, Anirban A1 - Zachow, Stefan T1 - Knee Menisci Segmentation using Convolutional Neural Networks: Data from the Osteoarthritis Initiative (Supplementary Material) N2 - Objective: To present a novel method for automated segmentation of knee menisci from MRIs. To evaluate quantitative meniscal biomarkers for osteoarthritis (OA) estimated thereof. Method: A segmentation method employing convolutional neural networks in combination with statistical shape models was developed. Accuracy was evaluated on 88 manual segmentations. Meniscal volume, tibial coverage, and meniscal extrusion were computed and tested for differences between groups of OA, joint space narrowing (JSN), and WOMAC pain. Correlation between computed meniscal extrusion and MOAKS experts' readings was evaluated for 600 subjects. Suitability of biomarkers for predicting incident radiographic OA from baseline to 24 months was tested on a group of 552 patients (184 incident OA, 386 controls) by performing conditional logistic regression. Results: Segmentation accuracy measured as Dice Similarity Coefficient was 83.8% for medial menisci (MM) and 88.9% for lateral menisci (LM) at baseline, and 83.1% and 88.3% at 12-month follow-up. Medial tibial coverage was significantly lower for arthritic cases compared to non-arthritic ones. Medial meniscal extrusion was significantly higher for arthritic knees. A moderate correlation between automatically computed medial meniscal extrusion and experts' readings was found (ρ=0.44). Mean medial meniscal extrusion was significantly greater for incident OA cases compared to controls (1.16±0.93 mm vs. 0.83±0.92 mm; p<0.05). Conclusion: Especially for medial menisci, an excellent segmentation accuracy was achieved. Our meniscal biomarkers were validated by comparison to experts' readings as well as analysis of differences w.r.t. groups of OA, JSN, and WOMAC pain. It was confirmed that medial meniscal extrusion is a predictor for incident OA.
Y1 - 2018 U6 - https://doi.org/10.12752/4.TMZ.1.0 N1 - Supplementary data to reproduce results from the related publication, including convolutional neural networks' weights. ER - TY - GEN A1 - Ambellan, Felix A1 - Tack, Alexander A1 - Ehlke, Moritz A1 - Zachow, Stefan T1 - Automated Segmentation of Knee Bone and Cartilage combining Statistical Shape Knowledge and Convolutional Neural Networks: Data from the Osteoarthritis Initiative (Supplementary Material) T2 - Medical Image Analysis N2 - We present a method for the automated segmentation of knee bones and cartilage from magnetic resonance imaging that combines a priori knowledge of anatomical shape with Convolutional Neural Networks (CNNs). The proposed approach incorporates 3D Statistical Shape Models (SSMs) as well as 2D and 3D CNNs to achieve a robust and accurate segmentation of even highly pathological knee structures. The shape models and neural networks employed are trained using data of the Osteoarthritis Initiative (OAI) and the MICCAI grand challenge "Segmentation of Knee Images 2010" (SKI10), respectively. We evaluate our method on 40 validation and 50 submission datasets of the SKI10 challenge. For the first time, an accuracy equivalent to the inter-observer variability of human readers has been achieved in this challenge. Moreover, the quality of the proposed method is thoroughly assessed using various measures for data from the OAI, i.e. 507 manual segmentations of bone and cartilage, and 88 additional manual segmentations of cartilage. Our method yields sub-voxel accuracy for both OAI datasets. We made the 507 manual segmentations as well as our experimental setup publicly available to further aid research in the field of medical image segmentation. In conclusion, combining statistical anatomical knowledge via SSMs with localized classification via CNNs results in a state-of-the-art segmentation method for knee bones and cartilage from MRI data. Y1 - 2019 U6 - https://doi.org/10.12752/4.ATEZ.1.0 N1 - OAI-ZIB dataset VL - 52 IS - 2 SP - 109 EP - 118 ER - TY - JOUR A1 - Hoffmann, Rene A1 - Lemanis, Robert A1 - Wulff, Lena A1 - Zachow, Stefan A1 - Lukeneder, Alexander A1 - Klug, Christian A1 - Keupp, Helmut T1 - Traumatic events in the life of the deep-sea cephalopod mollusc, the coleoid Spirula spirula JF - Deep Sea Research Part I - Oceanographic Research N2 - Here, we report on different types of shell pathologies of the enigmatic deep-sea (mesopelagic) cephalopod Spirula spirula. For the first time, we apply non-invasive imaging methods to document trauma-induced changes in shell shapes, reconstruct the different causes and effects of these pathologies, unravel the etiology, and attempt to quantify the efficiency of the buoyancy apparatus. We have analysed 2D and 3D shell parameters from eleven shells collected as beach findings from the Canary Islands (Gran Canaria and Fuerteventura), Western Australia, and the Maldives. All shells were scanned with a nanotom-m computer tomograph. Seven shells were likely injured by predator attacks (fishes, cephalopods or crustaceans), one specimen was infested by an endoparasite (potentially Digenea), one shell shows signs of inflammation, and one shell shows large fluctuations of chamber volumes without any signs of pathology. These fluctuations are potential indicators of a stressed environment. Pathological shells represent the most deviant morphologies of a single species and can therefore be regarded as morphological end-members.
The changes in the shell volume / chamber volume ratio were assessed in order to evaluate the functional tolerance of the buoyancy apparatus, showing that these changes had little effect. Y1 - 2018 U6 - https://doi.org/10.1016/j.dsr.2018.10.007 VL - 142 IS - 12 SP - 127 EP - 144 ER - TY - JOUR A1 - Ambellan, Felix A1 - Tack, Alexander A1 - Ehlke, Moritz A1 - Zachow, Stefan T1 - Automated Segmentation of Knee Bone and Cartilage combining Statistical Shape Knowledge and Convolutional Neural Networks: Data from the Osteoarthritis Initiative JF - Medical Image Analysis N2 - We present a method for the automated segmentation of knee bones and cartilage from magnetic resonance imaging that combines a priori knowledge of anatomical shape with Convolutional Neural Networks (CNNs). The proposed approach incorporates 3D Statistical Shape Models (SSMs) as well as 2D and 3D CNNs to achieve a robust and accurate segmentation of even highly pathological knee structures. The shape models and neural networks employed are trained using data of the Osteoarthritis Initiative (OAI) and the MICCAI grand challenge "Segmentation of Knee Images 2010" (SKI10), respectively. We evaluate our method on 40 validation and 50 submission datasets of the SKI10 challenge. For the first time, an accuracy equivalent to the inter-observer variability of human readers has been achieved in this challenge. Moreover, the quality of the proposed method is thoroughly assessed using various measures for data from the OAI, i.e. 507 manual segmentations of bone and cartilage, and 88 additional manual segmentations of cartilage. Our method yields sub-voxel accuracy for both OAI datasets. We made the 507 manual segmentations as well as our experimental setup publicly available to further aid research in the field of medical image segmentation. In conclusion, combining statistical anatomical knowledge via SSMs with localized classification via CNNs results in a state-of-the-art segmentation method for knee bones and cartilage from MRI data. Y1 - 2019 U6 - https://doi.org/10.1016/j.media.2018.11.009 VL - 52 IS - 2 SP - 109 EP - 118 ER - TY - JOUR A1 - Li, Jianning A1 - Pimentel, Pedro A1 - Szengel, Angelika A1 - Ehlke, Moritz A1 - Lamecker, Hans A1 - Zachow, Stefan A1 - Estacio, Laura A1 - Doenitz, Christian A1 - Ramm, Heiko A1 - Shi, Haochen A1 - Chen, Xiaojun A1 - Matzkin, Franco A1 - Newcombe, Virginia A1 - Ferrante, Enzo A1 - Jin, Yuan A1 - Ellis, David G. A1 - Aizenberg, Michele R. A1 - Kodym, Oldrich A1 - Spanel, Michal A1 - Herout, Adam A1 - Mainprize, James G. A1 - Fishman, Zachary A1 - Hardisty, Michael R. A1 - Bayat, Amirhossein A1 - Shit, Suprosanna A1 - Wang, Bomin A1 - Liu, Zhi A1 - Eder, Matthias A1 - Pepe, Antonio A1 - Gsaxner, Christina A1 - Alves, Victor A1 - Zefferer, Ulrike A1 - von Campe, Cord A1 - Pistracher, Karin A1 - Schäfer, Ute A1 - Schmalstieg, Dieter A1 - Menze, Bjoern H. A1 - Glocker, Ben A1 - Egger, Jan T1 - AutoImplant 2020 - First MICCAI Challenge on Automatic Cranial Implant Design JF - IEEE Transactions on Medical Imaging N2 - The aim of this paper is to provide a comprehensive overview of the MICCAI 2020 AutoImplant Challenge. The approaches and publications submitted and accepted within the challenge will be summarized and reported, highlighting common algorithmic trends and algorithmic diversity. Furthermore, the evaluation results will be presented, compared and discussed in regard to the challenge aim: seeking low-cost, fast and fully automated solutions for cranial implant design.
Based on feedback from collaborating neurosurgeons, this paper concludes by stating open issues and post-challenge requirements for intra-operative use. Y1 - 2021 U6 - https://doi.org/10.1109/TMI.2021.3077047 SN - 0278-0062 VL - 40 IS - 9 SP - 2329 EP - 2342 ER - TY - JOUR A1 - Picht, Thomas A1 - Le Calve, Maxime A1 - Tomasello, Rosario A1 - Fekonja, Lucius A1 - Gholami, Mohammad Fardin A1 - Bruhn, Matthias A1 - Zwick, Carola A1 - Rabe, Jürgen P. A1 - Müller-Birn, Claudia A1 - Vajkoczy, Peter A1 - Sauer, Igor M. A1 - Zachow, Stefan A1 - Nyakatura, John A. A1 - Ribault, Patricia A1 - Pulvermüller, Friedemann T1 - A note on neurosurgical resection and why we need to rethink cutting JF - Neurosurgery Y1 - 2021 U6 - https://doi.org/10.1093/neuros/nyab326 VL - 89 IS - 5 SP - 289 EP - 291 ER - TY - GEN A1 - Tack, Alexander A1 - Ambellan, Felix A1 - Zachow, Stefan T1 - Towards novel osteoarthritis biomarkers: Multi-criteria evaluation of 46,996 segmented knee MRI data from the Osteoarthritis Initiative (Supplementary Material) T2 - PLOS One N2 - Convolutional neural networks (CNNs) are the state-of-the-art for automated assessment of knee osteoarthritis (KOA) from medical image data. However, these methods lack interpretability, mainly focus on image texture, and cannot completely grasp the analyzed anatomies’ shapes. In this study we assess the informative value of quantitative features derived from segmentations in order to evaluate their potential as an alternative or extension to CNN-based approaches regarding multiple aspects of KOA. A fully automated method is employed to segment six anatomical structures around the knee (femoral and tibial bones, femoral and tibial cartilages, and both menisci) in 46,996 MRI scans. Based on these segmentations, quantitative features are computed, i.e., measurements such as cartilage volume, meniscal extrusion and tibial coverage, as well as geometric features based on a statistical shape encoding of the anatomies. The feature quality is assessed by investigating their association to the Kellgren-Lawrence grade (KLG), joint space narrowing (JSN), incident KOA, and total knee replacement (TKR). Using gold standard labels from the Osteoarthritis Initiative database, the balanced accuracy (BA), the area under the Receiver Operating Characteristic curve (AUC), and weighted kappa statistics are evaluated. Features based on shape encodings of femur, tibia, and menisci plus the performed measurements showed most potential as KOA biomarkers. Differentiation between healthy and severely arthritic knees yielded BAs of up to 99%; 84% were achieved for diagnosis of early KOA. Substantial agreement with weighted kappa values of 0.73, 0.73, and 0.79 was achieved for classification of the grade of medial JSN, lateral JSN, and KLG, respectively. The AUC was 0.60 and 0.75 for prediction of incident KOA and TKR within 5 years, respectively. Quantitative features from automated segmentations yield excellent results for KLG and JSN classification and show potential for incident KOA and TKR prediction. The validity of these features as KOA biomarkers should be further evaluated, especially as extensions of CNN-based approaches. To foster such developments we make all segmentations publicly available together with this publication. Y1 - 2021 U6 - https://doi.org/10.12752/8328 N1 - 46,996 automated segmentations for data from the OAI database.
VL - 16 IS - 10 ER - TY - GEN A1 - Tack, Alexander A1 - Mukhopadhyay, Anirban A1 - Zachow, Stefan T1 - Knee Menisci Segmentation using Convolutional Neural Networks: Data from the Osteoarthritis Initiative N2 - Objective: To present a novel method for automated segmentation of knee menisci from MRIs. To evaluate quantitative meniscal biomarkers for osteoarthritis (OA) estimated thereof. Method: A segmentation method employing convolutional neural networks in combination with statistical shape models was developed. Accuracy was evaluated on 88 manual segmentations. Meniscal volume, tibial coverage, and meniscal extrusion were computed and tested for differences between groups of OA, joint space narrowing (JSN), and WOMAC pain. Correlation between computed meniscal extrusion and MOAKS experts' readings was evaluated for 600 subjects. Suitability of biomarkers for predicting incident radiographic OA from baseline to 24 months was tested on a group of 552 patients (184 incident OA, 386 controls) by performing conditional logistic regression. Results: Segmentation accuracy measured as Dice Similarity Coefficient was 83.8% for medial menisci (MM) and 88.9% for lateral menisci (LM) at baseline, and 83.1% and 88.3% at 12-month follow-up. Medial tibial coverage was significantly lower for arthritic cases compared to non-arthritic ones. Medial meniscal extrusion was significantly higher for arthritic knees. A moderate correlation between automatically computed medial meniscal extrusion and experts' readings was found (ρ=0.44). Mean medial meniscal extrusion was significantly greater for incident OA cases compared to controls (1.16±0.93 mm vs. 0.83±0.92 mm; p<0.05). Conclusion: Especially for medial menisci, an excellent segmentation accuracy was achieved. Our meniscal biomarkers were validated by comparison to experts' readings as well as analysis of differences w.r.t. groups of OA, JSN, and WOMAC pain. It was confirmed that medial meniscal extrusion is a predictor for incident OA. T3 - ZIB-Report - 18-15 Y1 - 2018 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-68038 SN - 1438-0064 VL - 26 IS - 5 SP - 680 EP - 688 ER - TY - GEN A1 - Ehlke, Moritz A1 - Ramm, Heiko A1 - Lamecker, Hans A1 - Hege, Hans-Christian A1 - Zachow, Stefan T1 - Fast Generation of Virtual X-ray Images from Deformable Tetrahedral Meshes N2 - We propose a novel GPU-based approach to render virtual X-ray projections of deformable tetrahedral meshes. These meshes represent the shape and the internal density distribution of a particular anatomical structure and are derived from statistical shape and intensity models (SSIMs). We apply our method to improve the geometric reconstruction of 3D anatomy (e.g. pelvic bone) from 2D X-ray images. For that purpose, shape and density of a tetrahedral mesh are varied and virtual X-ray projections are generated within an optimization process until the similarity between the computed virtual X-ray and the respective anatomy depicted in a given clinical X-ray is maximized. The OpenGL implementation presented in this work deforms and projects tetrahedral meshes of high resolution (200,000+ tetrahedra) at interactive rates. It generates virtual X-rays that accurately depict the density distribution of an anatomy of interest. Compared to existing methods that accumulate X-ray attenuation in deformable meshes, our novel approach significantly boosts the deformation/projection performance.
The proposed projection algorithm scales better with respect to mesh resolution and complexity of the density distribution, and the combined deformation and projection on the GPU scales better with respect to the number of deformation parameters. The gain in performance allows for a larger number of cycles in the optimization process. Consequently, it reduces the risk of being stuck in a local optimum. We believe that our approach contributes to orthopedic surgery, where 3D anatomy information needs to be extracted from 2D X-rays to support surgeons in better planning joint replacements. T3 - ZIB-Report - 13-38 KW - digitally reconstructed radiographs KW - volume rendering KW - mesh deformation KW - statistical shape and intensity models KW - image registration KW - GPU acceleration Y1 - 2013 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-41896 SN - 1438-0064 ER - TY - GEN A1 - Ehlke, Moritz A1 - Frenzel, Thomas A1 - Ramm, Heiko A1 - Lamecker, Hans A1 - Akbari Shandiz, Mohsen A1 - Anglin, Carolyn A1 - Zachow, Stefan T1 - Robust Measurement of Natural Acetabular Orientation from AP Radiographs using Articulated 3D Shape and Intensity Models T3 - ZIB-Report - 14-12 KW - articulated shape and intensity models KW - 3D reconstruction KW - acetabular orientation KW - image registration Y1 - 2014 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-49824 SN - 1438-0064 ER - TY - GEN A1 - Ramm, Heiko A1 - Morillo Victoria, Oscar Salvador A1 - Todt, Ingo A1 - Schirmacher, Hartmut A1 - Ernst, Arneborg A1 - Zachow, Stefan A1 - Lamecker, Hans T1 - Visual Support for Positioning Hearing Implants N2 - We present a software planning tool that provides intuitive visual feedback for finding suitable positions of hearing implants in the human temporal bone. After an automatic reconstruction of the temporal bone anatomy, the tool pre-positions the implant and allows the user to adjust its position interactively with simple 2D dragging and rotation operations on the bone's surface. During this procedure, visual elements like warning labels on the implant or color-encoded bone density information on the bone geometry provide guidance for the determination of a suitable fit. T3 - ZIB-Report - 13-53 KW - bone anchored hearing implant KW - surgery planning KW - segmentation KW - visualization Y1 - 2013 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-42495 SN - 1438-0064 ER - TY - GEN A1 - Stalling, Detlev A1 - Seebass, Martin A1 - Zachow, Stefan T1 - Mehrschichtige Oberflächenmodelle zur computergestützten Planung in der Chirurgie N2 - Polygonal skull models are an important tool for computer-assisted planning in plastic surgery. We describe how such models can be generated automatically from high-resolution CT datasets. A locally controllable simplification algorithm reduces the models to a degree that allows interactive work even on smaller graphics computers. The use of a special transparency model provides an unobstructed view of the bone structures relevant for planning while at the same time letting the user recognize the outline of the patient's head.
T3 - ZIB-Report - TR-98-05 KW - Isosurfaces KW - Simplification KW - Transparency Y1 - 1998 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-5661 ER - TY - GEN A1 - Zilske, Michael A1 - Lamecker, Hans A1 - Zachow, Stefan T1 - Adaptive Remeshing of Non-Manifold Surfaces N2 - We present a unified approach for consistent remeshing of arbitrary non-manifold triangle meshes with additional user-defined feature lines, which together form a feature skeleton. Our method is based on local operations only and produces meshes of high regularity and triangle quality while preserving the geometry as well as topology of the feature skeleton and the input mesh. T3 - ZIB-Report - 07-01 KW - remeshing KW - non-manifold KW - mesh quality optimization Y1 - 2007 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-9445 ER - TY - GEN A1 - Hoffmann, René A1 - Schultz, Julia A. A1 - Schellhorn, Rico A1 - Rybacki, Erik A1 - Keupp, Helmut A1 - Lemanis, Robert A1 - Zachow, Stefan T1 - Non-invasive imaging methods applied to neo- and paleo-ontological cephalopod research N2 - Several non-invasive methods are common practice in natural sciences today. Here we present how they can be applied and contribute to current topics in cephalopod (paleo-) biology. Different methods will be compared in terms of time necessary to acquire the data, amount of data, accuracy/resolution, minimum/maximum size of objects that can be studied, the degree of post-processing needed and availability. The main application of the methods is seen in morphometry and volumetry of cephalopod shells. In particular, we present a method for precise buoyancy calculation. To this end, cephalopod shells were scanned together with different reference bodies, an approach developed in medical sciences. It is necessary to know the volume of the reference bodies, which should have absorption properties similar to those of the object of interest. Exact volumes can be obtained from surface scanning. Depending on the dimensions of the study object, different computed tomography techniques were applied. T3 - ZIB-Report - 14-18 Y1 - 2014 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-50300 SN - 1438-0064 ER - TY - GEN A1 - Tycowicz, Christoph von A1 - Ambellan, Felix A1 - Mukhopadhyay, Anirban A1 - Zachow, Stefan T1 - A Riemannian Statistical Shape Model using Differential Coordinates N2 - We propose a novel Riemannian framework for statistical analysis of shapes that is able to account for the nonlinearity in shape variation. By adopting a physical perspective, we introduce a differential representation that puts the local geometric variability into focus. We model these differential coordinates as elements of a Lie group, thereby endowing our shape space with a non-Euclidean structure. A key advantage of our framework is that statistics in a manifold shape space become numerically tractable, improving performance by several orders of magnitude over the state of the art. We show that our Riemannian model is well suited for the identification of intra-population variability as well as inter-population differences. In particular, we demonstrate the superiority of the proposed model in experiments on specificity and generalization ability. We further derive a statistical shape descriptor that outperforms the standard Euclidean approach in terms of shape-based classification of morphological disorders.
T3 - ZIB-Report - 16-69 Y1 - 2016 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-61175 UR - https://opus4.kobv.de/opus4-zib/frontdoor/index/index/docId/6485 SN - 1438-0064 ER - TY - GEN A1 - Ehlke, Moritz A1 - Ramm, Heiko A1 - Lamecker, Hans A1 - Zachow, Stefan T1 - Efficient projection and deformation of volumetric intensity models for accurate simulation of X-ray images N2 - We present an efficient GPU-based method to generate virtual X-ray images from tetrahedral meshes which are associated with attenuation values. In addition, a novel approach is proposed that performs the model deformation on the GPU. The tetrahedral grids are derived from volumetric statistical shape and intensity models (SSIMs) and describe anatomical structures. Our research aims at reconstructing 3D anatomical shapes by comparing virtual X-ray images generated using our novel approach with clinical data while varying the shape and density of the SSIM in an optimization process. We assume that a deformed SSIM adequately represents an anatomy of interest when the similarity between the virtual and the clinical X-ray image is maximized. The OpenGL implementation presented here generates accurate (virtual) X-ray images at interactive rates, thus qualifying it for its use in the reconstruction process. T3 - ZIB-Report - 12-40 KW - Digitally Reconstructed Radiograph (DRR), Anatomy Reconstruction, Statistical Shape and Intensity Model (SSIM), GPU acceleration Y1 - 2012 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-16580 SN - 1438-0064 ER - TY - GEN A1 - Tack, Alexander A1 - Shestakov, Alexey A1 - Lüdke, David A1 - Zachow, Stefan T1 - A deep multi-task learning method for detection of meniscal tears in MRI data from the Osteoarthritis Initiative database N2 - We present a novel and computationally efficient method for the detection of meniscal tears in Magnetic Resonance Imaging (MRI) data. Our method is based on a Convolutional Neural Network (CNN) that operates on a complete 3D MRI scan. Our approach detects the presence of meniscal tears in three anatomical sub-regions (anterior horn, meniscal body, posterior horn) for both the Medial Meniscus (MM) and the Lateral Meniscus (LM) individually. For optimal performance of our method, we investigate how to preprocess the MRI data or how to train the CNN such that only relevant information within a Region of Interest (RoI) of the data volume is taken into account for meniscal tear detection. We propose meniscal tear detection combined with a bounding box regressor in a multi-task deep learning framework to let the CNN implicitly consider the corresponding RoIs of the menisci. We evaluate the accuracy of our CNN-based meniscal tear detection approach on 2,399 Double Echo Steady-State (DESS) MRI scans from the Osteoarthritis Initiative database. In addition, to show that our method is capable of generalizing to other MRI sequences, we also adapt our model to Intermediate-Weighted Turbo Spin-Echo (IW TSE) MRI scans. To judge the quality of our approaches, Receiver Operating Characteristic (ROC) curves and Area Under the Curve (AUC) values are evaluated for both MRI sequences. For the detection of tears in DESS MRI, our method reaches AUC values of 0.94, 0.93, 0.93 (anterior horn, body, posterior horn) in MM and 0.96, 0.94, 0.91 in LM. For the detection of tears in IW TSE MRI data, our method yields AUC values of 0.84, 0.88, 0.86 in MM and 0.95, 0.91, 0.90 in LM.
In conclusion, the presented method achieves high accuracy for detecting meniscal tears in both DESS and IW TSE MRI data. Furthermore, our method can be easily trained and applied to other MRI sequences. T3 - ZIB-Report - 21-33 Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-84415 SN - 1438-0064 ER - TY - JOUR A1 - Tack, Alexander A1 - Shestakov, Alexey A1 - Lüdke, David A1 - Zachow, Stefan T1 - A deep multi-task learning method for detection of meniscal tears in MRI data from the Osteoarthritis Initiative database JF - Frontiers in Bioengineering and Biotechnology, section Biomechanics N2 - We present a novel and computationally efficient method for the detection of meniscal tears in Magnetic Resonance Imaging (MRI) data. Our method is based on a Convolutional Neural Network (CNN) that operates on a complete 3D MRI scan. Our approach detects the presence of meniscal tears in three anatomical sub-regions (anterior horn, meniscal body, posterior horn) for both the Medial Meniscus (MM) and the Lateral Meniscus (LM) individually. For optimal performance of our method, we investigate how to preprocess the MRI data or how to train the CNN such that only relevant information within a Region of Interest (RoI) of the data volume is taken into account for meniscal tear detection. We propose meniscal tear detection combined with a bounding box regressor in a multi-task deep learning framework to let the CNN implicitly consider the corresponding RoIs of the menisci. We evaluate the accuracy of our CNN-based meniscal tear detection approach on 2,399 Double Echo Steady-State (DESS) MRI scans from the Osteoarthritis Initiative database. In addition, to show that our method is capable of generalizing to other MRI sequences, we also adapt our model to Intermediate-Weighted Turbo Spin-Echo (IW TSE) MRI scans. To judge the quality of our approaches, Receiver Operating Characteristic (ROC) curves and Area Under the Curve (AUC) values are evaluated for both MRI sequences. For the detection of tears in DESS MRI, our method reaches AUC values of 0.94, 0.93, 0.93 (anterior horn, body, posterior horn) in MM and 0.96, 0.94, 0.91 in LM. For the detection of tears in IW TSE MRI data, our method yields AUC values of 0.84, 0.88, 0.86 in MM and 0.95, 0.91, 0.90 in LM. In conclusion, the presented method achieves high accuracy for detecting meniscal tears in both DESS and IW TSE MRI data. Furthermore, our method can be easily trained and applied to other MRI sequences. Y1 - 2021 U6 - https://doi.org/10.3389/fbioe.2021.747217 SP - 28 EP - 41 ER - TY - CHAP A1 - Lüdke, David A1 - Amiranashvili, Tamaz A1 - Ambellan, Felix A1 - Ezhov, Ivan A1 - Menze, Bjoern A1 - Zachow, Stefan T1 - Landmark-free Statistical Shape Modeling via Neural Flow Deformations T2 - Medical Image Computing and Computer Assisted Intervention - MICCAI 2022 N2 - Statistical shape modeling aims at capturing shape variations of an anatomical structure that occur within a given population. Shape models are employed in many tasks, such as shape reconstruction and image segmentation, but also shape generation and classification. Existing shape priors either require dense correspondence between training examples or lack robustness and topological guarantees. We present FlowSSM, a novel shape modeling approach that learns shape variability without requiring dense correspondence between training instances. It relies on a hierarchy of continuous deformation flows, which are parametrized by a neural network. 
Our model outperforms state-of-the-art methods in providing an expressive and robust shape prior for distal femur and liver. We show that the emerging latent representation is discriminative by separating healthy from pathological shapes. Ultimately, we demonstrate its effectiveness on two shape reconstruction tasks from partial data. Our source code is publicly available (https://github.com/davecasp/flowssm). Y1 - 2022 U6 - https://doi.org/10.1007/978-3-031-16434-7_44 VL - 13432 PB - Springer, Cham ER - TY - JOUR A1 - Tack, Alexander A1 - Ambellan, Felix A1 - Zachow, Stefan T1 - Towards novel osteoarthritis biomarkers: Multi-criteria evaluation of 46,996 segmented knee MRI data from the Osteoarthritis Initiative JF - PLOS One N2 - Convolutional neural networks (CNNs) are the state-of-the-art for automated assessment of knee osteoarthritis (KOA) from medical image data. However, these methods lack interpretability, mainly focus on image texture, and cannot completely grasp the analyzed anatomies’ shapes. In this study we assess the informative value of quantitative features derived from segmentations in order to evaluate their potential as an alternative or extension to CNN-based approaches regarding multiple aspects of KOA. Six anatomical structures around the knee (femoral and tibial bones, femoral and tibial cartilages, and both menisci) are segmented in 46,996 MRI scans. Based on these segmentations, quantitative features are computed, i.e., measurements such as cartilage volume, meniscal extrusion and tibial coverage, as well as geometric features based on a statistical shape encoding of the anatomies. The feature quality is assessed by investigating their association to the Kellgren-Lawrence grade (KLG), joint space narrowing (JSN), incident KOA, and total knee replacement (TKR). Using gold standard labels from the Osteoarthritis Initiative database, the balanced accuracy (BA), the area under the Receiver Operating Characteristic curve (AUC), and weighted kappa statistics are evaluated. Features based on shape encodings of femur, tibia, and menisci plus the performed measurements showed most potential as KOA biomarkers. Differentiation between non-arthritic and severely arthritic knees yielded BAs of up to 99%; 84% were achieved for diagnosis of early KOA. Weighted kappa values of 0.73, 0.72, and 0.78 were achieved for classification of the grade of medial JSN, lateral JSN, and KLG, respectively. The AUC was 0.61 and 0.76 for prediction of incident KOA and TKR within one year, respectively. Quantitative features from automated segmentations provide novel biomarkers for KLG and JSN classification and show potential for incident KOA and TKR prediction. The validity of these features should be further evaluated, especially as extensions of CNN-based approaches. To foster such developments we make all segmentations publicly available together with this publication. Y1 - 2021 U6 - https://doi.org/10.1371/journal.pone.0258855 VL - 16 IS - 10 ER - TY - CHAP A1 - Siqueira Rodrigues, Lucas A1 - Nyakatura, John A1 - Zachow, Stefan A1 - Israel, Johann Habakuk T1 - An Immersive Virtual Paleontology Application T2 - 13th International Conference on Human Haptic Sensing and Touch Enabled Computer Applications, EuroHaptics 2022 N2 - Virtual paleontology studies digital fossils through data analysis and visualization systems.
The discipline is growing in relevance owing to the evident advantages of non-destructive imaging techniques over traditional paleontological methods, and it has made significant advancements during the last few decades. However, virtual paleontology still faces a number of technological challenges, amongst which are interaction shortcomings of image segmentation applications. Whereas automated segmentation methods are seldom applicable to fossil datasets, manual exploration of these specimens is extremely time-consuming, as it requires working through three-dimensional data with two-dimensional visualization and interaction means. This paper presents an application that applies virtual reality and haptics to virtual paleontology in order to evolve its interaction paradigms and address some of its limitations. We provide a brief overview of the challenges faced by virtual paleontology practitioners, a description of our immersive virtual paleontology prototype, and the results of a heuristic evaluation of our design. Y1 - 2022 U6 - https://doi.org/10.1007/978-3-031-06249-0 SP - 478 EP - 481 ER - TY - JOUR A1 - Sekuboyina, Anjany A1 - Husseini, Malek E. A1 - Bayat, Amirhossein A1 - Löffler, Maximilian A1 - Liebl, Hans A1 - Li, Hongwei A1 - Tetteh, Giles A1 - Kukačka, Jan A1 - Payer, Christian A1 - Štern, Darko A1 - Urschler, Martin A1 - Chen, Maodong A1 - Cheng, Dalong A1 - Lessmann, Nikolas A1 - Hu, Yujin A1 - Wang, Tianfu A1 - Yang, Dong A1 - Xu, Daguang A1 - Ambellan, Felix A1 - Amiranashvili, Tamaz A1 - Ehlke, Moritz A1 - Lamecker, Hans A1 - Lehnert, Sebastian A1 - Lirio, Marilia A1 - de Olaguer, Nicolás Pérez A1 - Ramm, Heiko A1 - Sahu, Manish A1 - Tack, Alexander A1 - Zachow, Stefan A1 - Jiang, Tao A1 - Ma, Xinjun A1 - Angerman, Christoph A1 - Wang, Xin A1 - Brown, Kevin A1 - Kirszenberg, Alexandre A1 - Puybareau, Élodie A1 - Chen, Di A1 - Bai, Yiwei A1 - Rapazzo, Brandon H. A1 - Yeah, Timyoas A1 - Zhang, Amber A1 - Xu, Shangliang A1 - Hou, Feng A1 - He, Zhiqiang A1 - Zeng, Chan A1 - Xiangshang, Zheng A1 - Liming, Xu A1 - Netherton, Tucker J. A1 - Mumme, Raymond P. A1 - Court, Laurence E. A1 - Huang, Zixun A1 - He, Chenhang A1 - Wang, Li-Wen A1 - Ling, Sai Ho A1 - Huynh, Lê Duy A1 - Boutry, Nicolas A1 - Jakubicek, Roman A1 - Chmelik, Jiri A1 - Mulay, Supriti A1 - Sivaprakasam, Mohanasankar A1 - Paetzold, Johannes C. A1 - Shit, Suprosanna A1 - Ezhov, Ivan A1 - Wiestler, Benedikt A1 - Glocker, Ben A1 - Valentinitsch, Alexander A1 - Rempfler, Markus A1 - Menze, Björn H. A1 - Kirschke, Jan S. T1 - VerSe: A Vertebrae labelling and segmentation benchmark for multi-detector CT images JF - Medical Image Analysis N2 - Vertebral labelling and segmentation are two fundamental tasks in an automated spine processing pipeline. Reliable and accurate processing of spine images is expected to benefit clinical decision support systems for diagnosis, surgery planning, and population-based analysis of spine and bone health. However, designing automated algorithms for spine processing is challenging predominantly due to considerable variations in anatomy and acquisition protocols and due to a severe shortage of publicly available data. Addressing these limitations, the Large Scale Vertebrae Segmentation Challenge (VerSe) was organised in conjunction with the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) in 2019 and 2020, with a call for algorithms tackling the labelling and segmentation of vertebrae.
Two datasets containing a total of 374 multi-detector CT scans from 355 patients were prepared, and 4505 vertebrae were individually annotated at voxel level by a human-machine hybrid algorithm (https://osf.io/nqjyw/, https://osf.io/t98fz/). A total of 25 algorithms were benchmarked on these datasets. In this work, we present the results of this evaluation and further investigate the performance variation at the vertebra level, scan level, and across different fields of view. We also evaluate the generalisability of the approaches to an implicit domain shift in data by evaluating the top-performing algorithms of one challenge iteration on data from the other iteration. The principal takeaway from VerSe is that the performance of an algorithm in labelling and segmenting a spine scan hinges on its ability to correctly identify vertebrae in cases of rare anatomical variations. The VerSe content and code can be accessed at: https://github.com/anjany/verse. Y1 - 2021 U6 - https://doi.org/10.1016/j.media.2021.102166 VL - 73 ER - TY - JOUR A1 - Glatzeder, Korbinian A1 - Komnik, Igor A1 - Ambellan, Felix A1 - Zachow, Stefan A1 - Potthast, Wolfgang T1 - Dynamic pressure analysis of novel interpositional knee spacer implants in 3D-printed human knee models JF - Scientific Reports N2 - Alternative treatment methods for knee osteoarthritis (OA) are in demand to delay the young (< 50 years) patient’s need for osteotomy or knee replacement. Novel interpositional knee spacers, shaped based on a statistical shape model (SSM) approach and made of polyurethane (PU), were developed as a minimally invasive method to treat medial OA in the knee. The implant is intended to reduce peak strains and pain, restore the stability of the knee, correct the malalignment of a varus knee, and improve joint function and gait. Firstly, the spacers were tested in artificial knee models. It is assumed that by application of a spacer, a significant reduction in stress values and a significant increase in the contact area in the medial compartment of the knee will be registered. We performed a biomechanical analysis of the effect of novel interpositional knee spacer implants on pressure distribution in 3D-printed knee model replicas: the primary purpose was to assess the medial joint contact stress-related biomechanics. A secondary purpose was a better understanding of the medial/lateral redistribution of joint loading. Six 3D-printed knee models were reproduced from cadaveric leg computed tomography. Each of four spacer implants was tested in each knee geometry under realistic arthrokinematic dynamic loading conditions to examine the pressure distribution in the knee joint. All spacers reduced mean stress values by 84–88% and peak stress values by 524–704% in the medial knee joint compartment compared to the non-spacer test condition. The contact area was enlarged by 462–627% as a result of the inserted spacers. Concerning the appreciable contact stress reduction and enlargement of the contact area in the medial knee joint compartment, the premises are in place for testing the implants directly on human knee cadavers to gain further insights into a possible tool for treating medial knee osteoarthritis. Y1 - 2022 U6 - https://doi.org/10.1038/s41598-022-20463-6 VL - 12 ER - TY - JOUR A1 - Sekuboyina, Anjany A1 - Bayat, Amirhossein A1 - Husseini, Malek E.
A1 - Löffler, Maximilian A1 - Li, Hongwei A1 - Tetteh, Giles A1 - Kukačka, Jan A1 - Payer, Christian A1 - Štern, Darko A1 - Urschler, Martin A1 - Chen, Maodong A1 - Cheng, Dalong A1 - Lessmann, Nikolas A1 - Hu, Yujin A1 - Wang, Tianfu A1 - Yang, Dong A1 - Xu, Daguang A1 - Ambellan, Felix A1 - Amiranashvili, Tamaz A1 - Ehlke, Moritz A1 - Lamecker, Hans A1 - Lehnert, Sebastian A1 - Lirio, Marilia A1 - de Olaguer, Nicolás Pérez A1 - Ramm, Heiko A1 - Sahu, Manish A1 - Tack, Alexander A1 - Zachow, Stefan A1 - Jiang, Tao A1 - Ma, Xinjun A1 - Angerman, Christoph A1 - Wang, Xin A1 - Wei, Qingyue A1 - Brown, Kevin A1 - Wolf, Matthias A1 - Kirszenberg, Alexandre A1 - Puybareau, Élodie A1 - Valentinitsch, Alexander A1 - Rempfler, Markus A1 - Menze, Björn H. A1 - Kirschke, Jan S. T1 - VerSe: A Vertebrae Labelling and Segmentation Benchmark for Multi-detector CT Images JF - arXiv Y1 - 2020 ER - TY - CHAP A1 - Amiranashvili, Tamaz A1 - Lüdke, David A1 - Li, Hongwei A1 - Menze, Bjoern A1 - Zachow, Stefan T1 - Learning Shape Reconstruction from Sparse Measurements with Neural Implicit Functions T2 - Medical Imaging with Deep Learning N2 - Reconstructing anatomical shapes from sparse or partial measurements relies on prior knowledge of shape variations that occur within a given population. Such shape priors are learned from example shapes, obtained by segmenting volumetric medical images. For existing models, the resolution of a learned shape prior is limited to the resolution of the training data. However, in clinical practice, volumetric images are often acquired with highly anisotropic voxel sizes, e.g. to reduce image acquisition time in MRI or radiation exposure in CT imaging. The missing shape information between the slices prevents existing methods from learning a high-resolution shape prior. We introduce a method for high-resolution shape reconstruction from sparse measurements without relying on high-resolution ground truth for training. Our method is based on neural implicit shape representations and learns a continuous shape prior only from highly anisotropic segmentations. Furthermore, it is able to learn from shapes with a varying field of view and can reconstruct from various sparse input configurations. We demonstrate its effectiveness on two anatomical structures, vertebra and femur, and successfully reconstruct high-resolution shapes from sparse segmentations, using as few as three orthogonal slices. Y1 - 2022 ER - TY - JOUR A1 - Wilson, David A1 - Anglin, Carolyn A1 - Ambellan, Felix A1 - Grewe, Carl Martin A1 - Tack, Alexander A1 - Lamecker, Hans A1 - Dunbar, Michael A1 - Zachow, Stefan T1 - Validation of three-dimensional models of the distal femur created from surgical navigation point cloud data for intraoperative and postoperative analysis of total knee arthroplasty JF - International Journal of Computer Assisted Radiology and Surgery N2 - Purpose: Despite the success of total knee arthroplasty, there continues to be a significant proportion of patients who are dissatisfied. One explanation may be a shape mismatch between pre- and post-operative distal femurs. The purpose of this study was to investigate a method to match a statistical shape model (SSM) to intra-operatively acquired point cloud data from a surgical navigation system, and to validate it against the pre-operative magnetic resonance imaging (MRI) data from the same patients. Methods: A total of 10 patients who underwent navigated total knee arthroplasty also had an MRI scan less than 2 months pre-operatively.
The standard surgical protocol was followed, which included partial digitization of the distal femur. Two different methods were employed to fit the SSM to the digitized point cloud data, based on (1) Iterative Closest Points (ICP) and (2) Gaussian Mixture Models (GMM). The available MRI data were manually segmented, and the reconstructed three-dimensional surfaces were used as ground truth against which the statistical shape model fit was compared. Results: For both approaches, the difference between the statistical shape model-generated femur and the surface generated from MRI segmentation averaged less than 1.7 mm, with maximum errors occurring in less clinically important areas. Conclusion: The results demonstrated good correspondence with the distal femoral morphology even in cases of sparse data sets. Application of this technique will allow for measurement of mismatch between pre- and post-operative femurs retrospectively on any case done using the surgical navigation system and could be integrated into the surgical navigation unit to provide real-time feedback. Y1 - 2017 UR - https://link.springer.com/content/pdf/10.1007%2Fs11548-017-1630-5.pdf U6 - https://doi.org/10.1007/s11548-017-1630-5 VL - 12 IS - 12 SP - 2097 EP - 2105 PB - Springer ER - TY - GEN A1 - Grewe, Carl Martin A1 - Zachow, Stefan ED - Doll, Nikola ED - Bredekamp, Horst ED - Schäffner, Wolfgang T1 - Face to Face-Interface T2 - +ultra. Knowledge & Gestaltung Y1 - 2017 SP - 320 EP - 321 PB - Seemann Henschel ER - TY - CHAP A1 - Grewe, Carl Martin A1 - Zachow, Stefan T1 - Fully Automated and Highly Accurate Dense Correspondence for Facial Surfaces T2 - Computer Vision – ECCV 2016 Workshops N2 - We present a novel framework for fully automated and highly accurate determination of facial landmarks and dense correspondence, e.g. a topologically identical mesh of arbitrary resolution, across the entire surface of 3D face models. For robustness and reliability of the proposed approach, we combine 2D landmark detectors and 3D statistical shape priors with a variational matching method. Instead of matching faces in the spatial domain only, we employ image registration to align the 2D parametrization of the facial surface to a planar template we call the Unified Facial Parameter Domain (ufpd). This allows us to simultaneously match salient photometric and geometric facial features using robust image similarity measures while reasonably constraining geometric distortion in regions with less significant features. We demonstrate the accuracy of the dense correspondence established by our framework on the BU3DFE database with 2500 facial surfaces and show that our framework outperforms current state-of-the-art methods with respect to the fully automated location of facial landmarks.
Y1 - 2016 U6 - https://doi.org/10.1007/978-3-319-48881-3_38 VL - 9914 SP - 552 EP - 568 PB - Springer International Publishing ER - TY - GEN A1 - Wilson, David A1 - Bücher, Pia A1 - Grewe, Carl Martin A1 - Anglin, Carolyn A1 - Zachow, Stefan A1 - Dunbar, Michael T1 - Validation of Three Dimensional Models of the Distal Femur Created from Surgical Navigation Point Cloud Data T2 - 15th Annual Meeting of the International Society for Computer Assisted Orthopaedic Surgery (CAOS) Y1 - 2015 ER - TY - JOUR A1 - Grewe, Carl Martin A1 - Schreiber, Lisa A1 - Zachow, Stefan T1 - Fast and Accurate Digital Morphometry of Facial Expressions JF - Facial Plastic Surgery Y1 - 2015 U6 - https://doi.org/10.1055/s-0035-1564720 VL - 31 IS - 05 SP - 431 EP - 438 PB - Thieme Medical Publishers CY - New York ER - TY - GEN A1 - Grewe, Carl Martin A1 - Lamecker, Hans A1 - Zachow, Stefan ED - Hermanussen, Michael T1 - Landmark-based Statistical Shape Analysis T2 - Auxology - Studying Human Growth and Development Y1 - 2013 UR - http://www.schweizerbart.de/publications/detail/isbn/9783510652785 SP - 199 EP - 201 PB - Schweizerbart Verlag, Stuttgart ER - TY - GEN A1 - Wilson, David A1 - Bücher, Pia A1 - Grewe, Carl Martin A1 - Mocanu, Valentin A1 - Anglin, Carolyn A1 - Zachow, Stefan A1 - Dunbar, Michael T1 - Validation of Three Dimensional Models of the Distal Femur Created from Surgical Navigation Data T2 - Orthopedic Research Society Annual Meeting Y1 - 2015 CY - Las Vegas, Nevada ER - TY - GEN A1 - Grewe, Carl Martin A1 - Lamecker, Hans A1 - Zachow, Stefan T1 - Digital morphometry: The Potential of Statistical Shape Models T2 - Anthropologischer Anzeiger. Journal of Biological and Clinical Anthropology Y1 - 2011 SP - 506 EP - 506 ER - TY - GEN A1 - Ehlke, Moritz A1 - Heyland, Mark A1 - Märdian, Sven A1 - Duda, Georg A1 - Zachow, Stefan T1 - Assessing the Relative Positioning of an Osteosynthesis Plate to the Patient-Specific Femoral Shape from Plain 2D Radiographs N2 - We present a novel method to derive the surface distance of an osteosynthesis plate w.r.t. the patient-specific surface of the distal femur based on 2D X-ray images. Our goal is to study from clinical data how the plate-to-bone distance affects bone healing. The patient-specific 3D shape of the femur is, however, seldom recorded for cases of femoral osteosynthesis since this typically requires Computed Tomography (CT), which comes at high cost and radiation dose. Our method instead utilizes two postoperative X-ray images to derive the femoral shape and thus can be applied to radiographs that are taken in clinical routine for follow-up. First, the implant geometry is used as a calibration object to relate the implant and the individual X-ray images spatially in a virtual X-ray setup. In a second step, the patient-specific femoral shape and pose are reconstructed in the virtual setup by fitting a deformable statistical shape and intensity model (SSIM) to the images. The relative positioning between femur and implant is then assessed in terms of displacement between the reconstructed 3D shape of the femur and the plate. A preliminary evaluation based on 4 cadaver datasets shows that the method derives the plate-to-bone distance with a mean absolute error of less than 1 mm and a maximum error of 4.7 mm compared to ground truth from CT. We believe that the approach presented in this paper constitutes a meaningful tool to elucidate the effect of implant positioning on fracture healing.
T3 - ZIB-Report - 15-21 KW - 3d-reconstruction from 2d X-rays KW - statistical shape and intensity models KW - fracture fixation of the distal femur KW - pose estimation Y1 - 2015 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-54268 SN - 1438-0064 ER - TY - JOUR A1 - Taylor, William R. A1 - Pöpplau, Berry M. A1 - König, Christian A1 - Ehrig, Rainald A1 - Zachow, Stefan A1 - Duda, Georg A1 - Heller, Markus O. T1 - The medial-lateral force distribution in the ovine stifle joint during walking JF - Journal of Orthopaedic Research Y1 - 2011 U6 - https://doi.org/10.1002/jor.21254 VL - 29 IS - 4 SP - 567 EP - 571 ER - TY - GEN A1 - Grewe, C. Martin A1 - Zachow, Stefan T1 - Release of the FexMM for the Open Virtual Mirror Framework N2 - THIS MODEL IS FOR NON-COMMERCIAL RESEARCH PURPOSES. ONLY MEMBERS OF UNIVERSITIES OR NON-COMMERCIAL RESEARCH INSTITUTES ARE ELIGIBLE TO APPLY. 1. Download, fill, and sign the form available from: https://media.githubusercontent.com/media/mgrewe/ovmf/main/data/fexmm_license_agreement.pdf 2. Send the signed form to: fexmm@zib.de NOTE: Use an official email address of your institution for the request. Y1 - 2021 U6 - https://doi.org/10.12752/8532 ER - TY - JOUR A1 - Grewe, Carl Martin A1 - Liu, Tuo A1 - Hildebrandt, Andrea A1 - Zachow, Stefan T1 - The Open Virtual Mirror Framework for Enfacement Illusions - Enhancing the Sense of Agency With Avatars That Imitate Facial Expressions JF - Behavior Research Methods Y1 - 2022 U6 - https://doi.org/10.3758/s13428-021-01761-9 PB - Springer ER - TY - JOUR A1 - Grewe, Carl Martin A1 - Liu, Tuo A1 - Kahl, Christoph A1 - Hildebrandt, Andrea A1 - Zachow, Stefan T1 - Statistical Learning of Facial Expressions Improves Realism of Animated Avatar Faces JF - Frontiers in Virtual Reality Y1 - 2021 U6 - https://doi.org/10.3389/frvir.2021.619811 VL - 2 SP - 1 EP - 13 PB - Frontiers ER - TY - GEN A1 - Grewe, Carl Martin A1 - Le Roux, Gabriel A1 - Pilz, Sven-Kristofer A1 - Zachow, Stefan T1 - Spotting the Details: The Various Facets of Facial Expressions N2 - 3D Morphable Models (MM) are a popular tool for analysis and synthesis of facial expressions. They represent plausible variations in facial shape and appearance within a low-dimensional parameter space. Fitted to a face scan, the model's parameters compactly encode its expression patterns. This expression code can be used, for instance, as a feature in automatic facial expression recognition. For accurate classification, an MM that can adequately represent the various characteristic facets and variants of each expression is necessary. Currently available MMs are limited in the diversity of expression patterns. We present a novel high-quality Facial Expression Morphable Model built from a large-scale face database as a tool for expression analysis and synthesis. Establishment of accurate dense correspondence, up to the finest skin features, enables a detailed statistical analysis of facial expressions. Various characteristic shape patterns are identified for each expression. The results of our analysis give rise to a new facial expression code. We demonstrate the advantages of such a code for the automatic recognition of expressions, and compare the accuracy of our classifier to the state of the art.
T3 - ZIB-Report - 18-06 Y1 - 2018 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-67696 SN - 1438-0064 ER - TY - CHAP A1 - Grewe, Carl Martin A1 - le Roux, Gabriel A1 - Pilz, Sven-Kristofer A1 - Zachow, Stefan T1 - Spotting the Details: The Various Facets of Facial Expressions T2 - IEEE International Conference on Automatic Face and Gesture Recognition Y1 - 2018 U6 - https://doi.org/10.1109/FG.2018.00049 SP - 286 EP - 293 ER - TY - CHAP A1 - Siqueira Rodrigues, Lucas A1 - Riehm, Felix A1 - Zachow, Stefan A1 - Israel, Johann Habakuk T1 - VoxSculpt: An Open-Source Voxel Library for Tomographic Volume Sculpting in Virtual Reality T2 - 2023 9th International Conference on Virtual Reality (ICVR), Xianyang, China, 2023 N2 - Manual processing of tomographic data volumes, such as interactive image segmentation in medicine or paleontology, is considered a time-consuming and cumbersome endeavor. Immersive volume sculpting stands as a potential solution to improve its efficiency and intuitiveness. However, current open-source software solutions do not yield the required performance and functionalities. We address this issue by contributing a novel open-source game engine voxel library that supports real-time immersive volume sculpting. Our design leverages GPU instancing, parallel computing, and a chunk-based data structure to optimize collision detection and rendering. We have implemented features that enable fast voxel interaction and improve precision. Our benchmark evaluation indicates that our implementation offers a significant improvement over the state of the art and can render and modify millions of visible voxels while maintaining stable performance for real-time interaction in virtual reality. Y1 - 2023 U6 - https://doi.org/10.1109/ICVR57957.2023.10169420 SP - 515 EP - 523 ER - TY - JOUR A1 - Wagendorf, Oliver A1 - Nahles, Susanne A1 - Vach, Kirstin A1 - Kernen, Florian A1 - Zachow, Stefan A1 - Heiland, Max A1 - Flügge, Tabea T1 - The impact of teeth and dental restorations on gray value distribution in cone-beam computed tomography - a pilot study JF - International Journal of Implant Dentistry N2 - Purpose: To investigate the influence of teeth and dental restorations on the facial skeleton's gray value distributions in cone-beam computed tomography (CBCT). Methods: Gray value selection for the upper and lower jaw segmentation was performed in 40 patients. In total, CBCT data of 20 maxillae and 20 mandibles, ten partially edentulous and ten fully edentulous in each jaw, respectively, were evaluated using two different gray value selection procedures: manual lower threshold selection and automated lower threshold selection. Two-sample t-tests, linear regression models, linear mixed models, and Pearson's correlation coefficients were computed to evaluate the influence of teeth, dental restorations, and threshold selection procedures on gray value distributions. Results: Manual threshold selection resulted in significantly different gray values in the fully and partially edentulous mandible (p = 0.015, difference: 123). In automated threshold selection, only tendencies towards different gray values in fully edentulous compared to partially edentulous jaws were observed (difference: 58–75). Significantly different gray values were observed between the threshold selection approaches, independent of the dental situation of the analyzed jaw.
No significant correlation between the number of teeth and gray values was found, but a trend towards higher gray values in patients with more teeth was noted. Conclusions: Standard gray values derived from CT imaging do not apply to threshold-based bone segmentation in CBCT. Teeth influence gray values and segmentation results. Inaccurate bone segmentation may result in ill-fitting surgical guides produced from CBCT data and in misinterpretation of bone density, which is crucial for selecting surgical protocols. Y1 - 2023 U6 - https://doi.org/10.1186/s40729-023-00493-z VL - 9 IS - 27 ER - TY - GEN A1 - Ehlke, Moritz A1 - Heyland, Mark A1 - Märdian, Sven A1 - Duda, Georg A1 - Zachow, Stefan T1 - 3D Assessment of Osteosynthesis based on 2D Radiographs N2 - We present a novel method to derive the surface distance of an osteosynthesis plate w.r.t. the patient-specific surface of the distal femur based on postoperative 2D radiographs. In a first step, the implant geometry is used as a calibration object to relate the implant and the individual X-ray images spatially in a virtual X-ray setup. Second, the patient-specific femoral shape and pose are reconstructed by fitting a deformable statistical shape and intensity model (SSIM) to the X-rays. The relative positioning between femur and implant is then assessed in terms of the displacement between the reconstructed 3D shape of the femur and the plate. We believe that the approach presented in this paper constitutes a meaningful tool to elucidate the effect of implant positioning on fracture healing and, ultimately, to derive load recommendations after surgery. T3 - ZIB-Report - 15-47 KW - 3d-reconstruction from 2d X-rays KW - statistical shape and intensity models KW - osteosynthesis follow-up Y1 - 2015 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-56217 SN - 1438-0064 ER - TY - CHAP A1 - Krämer, Martin A1 - Herrmann, Karl-Heinz A1 - Boeth, Heide A1 - Tycowicz, Christoph von A1 - König, Christian A1 - Zachow, Stefan A1 - Ehrig, Rainald A1 - Hege, Hans-Christian A1 - Duda, Georg A1 - Reichenbach, Jürgen T1 - Measuring 3D knee dynamics using center out radial ultra-short echo time trajectories with a low cost experimental setup T2 - ISMRM (International Society for Magnetic Resonance in Medicine), 23rd Annual Meeting 2015, Toronto, Canada Y1 - 2015 ER - TY - CHAP A1 - Ehlke, Moritz A1 - Heyland, Mark A1 - Märdian, Sven A1 - Duda, Georg A1 - Zachow, Stefan T1 - Assessing the relative positioning of an osteosynthesis plate to the patient-specific femoral shape from plain 2D radiographs T2 - Proceedings of the 15th Annual Meeting of CAOS-International (CAOS) N2 - We present a novel method to derive the surface distance of an osteosynthesis plate w.r.t. the patient-specific surface of the distal femur based on 2D X-ray images. Our goal is to study from clinical data how the plate-to-bone distance affects bone healing. The patient-specific 3D shape of the femur is, however, seldom recorded for cases of femoral osteosynthesis since this typically requires Computed Tomography (CT), which comes at high cost and radiation dose. Our method instead utilizes two postoperative X-ray images to derive the femoral shape and thus can be applied to radiographs that are taken in clinical routine for follow-up. First, the implant geometry is used as a calibration object to relate the implant and the individual X-ray images spatially in a virtual X-ray setup.
In a second step, the patient-specific femoral shape and pose are reconstructed in the virtual setup by fitting a deformable statistical shape and intensity model (SSIM) to the images. The relative positioning between femur and implant is then assessed in terms of displacement between the reconstructed 3D shape of the femur and the plate. A preliminary evaluation based on 4 cadaver datasets shows that the method derives the plate-to-bone distance with a mean absolute error of less than 1 mm and a maximum error of 4.7 mm compared to ground truth from CT. We believe that the approach presented in this paper constitutes a meaningful tool to elucidate the effect of implant positioning on fracture healing. KW - 3d-reconstruction from 2d X-rays KW - statistical shape and intensity models KW - fracture fixation of the distal femur KW - pose estimation Y1 - 2015 ER - TY - CHAP A1 - Ehlke, Moritz A1 - Heyland, Mark A1 - Märdian, Sven A1 - Duda, Georg A1 - Zachow, Stefan T1 - 3D Assessment of Osteosynthesis based on 2D Radiographs T2 - Proceedings of the Jahrestagung der Deutschen Gesellschaft für Computer- und Roboterassistierte Chirurgie (CURAC) N2 - We present a novel method to derive the surface distance of an osteosynthesis plate w.r.t. the patient-specific surface of the distal femur based on postoperative 2D radiographs. In a first step, the implant geometry is used as a calibration object to relate the implant and the individual X-ray images spatially in a virtual X-ray setup. Second, the patient-specific femoral shape and pose are reconstructed by fitting a deformable statistical shape and intensity model (SSIM) to the X-rays. The relative positioning between femur and implant is then assessed in terms of the displacement between the reconstructed 3D shape of the femur and the plate. We believe that the approach presented in this paper constitutes a meaningful tool to elucidate the effect of implant positioning on fracture healing and, ultimately, to derive load recommendations after surgery. KW - 3d-reconstruction from 2d X-rays KW - osteosynthesis follow-up KW - statistical shape and intensity models Y1 - 2015 SP - 317 EP - 321 ER - TY - CHAP A1 - Krämer, Martin A1 - Maggioni, Marta A1 - Tycowicz, Christoph von A1 - Brisson, Nick A1 - Zachow, Stefan A1 - Duda, Georg A1 - Reichenbach, Jürgen T1 - Ultra-short echo-time (UTE) imaging of the knee with curved surface reconstruction-based extraction of the patellar tendon T2 - ISMRM (International Society for Magnetic Resonance in Medicine), 26th Annual Meeting 2018, Paris, France N2 - Due to very short T2 relaxation times, imaging of tendons is typically performed using ultra-short echo-time (UTE) acquisition techniques. In this work, we combined an echo-train shifted multi-echo 3D UTE imaging sequence with a 3D curved surface reconstruction to virtually extract the patellar tendon from an acquired 3D UTE dataset. Based on the analysis of the acquired multi-echo data, a T2* relaxation time parameter map was calculated and interpolated to the curved surface of the patellar tendon. Y1 - 2018 ER - TY - CHAP A1 - Siqueira Rodrigues, Lucas A1 - Nyakatura, John A1 - Zachow, Stefan A1 - Israel, Johann Habakuk T1 - Design Challenges and Opportunities of Fossil Preparation Tools and Methods T2 - Proceedings of the 20th International Conference on Culture and Computer Science: Code and Materiality N2 - Fossil preparation is the activity of processing paleontological specimens for research and exhibition purposes.
In addition to traditional mechanical extraction of fossils, preparation presently comprises non-destructive digital methods that are part of a relatively new field, namely virtual paleontology. Despite significant technological advances, both traditional and digital preparation remain cumbersome and time-consuming endeavors, yet the field has received scarce attention from a human-computer interaction perspective. The present study aims to elucidate the state of the art in paleontological fossil preparation in order to determine its main challenges and start a conversation regarding opportunities for creating novel designs that tackle the field's current issues. We conducted a qualitative study involving both technical preparators and virtual paleontologists. The study was divided into two parts: First, we assembled technical preparators and paleontology researchers in a focus group session to discuss their workflows, obtain a preliminary understanding of their issues, and ideate solutions based on their counterparts' workflows. Next, we conducted a series of contextual inquiries involving direct observation and semi-structured in-depth interviews. We transcribed our recordings and examined the data through theoretical and inductive thematic analysis, clustering emerging themes and applying concepts from human-computer interaction and related fields. Our findings report on challenges faced by traditional and digital fossil preparators and on potential opportunities to improve their tools and workflows. We contribute a novel analysis of fossil preparation from an HCI perspective. Y1 - 2023 U6 - https://doi.org/10.1145/3623462.3623470 PB - Association for Computing Machinery CY - New York, NY, USA ER - TY - JOUR A1 - Amiranashvili, Tamaz A1 - Lüdke, David A1 - Li, Hongwei Bran A1 - Zachow, Stefan A1 - Menze, Bjoern T1 - Learning continuous shape priors from sparse data with neural implicit functions JF - Medical Image Analysis N2 - Statistical shape models are an essential tool for various tasks in medical image analysis, including shape generation, reconstruction and classification. Shape models are learned from a population of example shapes, which are typically obtained through segmentation of volumetric medical images. In clinical practice, highly anisotropic volumetric scans with large slice distances are prevalent, e.g., to reduce radiation exposure in CT or image acquisition time in MR imaging. For existing shape modeling approaches, the resolution of the emerging model is limited to the resolution of the training shapes. Therefore, any missing information between slices prohibits existing methods from learning a high-resolution shape prior. We propose a novel shape modeling approach that can be trained on sparse, binary segmentation masks with large slice distances. This is achieved by employing continuous shape representations based on neural implicit functions. After training, our model can reconstruct shapes from various sparse inputs at high target resolutions beyond the resolution of individual training examples. We successfully reconstruct high-resolution shapes from as few as three orthogonal slices. Furthermore, our shape model allows us to embed various sparse segmentation masks into a common, low-dimensional latent space — independent of the acquisition direction, resolution, spacing, and field of view. We show that the emerging latent representation discriminates between healthy and pathological shapes, even when provided with sparse segmentation masks.
Lastly, we qualitatively demonstrate that the emerging latent space is smooth and captures characteristic modes of shape variation. We evaluate our shape model on two anatomical structures: the lumbar vertebra and the distal femur, both from publicly available datasets. Y1 - 2024 U6 - https://doi.org/10.1016/j.media.2024.103099 VL - 94 SP - 103099 ER -