TY - GEN A1 - Tack, Alexander A1 - Shestakov, Alexey A1 - Lüdke, David A1 - Zachow, Stefan T1 - A deep multi-task learning method for detection of meniscal tears in MRI data from the Osteoarthritis Initiative database N2 - We present a novel and computationally efficient method for the detection of meniscal tears in Magnetic Resonance Imaging (MRI) data. Our method is based on a Convolutional Neural Network (CNN) that operates on a complete 3D MRI scan. Our approach detects the presence of meniscal tears in three anatomical sub-regions (anterior horn, meniscal body, posterior horn) for both the Medial Meniscus (MM) and the Lateral Meniscus (LM) individually. For optimal performance of our method, we investigate how to preprocess the MRI data or how to train the CNN such that only relevant information within a Region of Interest (RoI) of the data volume is taken into account for meniscal tear detection. We propose meniscal tear detection combined with a bounding box regressor in a multi-task deep learning framework to let the CNN implicitly consider the corresponding RoIs of the menisci. We evaluate the accuracy of our CNN-based meniscal tear detection approach on 2,399 Double Echo Steady-State (DESS) MRI scans from the Osteoarthritis Initiative database. In addition, to show that our method is capable of generalizing to other MRI sequences, we also adapt our model to Intermediate-Weighted Turbo Spin-Echo (IW TSE) MRI scans. To judge the quality of our approaches, Receiver Operating Characteristic (ROC) curves and Area Under the Curve (AUC) values are evaluated for both MRI sequences. For the detection of tears in DESS MRI, our method reaches AUC values of 0.94, 0.93, 0.93 (anterior horn, body, posterior horn) in MM and 0.96, 0.94, 0.91 in LM. For the detection of tears in IW TSE MRI data, our method yields AUC values of 0.84, 0.88, 0.86 in MM and 0.95, 0.91, 0.90 in LM. In conclusion, the presented method achieves high accuracy for detecting meniscal tears in both DESS and IW TSE MRI data. Furthermore, our method can be easily trained and applied to other MRI sequences. T3 - ZIB-Report - 21-33 Y1 - 2021 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-84415 SN - 1438-0064 ER - TY - JOUR A1 - Tack, Alexander A1 - Shestakov, Alexey A1 - Lüdke, David A1 - Zachow, Stefan T1 - A deep multi-task learning method for detection of meniscal tears in MRI data from the Osteoarthritis Initiative database JF - Frontiers in Bioengineering and Biotechnology, section Biomechanics N2 - We present a novel and computationally efficient method for the detection of meniscal tears in Magnetic Resonance Imaging (MRI) data. Our method is based on a Convolutional Neural Network (CNN) that operates on a complete 3D MRI scan. Our approach detects the presence of meniscal tears in three anatomical sub-regions (anterior horn, meniscal body, posterior horn) for both the Medial Meniscus (MM) and the Lateral Meniscus (LM) individually. For optimal performance of our method, we investigate how to preprocess the MRI data or how to train the CNN such that only relevant information within a Region of Interest (RoI) of the data volume is taken into account for meniscal tear detection. We propose meniscal tear detection combined with a bounding box regressor in a multi-task deep learning framework to let the CNN implicitly consider the corresponding RoIs of the menisci. 
We evaluate the accuracy of our CNN-based meniscal tear detection approach on 2,399 Double Echo Steady-State (DESS) MRI scans from the Osteoarthritis Initiative database. In addition, to show that our method is capable of generalizing to other MRI sequences, we also adapt our model to Intermediate-Weighted Turbo Spin-Echo (IW TSE) MRI scans. To judge the quality of our approaches, Receiver Operating Characteristic (ROC) curves and Area Under the Curve (AUC) values are evaluated for both MRI sequences. For the detection of tears in DESS MRI, our method reaches AUC values of 0.94, 0.93, 0.93 (anterior horn, body, posterior horn) in MM and 0.96, 0.94, 0.91 in LM. For the detection of tears in IW TSE MRI data, our method yields AUC values of 0.84, 0.88, 0.86 in MM and 0.95, 0.91, 0.90 in LM. In conclusion, the presented method achieves high accuracy for detecting meniscal tears in both DESS and IW TSE MRI data. Furthermore, our method can be easily trained and applied to other MRI sequences. Y1 - 2021 U6 - https://doi.org/10.3389/fbioe.2021.747217 SP - 28 EP - 41 ER - TY - CHAP A1 - Lüdke, David A1 - Amiranashvili, Tamaz A1 - Ambellan, Felix A1 - Ezhov, Ivan A1 - Menze, Bjoern A1 - Zachow, Stefan T1 - Landmark-free Statistical Shape Modeling via Neural Flow Deformations T2 - Medical Image Computing and Computer Assisted Intervention - MICCAI 2022 N2 - Statistical shape modeling aims at capturing shape variations of an anatomical structure that occur within a given population. Shape models are employed in many tasks, such as shape reconstruction and image segmentation, but also shape generation and classification. Existing shape priors either require dense correspondence between training examples or lack robustness and topological guarantees. We present FlowSSM, a novel shape modeling approach that learns shape variability without requiring dense correspondence between training instances. It relies on a hierarchy of continuous deformation flows, which are parametrized by a neural network. Our model outperforms state-of-the-art methods in providing an expressive and robust shape prior for distal femur and liver. We show that the emerging latent representation is discriminative by separating healthy from pathological shapes. Ultimately, we demonstrate its effectiveness on two shape reconstruction tasks from partial data. Our source code is publicly available (https://github.com/davecasp/flowssm). Y1 - 2022 U6 - https://doi.org/10.1007/978-3-031-16434-7_44 VL - 13432 PB - Springer, Cham ER - TY - JOUR A1 - Tack, Alexander A1 - Ambellan, Felix A1 - Zachow, Stefan T1 - Towards novel osteoarthritis biomarkers: Multi-criteria evaluation of 46,996 segmented knee MRI data from the Osteoarthritis Initiative JF - PLOS One N2 - Convolutional neural networks (CNNs) are the state-of-the-art for automated assessment of knee osteoarthritis (KOA) from medical image data. However, these methods lack interpretability, mainly focus on image texture, and cannot completely grasp the analyzed anatomies’ shapes. In this study we assess the informative value of quantitative features derived from segmentations in order to assess their potential as an alternative or extension to CNN-based approaches regarding multiple aspects of KOA. Six anatomical structures around the knee (femoral and tibial bones, femoral and tibial cartilages, and both menisci) are segmented in 46,996 MRI scans. 
Based on these segmentations, quantitative features are computed, i.e., measurements such as cartilage volume, meniscal extrusion and tibial coverage, as well as geometric features based on a statistical shape encoding of the anatomies. The feature quality is assessed by investigating their association to the Kellgren-Lawrence grade (KLG), joint space narrowing (JSN), incident KOA, and total knee replacement (TKR). Using gold standard labels from the Osteoarthritis Initiative database, the balanced accuracy (BA), the area under the Receiver Operating Characteristic curve (AUC), and weighted kappa statistics are evaluated. Features based on shape encodings of femur, tibia, and menisci plus the performed measurements showed most potential as KOA biomarkers. Differentiation between non-arthritic and severely arthritic knees yielded BAs of up to 99%, while 84% were achieved for diagnosis of early KOA. Weighted kappa values of 0.73, 0.72, and 0.78 were achieved for classification of the grade of medial JSN, lateral JSN, and KLG, respectively. The AUC was 0.61 and 0.76 for prediction of incident KOA and TKR within one year, respectively. Quantitative features from automated segmentations provide novel biomarkers for KLG and JSN classification and show potential for incident KOA and TKR prediction. The validity of these features should be further evaluated, especially as extensions of CNN-based approaches. To foster such developments, we make all segmentations publicly available together with this publication. Y1 - 2021 U6 - https://doi.org/10.1371/journal.pone.0258855 VL - 16 IS - 10 ER - TY - CHAP A1 - Siqueira Rodrigues, Lucas A1 - Nyakatura, John A1 - Zachow, Stefan A1 - Israel, Johann Habakuk T1 - An Immersive Virtual Paleontology Application T2 - 13th International Conference on Human Haptic Sensing and Touch Enabled Computer Applications, EuroHaptics 2022 N2 - Virtual paleontology studies digital fossils through data analysis and visualization systems. The discipline is growing in relevance for the evident advantages of non-destructive imaging techniques over traditional paleontological methods, and it has made significant advancements during the last few decades. However, virtual paleontology still faces a number of technological challenges, amongst which are interaction shortcomings of image segmentation applications. Whereas automated segmentation methods are seldom applicable to fossil datasets, manual exploration of these specimens is extremely time-consuming as it impractically delves into three-dimensional data through two-dimensional visualization and interaction means. This paper presents an application that employs virtual reality and haptics to virtual paleontology in order to evolve its interaction paradigms and address some of its limitations. We provide a brief overview of the challenges faced by virtual paleontology practitioners, a description of our immersive virtual paleontology prototype, and the results of a heuristic evaluation of our design. Y1 - 2022 U6 - https://doi.org/10.1007/978-3-031-06249-0 SP - 478 EP - 481 ER - TY - JOUR A1 - Sekuboyina, Anjany A1 - Husseini, Malek E.
A1 - Bayat, Amirhossein A1 - Löffler, Maximilian A1 - Liebl, Hans A1 - Li, Hongwei A1 - Tetteh, Giles A1 - Kukačka, Jan A1 - Payer, Christian A1 - Štern, Darko A1 - Urschler, Martin A1 - Chen, Maodong A1 - Cheng, Dalong A1 - Lessmann, Nikolas A1 - Hu, Yujin A1 - Wang, Tianfu A1 - Yang, Dong A1 - Xu, Daguang A1 - Ambellan, Felix A1 - Amiranashvili, Tamaz A1 - Ehlke, Moritz A1 - Lamecker, Hans A1 - Lehnert, Sebastian A1 - Lirio, Marilia A1 - de Olaguer, Nicolás Pérez A1 - Ramm, Heiko A1 - Sahu, Manish A1 - Tack, Alexander A1 - Zachow, Stefan A1 - Jiang, Tao A1 - Ma, Xinjun A1 - Angerman, Christoph A1 - Wang, Xin A1 - Brown, Kevin A1 - Kirszenberg, Alexandre A1 - Puybareau, Élodie A1 - Chen, Di A1 - Bai, Yiwei A1 - Rapazzo, Brandon H. A1 - Yeah, Timyoas A1 - Zhang, Amber A1 - Xu, Shangliang A1 - Hou, Feng A1 - He, Zhiqiang A1 - Zeng, Chan A1 - Xiangshang, Zheng A1 - Liming, Xu A1 - Netherton, Tucker J. A1 - Mumme, Raymond P. A1 - Court, Laurence E. A1 - Huang, Zixun A1 - He, Chenhang A1 - Wang, Li-Wen A1 - Ling, Sai Ho A1 - Huynh, Lê Duy A1 - Boutry, Nicolas A1 - Jakubicek, Roman A1 - Chmelik, Jiri A1 - Mulay, Supriti A1 - Sivaprakasam, Mohanasankar A1 - Paetzold, Johannes C. A1 - Shit, Suprosanna A1 - Ezhov, Ivan A1 - Wiestler, Benedikt A1 - Glocker, Ben A1 - Valentinitsch, Alexander A1 - Rempfler, Markus A1 - Menze, Björn H. A1 - Kirschke, Jan S. T1 - VerSe: A Vertebrae labelling and segmentation benchmark for multi-detector CT images JF - Medical Image Analysis N2 - Vertebral labelling and segmentation are two fundamental tasks in an automated spine processing pipeline. Reliable and accurate processing of spine images is expected to benefit clinical decision support systems for diagnosis, surgery planning, and population-based analysis of spine and bone health. However, designing automated algorithms for spine processing is challenging predominantly due to considerable variations in anatomy and acquisition protocols and due to a severe shortage of publicly available data. Addressing these limitations, the Large Scale Vertebrae Segmentation Challenge (VerSe) was organised in conjunction with the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) in 2019 and 2020, with a call for algorithms tackling the labelling and segmentation of vertebrae. Two datasets containing a total of 374 multi-detector CT scans from 355 patients were prepared and 4505 vertebrae have individually been annotated at voxel level by a human-machine hybrid algorithm (https://osf.io/nqjyw/, https://osf.io/t98fz/). A total of 25 algorithms were benchmarked on these datasets. In this work, we present the results of this evaluation and further investigate the performance variation at the vertebra level, scan level, and different fields of view. We also evaluate the generalisability of the approaches to an implicit domain shift in data by evaluating the top-performing algorithms of one challenge iteration on data from the other iteration. The principal takeaway from VerSe: the performance of an algorithm in labelling and segmenting a spine scan hinges on its ability to correctly identify vertebrae in cases of rare anatomical variations. The VerSe content and code can be accessed at: https://github.com/anjany/verse. 
Y1 - 2021 U6 - https://doi.org/10.1016/j.media.2021.102166 VL - 73 ER - TY - JOUR A1 - Glatzeder, Korbinian A1 - Komnik, Igor A1 - Ambellan, Felix A1 - Zachow, Stefan A1 - Potthast, Wolfgang T1 - Dynamic pressure analysis of novel interpositional knee spacer implants in 3D-printed human knee models JF - Scientific Reports N2 - Alternative treatment methods for knee osteoarthritis (OA) are in demand to delay the young (< 50 years) patient’s need for osteotomy or knee replacement. Novel interpositional knee spacers, shaped based on a statistical shape model (SSM) approach and made of polyurethane (PU), were developed to provide a minimally invasive method to treat medial OA in the knee. The implant is supposed to reduce peak strains and pain, restore the stability of the knee, correct the malalignment of a varus knee and improve joint function and gait. Firstly, the spacers were tested in artificial knee models. It is assumed that by application of a spacer, a significant reduction in stress values and a significant increase in the contact area in the medial compartment of the knee will be registered. The effect of the novel interpositional knee spacer implants on pressure distribution was analyzed in 3D-printed knee model replicas: the primary purpose was to examine medial joint contact stress-related biomechanics. A secondary purpose was a better understanding of the medial/lateral redistribution of joint loading. Six 3D printed knee models were reproduced from cadaveric leg computed tomography. Each of four spacer implants was tested in each knee geometry under realistic arthrokinematic dynamic loading conditions, to examine the pressure distribution in the knee joint. All spacers reduced mean stress values by 84–88% and peak stress values by 524–704% in the medial knee joint compartment compared to the non-spacer test condition. The contact area was enlarged by 462–627% as a result of the inserted spacers. Given the appreciable contact stress reduction and enlargement of the contact area in the medial knee joint compartment, the premises are in place for testing the implants directly on human knee cadavers to gain further insights into a possible tool for treating medial knee osteoarthritis. Y1 - 2022 U6 - https://doi.org/10.1038/s41598-022-20463-6 VL - 12 ER - TY - JOUR A1 - Sekuboyina, Anjany A1 - Bayat, Amirhossein A1 - Husseini, Malek E. A1 - Löffler, Maximilian A1 - Li, Hongwei A1 - Tetteh, Giles A1 - Kukačka, Jan A1 - Payer, Christian A1 - Štern, Darko A1 - Urschler, Martin A1 - Chen, Maodong A1 - Cheng, Dalong A1 - Lessmann, Nikolas A1 - Hu, Yujin A1 - Wang, Tianfu A1 - Yang, Dong A1 - Xu, Daguang A1 - Ambellan, Felix A1 - Amiranashvili, Tamaz A1 - Ehlke, Moritz A1 - Lamecker, Hans A1 - Lehnert, Sebastian A1 - Lirio, Marilia A1 - de Olaguer, Nicolás Pérez A1 - Ramm, Heiko A1 - Sahu, Manish A1 - Tack, Alexander A1 - Zachow, Stefan A1 - Jiang, Tao A1 - Ma, Xinjun A1 - Angerman, Christoph A1 - Wang, Xin A1 - Wei, Qingyue A1 - Brown, Kevin A1 - Wolf, Matthias A1 - Kirszenberg, Alexandre A1 - Puybareau, Élodie A1 - Valentinitsch, Alexander A1 - Rempfler, Markus A1 - Menze, Björn H. A1 - Kirschke, Jan S.
T1 - VerSe: A Vertebrae Labelling and Segmentation Benchmark for Multi-detector CT Images JF - arXiv Y1 - 2020 ER - TY - CHAP A1 - Amiranashvili, Tamaz A1 - Lüdke, David A1 - Li, Hongwei A1 - Menze, Bjoern A1 - Zachow, Stefan T1 - Learning Shape Reconstruction from Sparse Measurements with Neural Implicit Functions T2 - Medical Imaging with Deep Learning N2 - Reconstructing anatomical shapes from sparse or partial measurements relies on prior knowledge of shape variations that occur within a given population. Such shape priors are learned from example shapes, obtained by segmenting volumetric medical images. For existing models, the resolution of a learned shape prior is limited to the resolution of the training data. However, in clinical practice, volumetric images are often acquired with highly anisotropic voxel sizes, e.g. to reduce image acquisition time in MRI or radiation exposure in CT imaging. The missing shape information between the slices prohibits existing methods to learn a high-resolution shape prior. We introduce a method for high-resolution shape reconstruction from sparse measurements without relying on high-resolution ground truth for training. Our method is based on neural implicit shape representations and learns a continuous shape prior only from highly anisotropic segmentations. Furthermore, it is able to learn from shapes with a varying field of view and can reconstruct from various sparse input configurations. We demonstrate its effectiveness on two anatomical structures: vertebra and femur, and successfully reconstruct high-resolution shapes from sparse segmentations, using as few as three orthogonal slices. Y1 - 2022 ER - TY - JOUR A1 - Wilson, David A1 - Anglin, Carolyn A1 - Ambellan, Felix A1 - Grewe, Carl Martin A1 - Tack, Alexander A1 - Lamecker, Hans A1 - Dunbar, Michael A1 - Zachow, Stefan T1 - Validation of three-dimensional models of the distal femur created from surgical navigation point cloud data for intraoperative and postoperative analysis of total knee arthroplasty JF - International Journal of Computer Assisted Radiology and Surgery N2 - Purpose: Despite the success of total knee arthroplasty there continues to be a significant proportion of patients who are dissatisfied. One explanation may be a shape mismatch between pre and post-operative distal femurs. The purpose of this study was to investigate a method to match a statistical shape model (SSM) to intra-operatively acquired point cloud data from a surgical navigation system, and to validate it against the pre-operative magnetic resonance imaging (MRI) data from the same patients. Methods: A total of 10 patients who underwent navigated total knee arthroplasty also had an MRI scan less than 2 months pre-operatively. The standard surgical protocol was followed which included partial digitization of the distal femur. Two different methods were employed to fit the SSM to the digitized point cloud data, based on (1) Iterative Closest Points (ICP) and (2) Gaussian Mixture Models (GMM). The available MRI data were manually segmented and the reconstructed three-dimensional surfaces used as ground truth against which the statistical shape model fit was compared. Results: For both approaches, the difference between the statistical shape model-generated femur and the surface generated from MRI segmentation averaged less than 1.7 mm, with maximum errors occurring in less clinically important areas. 
Conclusion: The results demonstrated good correspondence with the distal femoral morphology even in cases of sparse data sets. Application of this technique will allow for measurement of mismatch between pre and post-operative femurs retrospectively on any case done using the surgical navigation system and could be integrated into the surgical navigation unit to provide real-time feedback. Y1 - 2017 UR - https://link.springer.com/content/pdf/10.1007%2Fs11548-017-1630-5.pdf U6 - https://doi.org/10.1007/s11548-017-1630-5 VL - 12 IS - 12 SP - 2097 EP - 2105 PB - Springer ER - TY - GEN A1 - Grewe, Carl Martin A1 - Zachow, Stefan ED - Doll, Nikola ED - Bredekamp, Horst ED - Schäffner, Wolfgang T1 - Face to Face-Interface T2 - +ultra. Knowledge & Gestaltung Y1 - 2017 SP - 320 EP - 321 PB - Seemann Henschel ER - TY - CHAP A1 - Grewe, Carl Martin A1 - Zachow, Stefan T1 - Fully Automated and Highly Accurate Dense Correspondence for Facial Surfaces T2 - Computer Vision – ECCV 2016 Workshops N2 - We present a novel framework for fully automated and highly accurate determination of facial landmarks and dense correspondence, e.g. a topologically identical mesh of arbitrary resolution, across the entire surface of 3D face models. For robustness and reliability of the proposed approach, we combine 2D landmark detectors and 3D statistical shape priors with a variational matching method. Instead of matching faces in the spatial domain only, we employ image registration to align the 2D parametrization of the facial surface to a planar template we call the Unified Facial Parameter Domain (ufpd). This allows us to simultaneously match salient photometric and geometric facial features using robust image similarity measures while reasonably constraining geometric distortion in regions with less significant features. We demonstrate the accuracy of the dense correspondence established by our framework on the BU3DFE database with 2500 facial surfaces and show that our framework outperforms current state-of-the-art methods with respect to the fully automated location of facial landmarks.
Y1 - 2016 U6 - https://doi.org/10.1007/978-3-319-48881-3_38 VL - 9914 SP - 552 EP - 568 PB - Springer International Publishing ER - TY - GEN A1 - Wilson, David A1 - Bücher, Pia A1 - Grewe, Carl Martin A1 - Anglin, Carolyn A1 - Zachow, Stefan A1 - Dunbar, Michael T1 - Validation of Three Dimensional Models of the Distal Femur Created from Surgical Navigation Point Cloud Data T2 - 15th Annual Meeting of the International Society for Computer Assisted Orthopaedic Surgery (CAOS) Y1 - 2015 ER - TY - JOUR A1 - Grewe, Carl Martin A1 - Schreiber, Lisa A1 - Zachow, Stefan T1 - Fast and Accurate Digital Morphometry of Facial Expressions JF - Facial Plastic Surgery Y1 - 2015 U6 - https://doi.org/10.1055/s-0035-1564720 VL - 31 IS - 05 SP - 431 EP - 438 PB - Thieme Medical Publishers CY - New York ER - TY - GEN A1 - Grewe, Carl Martin A1 - Lamecker, Hans A1 - Zachow, Stefan ED - Hermanussen, Michael T1 - Landmark-based Statistical Shape Analysis T2 - Auxology - Studying Human Growth and Development Y1 - 2013 UR - http://www.schweizerbart.de/publications/detail/isbn/9783510652785 SP - 199 EP - 201 PB - Schweizerbart Verlag, Stuttgart ER - TY - GEN A1 - Wilson, David A1 - Bücher, Pia A1 - Grewe, Carl Martin A1 - Mocanu, Valentin A1 - Anglin, Carolyn A1 - Zachow, Stefan A1 - Dunbar, Michael T1 - Validation of Three Dimensional Models of the Distal Femur Created from Surgical Navigation Data T2 - Orthopedic Research Society Annual Meeting Y1 - 2015 CY - Las Vegas, Nevada ER - TY - GEN A1 - Grewe, Carl Martin A1 - Lamecker, Hans A1 - Zachow, Stefan T1 - Digital morphometry: The Potential of Statistical Shape Models T2 - Anthropologischer Anzeiger. Journal of Biological and Clinical Anthropology Y1 - 2011 SP - 506 EP - 506 ER - TY - GEN A1 - Ehlke, Moritz A1 - Heyland, Mark A1 - Märdian, Sven A1 - Duda, Georg A1 - Zachow, Stefan T1 - Assessing the Relative Positioning of an Osteosynthesis Plate to the Patient-Specific Femoral Shape from Plain 2D Radiographs N2 - We present a novel method to derive the surface distance of an osteosynthesis plate w.r.t. the patient-specific surface of the distal femur based on 2D X-ray images. Our goal is to study from clinical data how the plate-to-bone distance affects bone healing. The patient-specific 3D shape of the femur is, however, seldom recorded for cases of femoral osteosynthesis since this typically requires Computed Tomography (CT), which comes at high cost and radiation dose. Our method instead utilizes two postoperative X-ray images to derive the femoral shape and thus can be applied on radiographs that are taken in clinical routine for follow-up. First, the implant geometry is used as a calibration object to relate the implant and the individual X-ray images spatially in a virtual X-ray setup. In a second step, the patient-specific femoral shape and pose are reconstructed in the virtual setup by fitting a deformable statistical shape and intensity model (SSIM) to the images. The relative positioning between femur and implant is then assessed in terms of displacement between the reconstructed 3D shape of the femur and the plate. A preliminary evaluation based on 4 cadaver datasets shows that the method derives the plate-to-bone distance with a mean absolute error of less than 1 mm and a maximum error of 4.7 mm compared to ground truth from CT. We believe that the approach presented in this paper constitutes a meaningful tool to elucidate the effect of implant positioning on fracture healing.
T3 - ZIB-Report - 15-21 KW - 3d-reconstruction from 2d X-rays KW - statistical shape and intensity models KW - fracture fixation of the distal femur KW - pose estimation Y1 - 2015 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-54268 SN - 1438-0064 ER - TY - JOUR A1 - Taylor, William R. A1 - Pöpplau, Berry M. A1 - König, Christian A1 - Ehrig, Rainald A1 - Zachow, Stefan A1 - Duda, Georg A1 - Heller, Markus O. T1 - The medial-lateral force distribution in the ovine stifle joint during walking JF - Journal of Orthopaedic Research Y1 - 2011 U6 - https://doi.org/10.1002/jor.21254 VL - 29 IS - 4 SP - 567 EP - 571 ER - TY - GEN A1 - Grewe, C. Martin A1 - Zachow, Stefan T1 - Release of the FexMM for the Open Virtual Mirror Framework N2 - THIS MODEL IS FOR NON-COMMERCIAL RESEARCH PURPOSES. ONLY MEMBERS OF UNIVERSITIES OR NON-COMMERCIAL RESEARCH INSTITUTES ARE ELIGIBLE TO APPLY. 1. Download, fill, and sign the form available from: https://media.githubusercontent.com/media/mgrewe/ovmf/main/data/fexmm_license_agreement.pdf 2. Send the signed form to: fexmm@zib.de NOTE: Use an official email address of your institution for the request. Y1 - 2021 U6 - https://doi.org/10.12752/8532 ER - TY - JOUR A1 - Grewe, Carl Martin A1 - Liu, Tuo A1 - Hildebrandt, Andrea A1 - Zachow, Stefan T1 - The Open Virtual Mirror Framework for Enfacement Illusions - Enhancing the Sense of Agency With Avatars That Imitate Facial Expressions JF - Behavior Research Methods Y1 - 2022 U6 - https://doi.org/10.3758/s13428-021-01761-9 PB - Springer ER - TY - JOUR A1 - Grewe, Carl Martin A1 - Liu, Tuo A1 - Kahl, Christoph A1 - Hildebrandt, Andrea A1 - Zachow, Stefan T1 - Statistical Learning of Facial Expressions Improves Realism of Animated Avatar Faces JF - Frontiers in Virtual Reality Y1 - 2021 U6 - https://doi.org/10.3389/frvir.2021.619811 VL - 2 SP - 1 EP - 13 PB - Frontiers ER - TY - GEN A1 - Grewe, Carl Martin A1 - Le Roux, Gabriel A1 - Pilz, Sven-Kristofer A1 - Zachow, Stefan T1 - Spotting the Details: The Various Facets of Facial Expressions N2 - 3D Morphable Models (MM) are a popular tool for analysis and synthesis of facial expressions. They represent plausible variations in facial shape and appearance within a low-dimensional parameter space. Fitted to a face scan, the model's parameters compactly encode its expression patterns. This expression code can be used, for instance, as a feature in automatic facial expression recognition. For accurate classification, an MM that can adequately represent the various characteristic facets and variants of each expression is necessary. Currently available MMs are limited in the diversity of expression patterns. We present a novel high-quality Facial Expression Morphable Model built from a large-scale face database as a tool for expression analysis and synthesis. Establishment of accurate dense correspondence, up to finest skin features, enables a detailed statistical analysis of facial expressions. Various characteristic shape patterns are identified for each expression. The results of our analysis give rise to a new facial expression code. We demonstrate the advantages of such a code for the automatic recognition of expressions, and compare the accuracy of our classifier to state-of-the-art.
T3 - ZIB-Report - 18-06 Y1 - 2018 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-67696 SN - 1438-0064 ER - TY - CHAP A1 - Grewe, Carl Martin A1 - le Roux, Gabriel A1 - Pilz, Sven-Kristofer A1 - Zachow, Stefan T1 - Spotting the Details: The Various Facets of Facial Expressions T2 - IEEE International Conference on Automatic Face and Gesture Recognition Y1 - 2018 U6 - https://doi.org/10.1109/FG.2018.00049 SP - 286 EP - 293 ER - TY - CHAP A1 - Siqueira Rodrigues, Lucas A1 - Riehm, Felix A1 - Zachow, Stefan A1 - Israel, Johann Habakuk T1 - VoxSculpt: An Open-Source Voxel Library for Tomographic Volume Sculpting in Virtual Reality T2 - 2023 9th International Conference on Virtual Reality (ICVR), Xianyang, China, 2023 N2 - Manual processing of tomographic data volumes, such as interactive image segmentation in medicine or paleontology, is considered a time-consuming and cumbersome endeavor. Immersive volume sculpting stands as a potential solution to improve its efficiency and intuitiveness. However, current open-source software solutions do not yield the required performance and functionalities. We address this issue by contributing a novel open-source game engine voxel library that supports real-time immersive volume sculpting. Our design leverages GPU instancing, parallel computing, and a chunk-based data structure to optimize collision detection and rendering. We have implemented features that enable fast voxel interaction and improve precision. Our benchmark evaluation indicates that our implementation offers a significant improvement over the state-of-the-art and can render and modify millions of visible voxels while maintaining stable performance for real-time interaction in virtual reality. Y1 - 2023 U6 - https://doi.org/10.1109/ICVR57957.2023.10169420 SP - 515 EP - 523 ER - TY - JOUR A1 - Wagendorf, Oliver A1 - Nahles, Susanne A1 - Vach, Kirstin A1 - Kernen, Florian A1 - Zachow, Stefan A1 - Heiland, Max A1 - Flügge, Tabea T1 - The impact of teeth and dental restorations on gray value distribution in cone-beam computed tomography - a pilot study JF - International Journal of Implant Dentistry N2 - Purpose: To investigate the influence of teeth and dental restorations on the facial skeleton's gray value distributions in cone-beam computed tomography (CBCT). Methods: Gray value selection for the upper and lower jaw segmentation was performed in 40 patients. In total, CBCT data of 20 maxillae and 20 mandibles, ten partially edentulous and ten fully edentulous in each jaw, respectively, were evaluated using two different gray value selection procedures: manual lower threshold selection and automated lower threshold selection. Two-sample t tests, linear regression models, linear mixed models, and Pearson's correlation coefficients were computed to evaluate the influence of teeth, dental restorations, and threshold selection procedures on gray value distributions. Results: Manual threshold selection resulted in significantly different gray values in the fully and partially edentulous mandible (p = 0.015, difference: 123). In automated threshold selection, only tendencies to different gray values in fully edentulous compared to partially edentulous jaws were observed (difference: 58–75). Significantly different gray values were evaluated for threshold selection approaches, independent of the dental situation of the analyzed jaw.
No significant correlation between the number of teeth and gray values was assessed, but a trend towards higher gray values in patients with more teeth was noted. Conclusions: Standard gray values derived from CT imaging do not apply for threshold-based bone segmentation in CBCT. Teeth influence gray values and segmentation results. Inaccurate bone segmentation may result in ill-fitting surgical guides produced on CBCT data and misinterpreting bone density, which is crucial for selecting surgical protocols. Y1 - 2023 U6 - https://doi.org/10.1186/s40729-023-00493-z VL - 9 IS - 27 ER - TY - GEN A1 - Ehlke, Moritz A1 - Heyland, Mark A1 - Märdian, Sven A1 - Duda, Georg A1 - Zachow, Stefan T1 - 3D Assessment of Osteosynthesis based on 2D Radiographs N2 - We present a novel method to derive the surface distance of an osteosynthesis plate w.r.t. the patient-specific surface of the distal femur based on postoperative 2D radiographs. In a first step, the implant geometry is used as a calibration object to relate the implant and the individual X-ray images spatially in a virtual X-ray setup. Second, the patient-specific femoral shape and pose are reconstructed by fitting a deformable statistical shape and intensity model (SSIM) to the X-rays. The relative positioning between femur and implant is then assessed in terms of the displacement between the reconstructed 3D shape of the femur and the plate. We believe that the approach presented in this paper constitutes a meaningful tool to elucidate the effect of implant positioning on fracture healing and, ultimately, to derive load recommendations after surgery. T3 - ZIB-Report - 15-47 KW - 3d-reconstruction from 2d X-rays KW - statistical shape and intensity models KW - osteosynthesis follow-up Y1 - 2015 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-56217 SN - 1438-0064 ER - TY - CHAP A1 - Krämer, Martin A1 - Herrmann, Karl-Heinz A1 - Boeth, Heide A1 - Tycowicz, Christoph von A1 - König, Christian A1 - Zachow, Stefan A1 - Ehrig, Rainald A1 - Hege, Hans-Christian A1 - Duda, Georg A1 - Reichenbach, Jürgen T1 - Measuring 3D knee dynamics using center out radial ultra-short echo time trajectories with a low cost experimental setup T2 - ISMRM (International Society for Magnetic Resonance in Medicine), 23rd Annual Meeting 2015, Toronto, Canada Y1 - 2015 ER - TY - CHAP A1 - Ehlke, Moritz A1 - Heyland, Mark A1 - Märdian, Sven A1 - Duda, Georg A1 - Zachow, Stefan T1 - Assessing the relative positioning of an osteosynthesis plate to the patient-specific femoral shape from plain 2D radiographs T2 - Proceedings of the 15th Annual Meeting of CAOS-International (CAOS) N2 - We present a novel method to derive the surface distance of an osteosynthesis plate w.r.t. the patient-specific surface of the distal femur based on 2D X-ray images. Our goal is to study from clinical data how the plate-to-bone distance affects bone healing. The patient-specific 3D shape of the femur is, however, seldom recorded for cases of femoral osteosynthesis since this typically requires Computed Tomography (CT), which comes at high cost and radiation dose. Our method instead utilizes two postoperative X-ray images to derive the femoral shape and thus can be applied on radiographs that are taken in clinical routine for follow-up. First, the implant geometry is used as a calibration object to relate the implant and the individual X-ray images spatially in a virtual X-ray setup.
In a second step, the patient-specific femoral shape and pose are reconstructed in the virtual setup by fitting a deformable statistical shape and intensity model (SSIM) to the images. The relative positioning between femur and implant is then assessed in terms of displacement between the reconstructed 3D shape of the femur and the plate. A preliminary evaluation based on 4 cadaver datasets shows that the method derives the plate-to-bone distance with a mean absolute error of less than 1 mm and a maximum error of 4.7 mm compared to ground truth from CT. We believe that the approach presented in this paper constitutes a meaningful tool to elucidate the effect of implant positioning on fracture healing. KW - 3d-reconstruction from 2d X-rays KW - statistical shape and intensity models KW - fracture fixation of the distal femur KW - pose estimation Y1 - 2015 ER - TY - CHAP A1 - Ehlke, Moritz A1 - Heyland, Mark A1 - Märdian, Sven A1 - Duda, Georg A1 - Zachow, Stefan T1 - 3D Assessment of Osteosynthesis based on 2D Radiographs T2 - Proceedings of the Jahrestagung der Deutschen Gesellschaft für Computer- und Roboterassistierte Chirurgie (CURAC) N2 - We present a novel method to derive the surface distance of an osteosynthesis plate w.r.t. the patient-specific surface of the distal femur based on postoperative 2D radiographs. In a first step, the implant geometry is used as a calibration object to relate the implant and the individual X-ray images spatially in a virtual X-ray setup. Second, the patient-specific femoral shape and pose are reconstructed by fitting a deformable statistical shape and intensity model (SSIM) to the X-rays. The relative positioning between femur and implant is then assessed in terms of the displacement between the reconstructed 3D shape of the femur and the plate. We believe that the approach presented in this paper constitutes a meaningful tool to elucidate the effect of implant positioning on fracture healing and, ultimately, to derive load recommendations after surgery. KW - 3d-reconstruction from 2d X-rays KW - osteosynthesis follow-up KW - statistical shape and intensity models Y1 - 2015 SP - 317 EP - 321 ER - TY - CHAP A1 - Krämer, Martin A1 - Maggioni, Marta A1 - Tycowicz, Christoph von A1 - Brisson, Nick A1 - Zachow, Stefan A1 - Duda, Georg A1 - Reichenbach, Jürgen T1 - Ultra-short echo-time (UTE) imaging of the knee with curved surface reconstruction-based extraction of the patellar tendon T2 - ISMRM (International Society for Magnetic Resonance in Medicine), 26th Annual Meeting 2018, Paris, France N2 - Due to very short T2 relaxation times, imaging of tendons is typically performed using ultra-short echo-time (UTE) acquisition techniques. In this work, we combined an echo-train shifted multi-echo 3D UTE imaging sequence with a 3D curved surface reconstruction to virtually extract the patellar tendon from an acquired 3D UTE dataset. Based on the analysis of the acquired multi-echo data, a T2* relaxation time parameter map was calculated and interpolated to the curved surface of the patellar tendon. Y1 - 2018 ER - TY - CHAP A1 - Siqueira Rodrigues, Lucas A1 - Nyakatura, John A1 - Zachow, Stefan A1 - Israel, Johann Habakuk T1 - Design Challenges and Opportunities of Fossil Preparation Tools and Methods T2 - Proceedings of the 20th International Conference on Culture and Computer Science: Code and Materiality N2 - Fossil preparation is the activity of processing paleontological specimens for research and exhibition purposes.
In addition to traditional mechanical extraction of fossils, preparation presently comprises non-destructive digital methods that are part of a relatively new field, namely virtual paleontology. Despite significant technological advances, both traditional and digital preparation remain cumbersome and time-consuming endeavors. However, this field has received scarce attention from a human-computer interaction perspective. The present study aims to elucidate the state-of-the-art for paleontological fossil preparation in order to determine its main challenges and start a conversation regarding opportunities for creating novel designs that tackle the field's current issues. We conducted a qualitative study involving both technical preparators and virtual paleontologists. The study was divided into two parts: First, we assembled technical preparators and paleontology researchers in a focus group session to discuss their workflows, obtain a preliminary understanding of their issues, and ideate solutions based on their counterparts' workflows. Next, we conducted a series of contextual inquiries involving direct observation and semi-structured in-depth interviews. We transcribed our recordings and examined the data through theoretical and inductive thematic analysis, clustering emerging themes and applying concepts from human-computer interaction and related fields. Our findings report on challenges faced by traditional and digital fossil preparators and potential opportunities to improve their tools and workflows. We contribute with a novel analysis of fossil preparation from an HCI perspective. Y1 - 2023 U6 - https://doi.org/10.1145/3623462.3623470 PB - Association for Computing Machinery CY - New York, NY, USA ER - TY - JOUR A1 - Amiranashvili, Tamaz A1 - Lüdke, David A1 - Li, Hongwei Bran A1 - Zachow, Stefan A1 - Menze, Bjoern T1 - Learning continuous shape priors from sparse data with neural implicit functions JF - Medical Image Analysis N2 - Statistical shape models are an essential tool for various tasks in medical image analysis, including shape generation, reconstruction and classification. Shape models are learned from a population of example shapes, which are typically obtained through segmentation of volumetric medical images. In clinical practice, highly anisotropic volumetric scans with large slice distances are prevalent, e.g., to reduce radiation exposure in CT or image acquisition time in MR imaging. For existing shape modeling approaches, the resolution of the emerging model is limited to the resolution of the training shapes. Therefore, any missing information between slices prohibits existing methods from learning a high-resolution shape prior. We propose a novel shape modeling approach that can be trained on sparse, binary segmentation masks with large slice distances. This is achieved through employing continuous shape representations based on neural implicit functions. After training, our model can reconstruct shapes from various sparse inputs at high target resolutions beyond the resolution of individual training examples. We successfully reconstruct high-resolution shapes from as few as three orthogonal slices. Furthermore, our shape model allows us to embed various sparse segmentation masks into a common, low-dimensional latent space — independent of the acquisition direction, resolution, spacing, and field of view. We show that the emerging latent representation discriminates between healthy and pathological shapes, even when provided with sparse segmentation masks. 
Lastly, we qualitatively demonstrate that the emerging latent space is smooth and captures characteristic modes of shape variation. We evaluate our shape model on two anatomical structures: the lumbar vertebra and the distal femur, both from publicly available datasets. Y1 - 2024 U6 - https://doi.org/10.1016/j.media.2024.103099 VL - 94 SP - 103099 ER -