We present visual methods for the analysis and comparison of the results of curved fibre reconstruction algorithms, i.e., of algorithms extracting characteristics of curved fibres from X-ray computed tomography scans. In this work, we extend previous methods for the analysis and comparison of results of different fibre reconstruction algorithms or parametrisations to the analysis of curved fibres. We propose fibre dissimilarity measures for such curved fibres and apply these to compare multiple results to a specified reference. We further propose visualisation methods to analyse differences between multiple results quantitatively and qualitatively. In two case studies, we show that the presented methods provide valuable insights for advancing and parametrising fibre reconstruction algorithms, and help to improve their results in characterising curved fibres.
We present time-space trade-offs for computing the Euclidean minimum spanning tree of a set S of n point-sites in the plane. More precisely, we assume that S resides in a random-access memory that can only be read. The edges of the Euclidean minimum spanning tree EMST(S) have to be reported sequentially, and they cannot be accessed or modified afterwards. There is a parameter s in {1, ..., n} so that the algorithm may use O(s) cells of read-write memory (called the workspace) for its computations. Our goal is to find an algorithm that has the best possible running time for any given s between 1 and n.
We show how to compute EMST(S) in O((n^3/s^2) log s) time with O(s) cells of workspace, giving a smooth trade-off between the two best-known bounds O(n^3) for s = 1 and O(n log n) for s = n. For this, we run Kruskal's algorithm on the "relative neighborhood graph" (RNG) of S. It is a classic fact that the minimum spanning tree of RNG(S) is exactly EMST(S). To implement Kruskal's algorithm with O(s) cells of workspace, we define s-nets, a compact representation of planar graphs. This allows us to efficiently maintain and update the components of the current minimum spanning forest as the edges are being inserted.
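As an illustration of the classic fact cited above (not of the paper's O(s)-workspace machinery), the following minimal sketch computes EMST(S) by running Kruskal's algorithm on the RNG edges; the brute-force RNG test and the union-find structure are standard textbook components.

```python
# Sketch: EMST via Kruskal's algorithm on the relative neighborhood graph.
# RNG definition used here: (p, q) is an edge iff no site r is strictly
# closer to both p and q than they are to each other.
import itertools, math

def rng_edges(sites):
    edges = []
    for p, q in itertools.combinations(range(len(sites)), 2):
        d = math.dist(sites[p], sites[q])
        if not any(max(math.dist(sites[r], sites[p]),
                       math.dist(sites[r], sites[q])) < d
                   for r in range(len(sites)) if r not in (p, q)):
            edges.append((d, p, q))
    return edges

def emst(sites):
    parent = list(range(len(sites)))
    def find(x):                      # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    tree = []
    for d, p, q in sorted(rng_edges(sites)):   # Kruskal: edges by weight
        rp, rq = find(p), find(q)
        if rp != rq:
            parent[rp] = rq
            tree.append((p, q))
    return tree

print(emst([(0, 0), (1, 0), (0, 1), (2, 2)]))  # [(0, 1), (0, 2), (1, 3)]
```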
Routing in polygonal domains
(2019)
We consider the problem of routing a data packet through the visibility graph of a polygonal domain P with n vertices and h holes. We may preprocess P to obtain a "label" and a "routing table" for each vertex of P. Then, we must be able to route a data packet between any two vertices p and q of P, where each step must use only the label of the target node q and the routing table of the current node.
For any fixed epsilon > 0, we present a routing scheme that always achieves a routing path whose length exceeds the shortest path by a factor of at most 1 + epsilon. The labels have O(log n) bits, and the routing tables are of size O((epsilon^(-1) + h) log n). The preprocessing time is O(n^2 log n). It can be improved to O(n^2) for simple polygons.
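The routing model above constrains each forwarding step to the target's label plus the current node's table. Below is a toy sketch of that interface only; unlike the paper's scheme, whose tables have size O((epsilon^(-1) + h) log n), this hypothetical illustration simply stores one next-hop entry per target label.

```python
# Hypothetical sketch of the routing interface: each step may inspect only
# the target's label and the current node's routing table.
def route(p, q_label, tables):
    """Forward a packet from node p to the node labelled q_label."""
    path = [p]
    current = p
    while tables[current]["label"] != q_label:
        # The table maps a target label to the next hop on an
        # approximately shortest path (within factor 1 + epsilon).
        current = tables[current]["next_hop"][q_label]
        path.append(current)
    return path

# Toy example: a triangle where every node stores one entry per target label.
tables = {
    0: {"label": "a", "next_hop": {"b": 1, "c": 2}},
    1: {"label": "b", "next_hop": {"a": 0, "c": 2}},
    2: {"label": "c", "next_hop": {"a": 0, "b": 1}},
}
print(route(0, "c", tables))  # [0, 2]
```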
We present visual analysis methods for the evaluation of tomographic fiber reconstruction algorithms by means of analysis, visual debugging and comparison of reconstructed fibers in materials science. The methods are integrated in a tool (FIAKER) that supports the entire workflow. It enables the analysis of various fiber reconstruction algorithms, of differently parameterized fiber reconstruction algorithms and of individual steps in iterative fiber reconstruction algorithms. Insight into the performance of fiber reconstruction algorithms is obtained through a list-based ranking interface. A 3D view offers interactive visualization techniques to gain deeper insight, e.g., into the aggregated quality of the examined fiber reconstruction algorithms and parameterizations. The tool was designed in close collaboration with researchers who work with fiber-reinforced polymers on a daily basis and develop algorithms for tomographic reconstruction and characterization of such materials. We evaluate the tool using synthetic datasets as well as tomograms of real materials. Five case studies attest to the usefulness of the tool, showing that it significantly accelerates the analysis and provides valuable insights that make it possible to improve the fiber reconstruction algorithms. The main contribution of the paper is the well-considered combination of methods and their seamless integration into a visual tool that supports the entire workflow. Further findings result from the analysis of (dis-)similarity measures for fibers as well as from the discussion of design decisions. It is also shown that the generality of the analytical methods allows a wider range of applications, such as the application in pore space analysis.
Background
Geometric parameters have been proposed for prediction of cerebral aneurysm rupture risk. Predicting the rupture risk for incidentally detected unruptured aneurysms could help clinicians in their treatment decision. However, assessment of geometric parameters depends on several factors, including the spatial resolution of the imaging modality used and the chosen reconstruction procedure. The aim of this study was to investigate the uncertainty of a variety of previously proposed geometric parameters for rupture risk assessment, caused by variability of reconstruction procedures.
Materials
26 research groups provided segmentations and surface reconstructions of five cerebral aneurysms as part of the Multiple Aneurysms AnaTomy CHallenge (MATCH) 2018. In total, 40 dimensional and non-dimensional geometric parameters, describing aneurysm size, neck size, and irregularity of aneurysm shape, were computed. The medians as well as the absolute and relative uncertainties of the parameters were calculated. Additionally, linear regression analysis was performed on the absolute uncertainties and the median parameter values.
Results
A large variability of relative uncertainties, in the range between 3.9% and 179.8%, was found. Linear regression analysis indicates that some parameters capture similar geometric aspects. The lowest uncertainties (< 6%) were found for the non-dimensional parameters isoperimetric ratio, convexity ratio, and ellipticity index. Uncertainty of 2D and 3D size parameters was significantly higher than uncertainty of 1D parameters. The most extreme uncertainties (> 80%) were found for some curvature parameters.
Conclusions
Uncertainty analysis is essential on the road to clinical translation and use of rupture risk prediction models. The uncertainty quantification of geometric rupture risk parameters provided by this study may support the development of future rupture risk prediction models.
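For illustration, here is a minimal sketch of one of the non-dimensional parameters named above, assuming the common definition of the convexity ratio as the surface-enclosed volume divided by the volume of the convex hull; the challenge's exact definitions may differ, and the input mesh faces are assumed to be consistently oriented.

```python
# Sketch: convexity ratio of a closed triangle mesh (assumed definition:
# enclosed volume / convex hull volume).
import numpy as np
from scipy.spatial import ConvexHull

def mesh_volume(vertices, faces):
    # Signed volume via the divergence theorem, summed over triangles;
    # requires consistently oriented faces of a closed mesh.
    v = vertices[faces]                               # (n_faces, 3, 3)
    return abs(np.einsum('ij,ij->', np.cross(v[:, 0], v[:, 1]), v[:, 2])) / 6.0

def convexity_ratio(vertices, faces):
    return mesh_volume(vertices, faces) / ConvexHull(vertices).volume

# Toy example: a tetrahedron is convex, so the ratio is 1.0.
verts = np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
faces = np.array([[0, 2, 1], [0, 1, 3], [0, 3, 2], [1, 2, 3]])
print(convexity_ratio(verts, faces))
```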
The historical importance of ancient manuscripts is unique since they provide information about the heritage of ancient cultures. Often texts are hidden in rolled or folded documents. Due to recent improvements in sensitivity and resolution, spectacular disclosures of rolled hidden texts were possible by X-ray tomography. However, revealing text on folded manuscripts is even more challenging. Manual unfolding is often too risky in view of the fragile condition of fragments, as it can lead to the total loss of the document. X-ray tomography allows for virtual unfolding and enables non-destructive access to hidden texts. We have recently demonstrated the procedure and tested unfolding algorithms on a mockup sample. Here, we present results on unfolding ancient papyrus packages from the papyrus collection of the Musée du Louvre, among them objects folded along approximately orthogonal folding lines. In one of the packages, the first identification of a word was achieved, the Coptic word for “Lord”.
This work introduces methods for analyzing the three imaging modalities delivered by Talbot-Lau grating interferometry X-ray computed tomography (TLGI-XCT). The first problem we address is providing a quick way to show a fusion of all three modalities. For this purpose the tri-modal transfer function widget is introduced. The widget controls a mixing function that uses the output of the transfer functions of all three modalities, allowing the user to create one customized fused image. A second problem prevalent in processing TLGI-XCT data is a lack of tools for analyzing the segmentation process of such multimodal data. We address this by providing methods for computing three types of uncertainty: from probabilistic segmentation algorithms, from the voxel neighborhoods, as well as from a collection of results. We furthermore introduce a linked-views interface to explore this data. The techniques are evaluated on a TLGI-XCT scan of a carbon-fiber-reinforced polymer specimen with impact damage. We show that the transfer function widget accelerates and facilitates the exploration of this dataset, while the uncertainty analysis methods give insights into how to tweak and improve segmentation algorithms for more suitable results.
Epithelial mesenchymal transition (EMT) has been shown to be highly relevant to cancer prognosis. However, although different biological network-based biomarker identification methods have been proposed to predict cancer prognosis, the EMT network has not been directly used for this purpose. In this study, we constructed an EMT regulatory network consisting of 87 molecules and tried to select features that are useful for prognosis prediction in Lung Adenocarcinoma (LUAD). To incorporate multiple molecular profiles, we obtained four types of molecular data including mRNA-Seq, copy number alteration (CNA), DNA methylation, and miRNA-Seq data from The Cancer Genome Atlas. The data were mapped to the EMT network in three alternative ways: mRNA-Seq and miRNA-Seq, DNA methylation, and CNA and miRNA-Seq. Each mapping was employed to extract five different sets of features using discretization and network-based biomarker identification methods. Each feature set was then used to predict prognosis with SVM and logistic regression classifiers. We measured the prediction accuracy with AUC and AUPR values using 10 times 10-fold cross-validation. For a more comprehensive evaluation, we also measured the prediction accuracies of clinical features, EMT plus clinical features, 87 randomly picked molecules from each data mapping, and all molecules from each data type. Counter-intuitively, EMT features do not always outperform randomly selected features, and the prediction accuracies of the five feature sets are mostly not significantly different. Clinical features are shown to give the highest prediction accuracies. In addition, the prediction accuracies of both EMT features and random features are comparable to those obtained using all features (more than 17,000) from each data type.
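A minimal sketch of the evaluation protocol described above: 10 repetitions of 10-fold cross-validation scored with AUC, for an SVM and logistic regression. The feature matrix and labels below are random placeholders, not the TCGA data.

```python
# Sketch: 10 x 10-fold cross-validated AUC for two classifiers (sklearn).
import numpy as np
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 87))        # placeholder for 87 EMT-network features
y = rng.integers(0, 2, size=200)      # placeholder prognosis labels

cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=0)
for clf in (SVC(), LogisticRegression(max_iter=1000)):
    auc = cross_val_score(clf, X, y, cv=cv, scoring='roc_auc')
    print(type(clf).__name__, auc.mean())
```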
Many material properties are strongly influenced by dislocations, the carriers of plastic deformation. It is therefore paramount to have appropriate tools to quantify dislocation substructures with regard to their features, e.g., dislocation density, Burgers vectors or line direction. While the transmission electron microscope (TEM) has been the most widely used instrument for investigating dislocations, it is usually limited to the two-dimensional (2D) observation of three-dimensional (3D) structures. We reconstruct, visualize and quantify 3D dislocation substructure models from only two TEM images (stereo pairs) and assess the results. The reconstruction is based on the manual interactive tracing of filiform objects on both images of the stereo pair. The reconstruction and quantification method are demonstrated on dark field (DF) scanning (S)TEM micrographs of dislocation substructures imaged under diffraction contrast conditions. For this purpose, thick regions (>300 nm) of TEM foils are analyzed, which are extracted from a Ni-base superalloy single crystal after high temperature creep deformation. It is shown how the method allows 3D quantification from stereo pairs in a wide range of tilt conditions, achieving line length and orientation uncertainties of 3% and 7°, respectively. Parameters that affect the quality of such reconstructions are discussed.
In 2004, a team of researchers realized a semi-immersive interactive gallery installation, visualizing an 1834 Mediterranean garden, introduced as “italienisches Kunststück” (Italian legerdemain) by Peter Joseph Lenné. The park was originally realized on the grounds of Schloss Sanssouci in Potsdam, Germany. The installation centered on highly detailed renderings of hundreds of plants projected upon a panoramic display. Interactivity was expressed with a tangible interface which (while presently dated) we believe remains without near-precedent then or since. We present the installation (experienced by roughly 20,000 visitors), focusing on the interaction aspects. We introduce new book and table/door-format mockups. Drawing upon a heuristic of the scientist-philosopher Freeman Dyson, we consider grounded future prospect variations in the contexts of 2018, 2032, and 2202. We see this exercise as prospectively generalizing to a variety of similar and widely diverse application domains.
The use of tangible interfaces for navigation of landscape scenery – for example, lost places re-created in 3D – has been pursued and articulated as a promising, impactful application of interactive visualization. In this demonstration, we present a modern, low-cost implementation of a previously realized multimodal gallery installation. Our demonstration centers upon the versatile usage of a smartphone for sensing, navigating, and (optionally) displaying elements on a physical surface in tandem with a larger, more immersive display.
For Kendall’s shape space we determine analytically Jacobi fields and parallel transport, and compute geodesic regression. Using the derived expressions, we can fully leverage the geometry via Riemannian optimization and reduce the computational expense by several orders of magnitude. The methodology is demonstrated by performing a longitudinal statistical analysis of epidemiological shape data.
As application example we have chosen 3D shapes of knee bones, reconstructed from image data of the Osteoarthritis Initiative. Comparing subject groups with incident and developing osteoarthritis versus normal controls, we find clear differences in the temporal development of femur shapes. This paves the way for early prediction of incident knee osteoarthritis, using geometry data only.
The analysis and visualization of nucleic acids (RNA and DNA) is playing an increasingly important role due to their fundamental importance for all forms of life and the growing number of known 3D structures of such molecules. The great complexity of these structures, in particular, those of RNA, demands interactive visualization to get deeper insights into the relationship between the 2D secondary structure motifs and their 3D tertiary structures. Over the last decades, a lot of research in molecular visualization has focused on the visual exploration of protein structures while nucleic acids have only been marginally addressed. In contrast to proteins, which are composed of amino acids, the ingredients of nucleic acids are nucleotides. They form structuring patterns that differ from those of proteins and, hence, also require different visualization and exploration techniques. In order to support interactive exploration of nucleic acids, the computation of secondary structure motifs as well as their visualization in 2D and 3D must be fast. Therefore, in this paper, we focus on the performance of both the computation and visualization of nucleic acid structure. We present a ray casting-based visualization of RNA and DNA secondary and tertiary structures, which enables for the first time real-time visualization of even large molecular dynamics trajectories. Furthermore, we provide a detailed description of all important aspects to visualize nucleic acid secondary and tertiary structures. With this, we close an important gap in molecular visualization.
In molecular structure analysis and visualization, the molecule’s atoms are often modeled as hard spheres parametrized by their positions and radii. While the atom positions result from experiments or molecular simulations, for the radii typically values are taken from literature. Most often, van der Waals (vdW) radii are used, for which diverse values exist. As a consequence, different visualization and analysis tools use different atomic radii, and the analyses are less objective than often believed. Furthermore, for the geometric accessibility analysis of molecular structures, vdW radii are not well suited. The reason is that during the molecular dynamics simulation, depending on the force field and the kinetic energy in the system, non-bonded atoms can come so close to each other that their vdW spheres intersect. In this paper, we introduce a new kind of atomic radius, called ‘atomic accessibility radius’, that better characterizes the accessibility of an atom in a given molecular trajectory. The new radii reflect the movement possibilities of atoms in the simulated physical system. They are computed by solving a linear program that maximizes the radii of the atoms under the constraint that non-bonded spheres do not intersect in the considered molecular trajectory. Using this data-driven approach, the actual accessibility of atoms can be visualized more precisely.
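A minimal sketch of the linear program described above: maximize the sum of atomic radii subject to non-bonded spheres staying disjoint in every frame. The pair list and minimum distances are toy stand-ins for a real trajectory.

```python
# Sketch: accessibility radii as an LP (maximize sum of radii subject to
# r_i + r_j <= min over frames of dist(i, j) for non-bonded pairs).
import numpy as np
from scipy.optimize import linprog

n_atoms = 4
pairs = [(0, 2), (0, 3), (1, 3)]                 # toy non-bonded pairs
d_min = {(0, 2): 3.1, (0, 3): 2.8, (1, 3): 3.4}  # toy minimum distances

c = -np.ones(n_atoms)                 # maximize sum(r) == minimize -sum(r)
A_ub = np.zeros((len(pairs), n_atoms))
for row, (i, j) in enumerate(pairs):
    A_ub[row, i] = A_ub[row, j] = 1.0
b_ub = np.array([d_min[p] for p in pairs])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
print(res.x)                          # accessibility radii per atom
```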
Simulations and measurements of blood and air flow inside the human circulatory and respiratory system play an increasingly important role in personalized medicine for prevention, diagnosis, and treatment of diseases. This survey focuses on three main application areas. (1) Computational Fluid Dynamics (CFD) simulations of blood flow in cerebral aneurysms assist in predicting the outcome of this pathologic process and of therapeutic interventions. (2) CFD simulations of nasal airflow allow for investigating the effects of obstructions and deformities and provide therapy decision support. (3) 4D Phase-Contrast (4D PC) Magnetic Resonance Imaging (MRI) of aortic hemodynamics supports the diagnosis of various vascular and valve pathologies as well as their treatment. An investigation of the complex and often dynamic simulation and measurement data requires the coupling of sophisticated visualization, interaction, and data analysis techniques.
In this paper, we survey the large body of work that has been conducted within this realm. We extend previous surveys by incorporating nasal airflow, addressing the joint investigation of blood flow and vessel wall properties, and providing a more fine-granular taxonomy of the existing techniques. From the survey, we extract major research trends and identify open problems and future challenges. The survey is intended for researchers interested in medical flow but also, more generally, in the combined visualization of physiology and anatomy, the extraction of features from flow field data and feature-based visualization, the visual comparison of different simulation results, and the interactive visual analysis of the flow field and derived characteristics.
Intact joints are necessary for skeletal function and mobility in daily life. A healthy musculoskeletal system is the basis for a functional cardiovascular system as well as an intact immune system. Locomotion, physiotherapy, and various forms of patient activity are essential clinical therapies used in the treatment of neurodegeneration, stroke, diabetes, and cancer. Mobility is substantially impaired by degeneration of joints and, in advanced stages, nighttime pain and sleep disturbance are particularly cumbersome. Osteoarthritis (OA) is also known as degenerative joint disease. OA involves structural and compositional changes in the articular cartilage, as well as in the calcified cartilage, subchondral cortical bone, subchondral cancellous bone, meniscus, joint capsular tissue, and synovium, which eventually lead to degeneration of these tissues comprising synovial joints.
High-resolution, multi-modal climate data is becoming increasingly important for improving existing weather prediction and reanalysis capabilities. The advent of increasingly dense numerical simulations of atmospheric phenomena provides new means to better understand dynamic processes and to visualize structural flow patterns that would otherwise remain hidden. In the presented illustrations, we demonstrate an advanced technique to visualize multiple scales of dense flow fields and the Lagrangian patterns therein, simulated by state-of-the-art simulation models for each scale. They provide a deeper insight into the structural differences and patterns that occur on each scale and highlight the complexity of flow phenomena in our atmosphere.
This paper is associated with a poster winner of a 2016 APS/DFD Milton van Dyke Award for work presented at the DFD Gallery of Fluid Motion. The original poster is available from the Gallery of Fluid Motion, https://doi.org/10.1103/APS.DFD.2016.GFM.P0030
In atmospheric sciences, the sizes of data sets grow continuously due to increasing resolutions. A central task is the comparison of spatiotemporal fields, to assess different simulations and to compare simulations with observations. A significant information reduction is possible by focusing on geometric-topological features of the fields or on derived meteorological objects. Due to the huge size of the data sets, spatial features have to be extracted in time slices and traced over time. Fields with a chaotic component, i.e., without 1:1 spatiotemporal correspondences, can be compared by looking at statistics of feature properties. Feature extraction, however, requires a clear mathematical definition of the features, which many meteorological objects still lack. Traditionally, object extractions are often heuristic, defined only by implemented algorithms, and thus are not comparable. This work surveys our framework designed for efficient development of feature tracking methods and for testing new feature definitions. The framework supports well-established visualization practices and is being used by atmospheric researchers to diagnose and compare data.
Comets display with decreasing solar distance an increased emission of gas and dust particles, leading to the formation of the coma and tail. Spacecraft missions provide insight in the temporal and spatial variations of the dust and gas sources located on the cometary nucleus. For the case of comet 67P/Churyumov-Gerasimenko (67P/C-G), the long-term obser- vations from the Rosetta mission point to a homogeneous dust emission across the entire illuminated surface. Despite the homogeneous initial dis- tribution, a collimation in jet-like structures becomes visible. We propose that this observation is linked directly to the complex shape of the nucleus and projects concave topographical features into the dust coma. To test this hypothesis, we put forward a gas-dust description of 67P/C-G, where gravitational and gas forces are accurately determined from the surface mesh and the rotation of the nucleus is fully incorporated. The emerging jet-like structures persist for a wide range of gas-dust interactions and show a dust velocity dependent bending.
In this paper, we propose a new method for optimizing the blue noise characteristics of point sets. It is based on Procrustes analysis, a technique for adjusting shapes to each other by applying optimal elements of an appropriate transformation group. We adapt this technique to the problem at hand and introduce a very simple, efficient and provably convergent point set optimizer.
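For context on the core building block (not the paper's full optimizer), here is a minimal sketch of classical orthogonal Procrustes analysis: the rotation best aligning one centered point configuration to another, obtained from an SVD.

```python
# Sketch: orthogonal Procrustes — find rotation R minimizing ||A @ R - B||_F.
import numpy as np

def procrustes_rotation(A, B):
    """Optimal rotation aligning centered point set A to B."""
    U, _, Vt = np.linalg.svd(A.T @ B)
    R = U @ Vt
    if np.linalg.det(R) < 0:          # restrict to proper rotations
        U[:, -1] *= -1
        R = U @ Vt
    return R

# Toy example: B is A rotated by 0.7 rad; the rotation is recovered.
A = np.random.default_rng(1).normal(size=(10, 2))
theta = 0.7
B = A @ np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
print(procrustes_rotation(A, B))
```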
Structural properties of molecules are of primary concern in many fields. This report provides a comprehensive overview on techniques that have been developed in the fields of molecular graphics and visualization with a focus on applications in structural biology. The field heavily relies on computerized geometric and visual representations of three-dimensional, complex, large and time-varying molecular structures. The report presents a taxonomy that demonstrates which areas of molecular visualization have already been extensively investigated and where the field is currently heading. It discusses visualizations for molecular structures, strategies for efficient display regarding image quality and frame rate, covers different aspects of level of detail and reviews visualizations illustrating the dynamic aspects of molecular simulation data. The survey concludes with an outlook on promising and important research topics to foster further success in the development of tools that help to reveal molecular secrets.
Large-eddy simulations (LES) with the new ICOsahedral Non-hydrostatic atmosphere model (ICON) covering Germany are evaluated for four days in spring 2013 using observational data from various sources. Reference simulations with the established Consortium for Small-scale Modelling (COSMO) numerical weather prediction model and further standard LES codes are performed and used as a reference. This comprehensive evaluation approach covers multiple parameters and scales, focusing on boundary-layer variables, clouds and precipitation. The evaluation points to the need to work on parametrizations influencing the surface energy balance, and possibly on ice cloud microphysics. The central purpose for the development and application of ICON in the LES configuration is the use of simulation results to improve the understanding of moist processes, as well as their parametrization in climate models. The evaluation thus aims at building confidence in the model's ability to simulate small- to mesoscale variability in turbulence, clouds and precipitation. The results are encouraging: the high-resolution model matches the observed variability much better at small- to mesoscales than the coarser resolved reference model. In its highest grid resolution, the simulated turbulence profiles are realistic and column water vapour matches the observed temporal variability at short time-scales. Despite being somewhat too large and too frequent, small cumulus clouds are well represented in comparison with satellite data, as is the shape of the cloud size spectrum. Variability of cloud water matches the satellite observations much better in ICON than in the reference model. In this sense, it is concluded that the model is fit for the purpose of using its output for parametrization development, despite the potential to improve further some important aspects of processes that are also parametrized in the high-resolution model.
Ancient Egyptian papyri are often folded, rolled up or kept as small packages, sometimes even sealed. Physically unrolling or unfolding these packages might severely damage them. We demonstrate a way to get access to the hidden script without physical unfolding by employing computed tomography and mathematical algorithms for virtual unrolling and unfolding. Our algorithmic approaches are combined with manual interaction. This provides the necessary flexibility to enable the unfolding of even complicated and partly damaged papyrus packages. In addition, it allows us to cope with challenges posed by the structure of ancient papyrus, which is rather irregular, compared to other writing substrates like metallic foils or parchment. Unfolding of packages is done in two stages. In the first stage, we virtually invert the physical folding process step by step until the partially unfolded package is topologically equivalent to a scroll or a papyrus sheet folded only along one fold line. To minimize distortions at this stage, we apply the method of moving least squares. In the second stage, the papyrus is simply flattened, which requires the definition of a medial surface. We have applied our software framework to several papyri. In this work, we present the results of applying our approaches to mockup papyri that were either rolled or folded along perpendicular fold lines. In the case of the folded papyrus, our approach represents the first attempt to address the unfolding of such complicated folds.
In this report we review and structure the branch of molecular visualization that is concerned with the visual analysis of cavities in macromolecular protein structures. First the necessary background, the domain terminology, and the goals of analytical reasoning are introduced. Based on a comprehensive collection of relevant research works, we present a novel classification for cavity detection approaches and structure them into four distinct classes: grid-based, Voronoi-based, surface-based, and probe-based methods. The subclasses are then formed by their combinations. We match these approaches with corresponding visualization technologies starting with direct 3D visualization, followed with non-spatial visualization techniques that for example abstract the interactions between structures into a relational graph, straighten the cavity of interest to see its profile in one view, or aggregate the time sequence into a single contour plot. We also discuss the current state of methods for the visual analysis of cavities in dynamic data such as molecular dynamics simulations. Finally, we give an overview of the most common tools that are actively developed and used in the structural biology and biochemistry research. Our report is concluded by an outlook on future challenges in the field.
Statistical methods to design computer experiments usually rely on a Gaussian process (GP) surrogate model, and typically aim at selecting design points (combinations of algorithmic and model parameters) that minimize the average prediction variance, or maximize the prediction accuracy for the hyperparameters of the GP surrogate.
In many applications, experiments have a tunable precision, in the sense that one software parameter controls the tradeoff between accuracy and computing time (e.g., mesh size in FEM simulations or number of Monte-Carlo samples).
We formulate the problem of allocating a budget of computing time over a finite set of candidate points for the goals mentioned above. This is a continuous optimization problem, which is moreover convex whenever the tradeoff function accuracy vs. computing time is concave.
On the other hand, using non-concave weight functions can help to identify sparse designs. In addition, using sparse kernel approximations drastically reduces the cost per iteration of the multiplicative weights updates that can be used to solve this problem.
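A toy sketch of a multiplicative weights update for budget allocation follows; the utility below (a concave diminishing-returns stand-in) is purely illustrative and not the paper's prediction-variance criterion.

```python
# Sketch: multiplicative weights update over candidate points, renormalized
# to a fixed computing-time budget after each step.
import numpy as np

def allocate_budget(grad_utility, n_points, budget, eta=0.5, iters=200):
    w = np.full(n_points, budget / n_points)   # start from uniform allocation
    for _ in range(iters):
        g = grad_utility(w)                    # gradient of a concave utility
        w *= np.exp(eta * g)                   # multiplicative update
        w *= budget / w.sum()                  # project back onto the budget
    return w

# Toy concave utility: sum_i a_i * log(1 + w_i) (diminishing returns in time).
a = np.array([1.0, 2.0, 4.0, 0.5])
w = allocate_budget(lambda w: a / (1.0 + w), len(a), budget=10.0)
print(w.round(2))
```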
Molecular processes such as protein folding or ligand-receptor-binding can be understood by analyzing the free energy landscape. Those processes are often metastable, i.e. the molecular systems remain in basins around local minima of the free energy landscape, and in rare cases undergo gauche transitions between metastable states by passing saddle-points of this landscape. By discretizing the configuration space, this can be modeled as a discrete Markov process. One way to compute the transition rates between conformations of a molecular system is by utilizing Transition Path Theory and the concept of committor functions. A fundamental problem from the computational point of view is that many time-scales are involved, ranging from 10^(-14) sec for the fastest motion to 10^(-6) sec or more for conformation changes that cause biological effects.
The goal of our work is to provide a better understanding of such transitions in configuration space on various time-scales by analyzing characteristic scalar functions topologically and geometrically. We are developing suitable visualization and interaction techniques to support our analysis. For example, we are analyzing a transition rate indicator function by computing and visualizing its Reeb graph together with the sets of molecular states corresponding to maxima of the transition rate indicator function. A particular challenge is the high dimensionality of the domain which does not allow for a straightforward visualization of the function.
The computational topology approach to the analysis of the transition rate indicator functions for a molecular system allows exploring different time scales of the system by utilizing coarser or finer topological partitionings of the function. A specific goal is the development of tools for analyzing the hierarchy of these partitionings. This approach tackles the analysis of a complex and sparse dataset from a different angle than the well-known spectral analysis of Markov State Models.
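To make the committor concept referred to above concrete, here is a minimal sketch for a discretized configuration space modeled as a discrete Markov chain: with transition matrix P and disjoint state sets A and B, the committor q(i), i.e., the probability of reaching B before A when starting in state i, solves a linear system.

```python
# Sketch: committor function of a discrete Markov chain (Transition Path
# Theory): q = 0 on A, q = 1 on B, and q = P q on the remaining states C,
# i.e. (I - P_CC) q_C = P_CB @ 1.
import numpy as np

def committor(P, A, B):
    n = P.shape[0]
    C = [i for i in range(n) if i not in A and i not in B]
    lhs = np.eye(len(C)) - P[np.ix_(C, C)]
    rhs = P[np.ix_(C, list(B))].sum(axis=1)
    q = np.zeros(n)
    q[list(B)] = 1.0
    q[C] = np.linalg.solve(lhs, rhs)
    return q

# Toy 5-state birth-death chain with absorbing ends, A = {0}, B = {4}.
P = np.zeros((5, 5))
for i in range(1, 4):
    P[i, i - 1] = P[i, i + 1] = 0.5
P[0, 0] = P[4, 4] = 1.0
print(committor(P, A={0}, B={4}))   # [0, 0.25, 0.5, 0.75, 1]
```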
Monocrystalline Ni-base superalloys are the material of choice for first-row blades in jet engine gas turbines. Using a novel visualization tool for 3D reconstruction and visualization of dislocation line segments from stereo-pairs of scanning transmission electron micrographs, the superdislocation substructures in the Ni-base superalloy LEK 94 (crept to ε = 26%) are characterized. Probable scenarios for how these dislocation substructures form are discussed.
We consider the spectral proper orthogonal decomposition (SPOD) for experimental data of a turbulent swirling jet. This newly introduced method combines the advantages of spectral methods, such as Fourier decomposition or dynamic mode decomposition, with the energy-ranked proper orthogonal decomposition (POD). This poster visualizes how the modal energy spectrum transitions from the spectral purity of Fourier space to the sparsity of POD space. The transition is achieved by changing a single parameter – the width of the SPOD filter. Each dot in the 3D space corresponds to an SPOD mode pair, where the size and color indicates its spectral coherence. What we notice is that neither the Fourier nor the POD spectrum achieves a clear separation of the dynamic phenomena. Scanning through the graph from the front plane (Fourier) to the back plane (POD), we observe how three highly coherent SPOD modes emerge from the dispersed Fourier spectrum and later branch out into numerous POD modes.
The spatial properties of these three individual SPOD modes are displayed in the back of the graph using line integral convolution colored by vorticity. The first two modes correspond to single-helical global instabilities that are well known for these flows. Their coexistence, however, has not been observed until now. The third mode is of double-helical shape and has not been observed so far. For the considered data set and many others, the SPOD is superior in identifying coherent structures in turbulent flows. Hopefully, it gives access to new fluid dynamic phenomena and enriches the available methods.
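A minimal sketch of the single-parameter transition described above, assuming the snapshot-based SPOD of Sieber et al.: the snapshot correlation matrix is low-pass filtered along its diagonals before the eigendecomposition; filter width 0 recovers POD, and a very wide filter approaches a Fourier decomposition.

```python
# Sketch: SPOD via diagonal filtering of the snapshot correlation matrix.
import numpy as np

def spod_modes(snapshots, filter_width):
    """snapshots: (n_space, n_time) array of fluctuation data."""
    n_t = snapshots.shape[1]
    K = snapshots.T @ snapshots / n_t            # snapshot correlation matrix
    S = np.zeros_like(K)
    offsets = range(-filter_width, filter_width + 1)
    for i in range(n_t):
        for j in range(n_t):
            vals = [K[i + k, j + k] for k in offsets
                    if 0 <= i + k < n_t and 0 <= j + k < n_t]
            S[i, j] = np.mean(vals)              # moving average along diagonal
    mu, a = np.linalg.eigh(S)                    # temporal coefficients
    order = np.argsort(mu)[::-1]
    modes = snapshots @ a[:, order]              # spatial modes (unnormalized)
    return modes, mu[order]
```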
Intact joints are a prerequisite for the functioning of the skeleton and for mobility in everyday life. A healthy musculoskeletal system is the basis for the functioning of the cardiovascular system as well as of the immune defence. Locomotion therapy and physiotherapy, as well as various forms of patient activity, are essential clinical approaches in the treatment of neurodegenerative diseases, stroke, diabetes, and cancer. Degenerative changes of the joints substantially impair mobility. Nighttime pain and sleep disturbances occur in advanced stages and are particularly burdensome. Osteoarthritis is also referred to as degenerative joint disease. It is accompanied by changes in the structure and composition of the articular cartilage as well as of the calcified cartilage, the subchondral cortical bone, the subchondral cancellous bone, the meniscus, the joint capsule, and the synovium, which eventually leads to the degeneration of these tissues that make up the synovial joints.
Purpose: To account for the impact of turbulence in blood damage modeling, a novel approach based on the generation of instantaneous flow fields from RANS simulations is proposed.
Methods: Turbulent flow in a bileaflet mechanical heart valve was simulated with a RANS-based (SST k-ω) flow solver in FLUENT 14.5. The calculated Reynolds shear stress (RSS) field is transformed into a set of divergence-free random vector fields representing turbulent velocity fluctuations using procedural noise functions. To consider the random path of the blood cells, instantaneous flow fields were computed for each time step by summation of the RSS-based divergence-free random and mean velocity fields. Using those instantaneous flow fields, instantaneous pathlines and corresponding point-wise instantaneous shear stresses were calculated. For comparison, averaged pathlines based on the mean velocity field and the respective viscous shear stresses together with RSS values were calculated. Finally, the blood damage index (hemolysis) was integrated along the averaged and instantaneous pathlines using a power law approach and then compared.
Results: Using RSS in blood damage modeling without a correction factor overestimates damaging stress and thus the blood damage (hemolysis). Blood damage histograms based on both presented approaches differ.
Conclusions: A novel approach to calculate blood damage without using RSS as a damaging parameter is established. The results of our numerical experiment support the hypothesis that the use of RSS as a damaging parameter should be avoided.
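A minimal sketch of the power-law accumulation step mentioned above: along a pathline with point-wise scalar stress sigma(t), the damage index is accumulated as D = sum C * sigma^alpha * dt^beta. The constants below follow the often-cited Giersiepen correlation and are purely illustrative, not the paper's calibration.

```python
# Sketch: power-law hemolysis index accumulated along one pathline.
import numpy as np

def damage_index(sigma, dt, C=3.62e-7, alpha=2.416, beta=0.785):
    """sigma: stresses (Pa) sampled along the pathline; dt: time steps (s)."""
    return np.sum(C * sigma**alpha * dt**beta)

sigma = np.array([50.0, 120.0, 80.0])   # toy instantaneous shear stresses
dt = np.full(3, 1e-3)
print(damage_index(sigma, dt))
```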
A framework is proposed for extracting features in 2D transient flows, based on the acceleration field to ensure Galilean invariance. The minima of the acceleration magnitude, i.e., a superset of the acceleration zeros, are extracted and discriminated into vortices and saddle points, based on the spectral properties of the velocity Jacobian. The extraction of topological features is performed with purely combinatorial algorithms from discrete computational topology. The feature points are prioritized with persistence, as a physically meaningful importance measure. These features are tracked in time with a robust tracking algorithm. Thus a space-time hierarchy of the minima is built and vortex merging events are detected. The acceleration feature extraction strategy is applied to three two-dimensional shear flows: (1) an incompressible periodic cylinder wake, (2) an incompressible planar mixing layer and (3) a weakly compressible planar jet. The vortex-like acceleration feature points are shown to be well aligned with acceleration zeros, maxima of the vorticity magnitude, minima of the pressure field, and minima of λ2.
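A minimal grid-based sketch of the extraction idea described above (the paper's combinatorial/persistence machinery is omitted): compute the material acceleration a = du/dt + (u . grad) u, locate minima of |a|, and classify each minimum via the eigenvalues of the velocity Jacobian, complex for vortex-like points and real for saddle-like points.

```python
# Sketch: Galilean-invariant acceleration minima on a 2D velocity grid.
import numpy as np
from scipy.ndimage import minimum_filter

def classify_acceleration_minima(u, v, u_prev, v_prev, dt, dx):
    """u, v: velocity components indexed [y, x] at the current time step."""
    du_dy, du_dx = np.gradient(u, dx)
    dv_dy, dv_dx = np.gradient(v, dx)
    ax = (u - u_prev) / dt + u * du_dx + v * du_dy   # material acceleration
    ay = (v - v_prev) / dt + u * dv_dx + v * dv_dy
    a_mag = np.hypot(ax, ay)
    minima = (a_mag == minimum_filter(a_mag, size=3))
    features = []
    for i, j in zip(*np.nonzero(minima)):
        J = np.array([[du_dx[i, j], du_dy[i, j]],
                      [dv_dx[i, j], dv_dy[i, j]]])
        kind = 'vortex' if np.iscomplex(np.linalg.eigvals(J)).any() else 'saddle'
        features.append((i, j, kind))
    return features
```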
This bachelor's thesis develops a general method for automatically determining the thickness of the mineralised layer of shark jaw elements. The method approximates the thickness in two-dimensional (2D) as well as in three-dimensional (3D) space from computed tomography scans (referred to below as the two-dimensional and three-dimensional case, respectively).
Three candidate methods are introduced and subsequently analysed with respect to their applicability. The implementation of the thickness determination uses the core of the Rayburst sampling method, which is further improved for 2D space through minor optimizations. The accuracy of the program developed for the two-dimensional case is verified manually. For a comparison in 3D space, a second method is implemented that is based on the computation of isosurfaces.
This work belongs to the field of applied mathematics with an emphasis on computer science. The developed program will subsequently be applied in the field of biology at the Max-Planck-Institut für Grenzflächen- und Kolloidforschung Potsdam-Golm.
Creating motions of objects or characters that are physically plausible and follow an animator’s intent is a key task in computer animation. The spacetime constraints paradigm is a valuable approach to this problem, but it suffers from high computational costs. Based on spacetime constraints, we propose a technique that controls the motion of deformable objects and offers an interactive response. This is achieved by a model reduction of the underlying variational problem, which combines dimension reduction, multipoint linearization, and decoupling of ODEs. After a preprocess, the cost for creating or editing a motion is reduced to solving a number of one-dimensional spacetime problems, whose solutions are the wiggly splines introduced by Kass and Anderson [2008]. We achieve interactive response using a new fast and robust numerical scheme for solving a set of one-dimensional problems based on an explicit representation of the wiggly splines.
Purpose/Aims of the Study: Bone’s hierarchical structure can be visualized using a variety of methods. Many techniques, such as light and electron microscopy generate two-dimensional (2D) images, while micro computed tomography (μCT) allows a direct representation of the three-dimensional (3D) structure. In addition, different methods provide complementary structural information, such as the arrangement of organic or inorganic compounds. The overall aim of the present study is to answer bone research questions by linking information of different 2D and 3D imaging techniques. A great challenge in combining different methods arises from the fact that they usually reflect different characteristics of the real structure.
Materials and Methods: We investigated bone during healing by means of μCT and a couple of 2D methods. Backscattered electron images were used to qualitatively evaluate the tissue’s calcium content and served as a position map for other experimental data. Nanoindentation and X-ray scattering experiments were performed to visualize mechanical and structural properties.
Results: We present an approach for the registration of 2D data in a 3D μCT reference frame, where scanning electron microscopies serve as a methodic link. Backscattered electron images are perfectly suited for registration into μCT reference frames, since both show structures based on the same physical principles. We introduce specific registration tools that have been developed to perform the registration process in a semi-automatic way.
Conclusions: By applying this routine, we were able to exactly locate structural information (e.g. mineral particle properties) in the 3D bone volume. In bone healing studies this will help to better understand basic formation, remodeling and mineralization processes.
Computational Left-Ventricle Reconstruction from MRI Data for Patient-specific Cardiac Simulations
(2014)
The goal of visualization is to effectively and accurately communicate data. Visualization research has often overlooked the errors and uncertainty which accompany the scientific process and describe key characteristics used to fully understand the data. The lack of these representations can be attributed, in part, to the inherent difficulty in defining, characterizing, and controlling this uncertainty, and in part, to the difficulty in including additional visual metaphors in a well designed, potent display. However, the exclusion of this information cripples the use of visualization as a decision making tool because the display is no longer a true representation of the data. This systematic omission of uncertainty demands fundamental research within the visualization community to address, integrate, and expect uncertainty information. In this chapter, we outline sources and models of uncertainty, give an overview of the state-of-the-art, provide general guidelines, outline small exemplary applications, and finally, discuss open problems in uncertainty visualization.
Connectomics is a branch of neuroscience that attempts to create a connectome, i.e., a complete map of the neuronal system and all connections between neuronal structures. This representation can be used to understand how functional brain states emerge from their underlying anatomical structures and how dysfunction and neuronal diseases arise. We review the current state-of-the-art of visualization and image processing techniques in the field of connectomics and describe a number of challenges. After a brief summary of the biological background and an overview of relevant imaging modalities, we review current techniques to extract connectivity information from image data at macro-, meso- and microscales. We also discuss data integration and neural network modeling, as well as the visualization, analysis and comparison of brain networks.
We present the first general scheme to describe all four types of characteristic curves of flow fields – stream, path, streak, and time lines – as tangent curves of a derived vector field. Thus, all these lines can be obtained by a simple integration of an autonomous ODE system. Our approach draws on the principal ideas of the recently introduced tangent curve description of streak lines. We provide the first description of time lines as tangent curves of a derived vector field, which could previously only be constructed in a geometric manner. Furthermore, our scheme gives rise to new types of curves. In particular, we introduce advected stream lines as a parameter-free variant of the time line metaphor. With our novel mathematical description of characteristic curves, a large number of feature extraction and analysis tools becomes available for all types of characteristic curves, which were previously only available for stream and path lines. We will highlight some of these possible applications including the computation of time line curvature fields and the extraction of cores of swirling advected stream lines.
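A minimal sketch of the tangent-curve idea for the simplest case, path lines: a path line of the unsteady field v(x, t) is a tangent curve of the autonomous space-time field (v, 1), so a single generic ODE integration yields the curve. The velocity field below is a toy stand-in.

```python
# Sketch: a path line as a tangent curve of the derived autonomous field.
import numpy as np
from scipy.integrate import solve_ivp

def v(x, t):                      # toy unsteady 2D velocity field
    return np.array([-x[1] + 0.3 * np.sin(t), x[0]])

def derived_field(s, state):      # autonomous system: state = (x, y, t)
    x, t = state[:2], state[2]
    return np.append(v(x, t), 1.0)   # tangent = (v(x, t), 1)

sol = solve_ivp(derived_field, (0.0, 10.0), [1.0, 0.0, 0.0])
path_line = sol.y[:2]             # spatial part of the tangent curve
```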
We present a Visual Analytics approach that addresses the detection of interesting patterns in numerical time series, specifically from environmental sciences. Crucial for the detection of interesting temporal patterns are the time scale and the starting points one is looking at. Our approach makes no assumption about time scale and starting position of temporal patterns and consists of three main steps: an algorithm to compute statistical values for all possible time scales and starting positions of intervals, visual identification of potentially interesting patterns in a matrix visualization, and interactive exploration of detected patterns. We demonstrate the utility of this approach in two scientific scenarios and explain how it allowed scientists to gain new insight into the dynamics of environmental systems.
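A minimal sketch of the first step described above, assuming the mean as the interval statistic: compute it for every combination of starting position and time scale using cumulative sums, giving the matrix that is then visualized and explored.

```python
# Sketch: statistics for all interval starts and scales of a time series.
import numpy as np

def interval_means(series):
    n = len(series)
    csum = np.concatenate(([0.0], np.cumsum(series)))
    M = np.full((n, n), np.nan)       # M[start, scale-1] = mean of interval
    for start in range(n):
        lengths = np.arange(1, n - start + 1)
        M[start, :n - start] = (csum[start + lengths] - csum[start]) / lengths
    return M

print(interval_means(np.array([1.0, 3.0, 2.0, 4.0])))
```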
This paper presents an algorithm called surfseek for selecting surfaces on the most visible features in direct volume rendering (DVR). The algorithm is based on a previously published technique (WYSIWYP) for picking 3D locations in DVR. The new algorithm projects a surface patch on the DVR image, consisting of multiple rays. For each ray the algorithm uses WYSIWYP or a variant of it to find the candidates for the most visible locations along the ray. Using these candidates the algorithm constructs a graph and computes a minimum cut on this graph. The minimum cut represents a visible and typically rather smooth surface. In the last step the selected surface is displayed. We provide examples for results using artificially generated and real-world data sets.
Hunterian ligation affecting hemodynamics in vessels was proposed to avoid rebleeding in a case of a fenestrated basilar artery aneurysm after incomplete coil occlusion. We studied the hemodynamics in vitro to predict the hemodynamic changes near the aneurysm remnant caused by Hunterian ligation. A transparent model was fabricated based on three-dimensional rotational angiography imaging. Arteries were segmented and reconstructed. Pulsatile flow in the artery segments near the partially occluded (coiled) aneurysm was investigated by means of particle image velocimetry. The hemodynamic situation was investigated before and after Hunterian ligation of either the left or the right vertebral artery (LVA/RVA). Since post-ligation flow rate in the basilar artery was unknown, reduced and retained flow rates were simulated for both ligation options. Flow in the RVA and in the corresponding fenestra vessel is characterized by a vortex at the vertebrobasilar junction, whereas the LVA exhibits undisturbed laminar flow. Both options (RVA or LVA ligation) cause a significant flow reduction near the aneurysm remnant with a retained flow rate. The impact of RVA ligation is, however, significantly higher. This in vitro case study shows that flow reduction near the aneurysm remnant can be achieved by Hunterian ligation and that this effect depends largely on the selection of the ligated vessel. Thus the ability of the proposed in vitro pipeline to predict the hemodynamic impact of the proposed therapy was successfully demonstrated.
The most popular molecular surface in molecular visualization is the solvent excluded surface (SES). It provides information about the accessibility of a biomolecule for a solvent molecule that is geometrically approximated by a sphere. During a period of almost four decades, the SES has served for many purposes – including visualization, analysis of molecular interactions and the study of cavities in molecular structures. However, if one is interested in the surface that is accessible to a molecule whose shape differs significantly from a sphere, a different concept is necessary. To address this problem, we generalize the definition of the SES by replacing the probe sphere with the full geometry of the ligand defined by the arrangement of its van der Waals spheres. We call the new surface ligand excluded surface (LES) and present an efficient, grid-based algorithm for its computation. Furthermore, we show that this algorithm can also be used to compute molecular cavities that could host the ligand molecule. We provide a detailed description of its implementation on CPU and GPU. Furthermore, we present a performance and convergence analysis and compare the LES for several molecules, using as ligands either water or small organic molecules.
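As background, here is a minimal grid-based sketch of the classical SES, the special case that the LES generalizes: a grid point belongs to the solvent-excluded volume iff it cannot be reached by any probe sphere placed at distance >= r_probe from all vdW spheres. Two Euclidean distance transforms implement this dilate/erode pair; this is an illustration of the spherical-probe case, not the paper's LES algorithm.

```python
# Sketch: solvent-excluded volume on a voxel grid via distance transforms.
import numpy as np
from scipy.ndimage import distance_transform_edt

def ses_volume(vdw_mask, r_probe):
    """vdw_mask: boolean grid of the union of van der Waals spheres;
    r_probe: probe radius in voxel units."""
    # 1) Valid probe centers keep distance >= r_probe from the vdW union.
    dist_to_mol = distance_transform_edt(~vdw_mask)
    accessible = dist_to_mol >= r_probe
    # 2) Everything a probe can touch lies within r_probe of a valid center;
    #    the rest (molecule plus re-entrant pockets) is solvent-excluded.
    dist_to_centers = distance_transform_edt(~accessible)
    return dist_to_centers > r_probe   # its boundary approximates the SES
```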
Visualization
(2014)
Neuroanatomical analysis, such as classification of cell types, depends on reliable reconstruction of large numbers of complete 3D dendrite and axon morphologies. At present, the majority of neuron reconstructions are obtained from preparations in a single tissue slice in vitro, thus suffering from cut off dendrites and, more dramatically, cut off axons. In general, axons can innervate volumes of several cubic millimeters and may reach path lengths of tens of centimeters. Thus, their complete reconstruction requires in vivo labeling, histological sectioning and imaging of large fields of view. Unfortunately, anisotropic background conditions across such large tissue volumes, as well as faintly labeled thin neurites, result in incomplete or erroneous automated tracings and even lead experts to make annotation errors during manual reconstructions. Consequently, tracing reliability remains the major bottleneck for reconstructing complete 3D neuron morphologies. Here, we present a novel set of tools, integrated into a software environment named ‘Filament Editor’, for creating reliable neuron tracings from sparsely labeled in vivo datasets. The Filament Editor allows for simultaneous visualization of complex neuronal tracings and image data in a 3D viewer, proof-editing of neuronal tracings, alignment and interconnection across sections, and morphometric analysis in relation to 3D anatomical reference structures. We illustrate the functionality of the Filament Editor on the example of in vivo labeled axons and demonstrate that for the exemplary dataset the final tracing results after proof-editing are independent of the expertise of the human operator.
An intuitive and sparse representation of the void space of porous materials supports the efficient analysis and visualization of interesting qualitative and quantitative parameters of such materials. We introduce definitions of the elements of this void space, here called pore space, based on its distance function, and present methods to extract these elements using the extremal structures of the distance function. The presented methods are implemented by an image processing pipeline that determines pore centers, pore paths and pore constrictions. These pore space elements build a graph that represents the topology of the pore space in a compact way. The representations we derive from μCT image data of realistic soil specimens enable the computation of many statistical parameters and, thus, provide a basis for further visual analysis and application-specific developments. We introduced parts of our pipeline in previous work. In this chapter, we present additional details and compare our results with the analytic computation of the pore space elements for a sphere packing in order to show the correctness of our graph computation.
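A minimal sketch of the first pipeline stage named above, assuming pore centers are taken as local maxima of the Euclidean distance transform of the binary pore space (paths and constrictions, which follow from the full extremal structure, are omitted here).

```python
# Sketch: pore centers as distance-transform maxima of a binary void mask.
import numpy as np
from scipy.ndimage import distance_transform_edt, maximum_filter

def pore_centers(pore_mask, min_radius=1.0):
    """pore_mask: boolean 3D array, True inside the pore (void) space."""
    dist = distance_transform_edt(pore_mask)                 # distance to solid
    is_max = (dist == maximum_filter(dist, size=3)) & (dist > min_radius)
    return np.argwhere(is_max), dist[is_max]                 # centers and radii
```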
We propose a novel GPU-based approach to render virtual X-ray projections of deformable tetrahedral meshes. These meshes represent the shape and the internal density distribution of a particular anatomical structure and are derived from statistical shape and intensity models (SSIMs). We apply our method to improve the geometric reconstruction of 3D anatomy (e.g., pelvic bone) from 2D X-ray images. For that purpose, shape and density of a tetrahedral mesh are varied and virtual X-ray projections are generated within an optimization process until the similarity between the computed virtual X-ray and the respective anatomy depicted in a given clinical X-ray is maximized. The OpenGL implementation presented in this work deforms and projects tetrahedral meshes of high resolution (200,000+ tetrahedra) at interactive rates. It generates virtual X-rays that accurately depict the density distribution of an anatomy of interest. Compared to existing methods that accumulate X-ray attenuation in deformable meshes, our novel approach significantly boosts the deformation/projection performance. The proposed projection algorithm scales better with respect to mesh resolution and complexity of the density distribution, and the combined deformation and projection on the GPU scales better with respect to the number of deformation parameters. The gain in performance allows for a larger number of cycles in the optimization process. Consequently, it reduces the risk of being stuck in a local optimum. We believe that our approach contributes to orthopedic surgery, where 3D anatomy information needs to be extracted from 2D X-rays to support surgeons in better planning joint replacements.
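A minimal sketch of the projection model underlying such virtual X-rays: along each ray, attenuation accumulates per Beer-Lambert over the ray/tetrahedron intersection segments. The GPU accumulation and the segment extraction itself are omitted; this only illustrates the per-ray arithmetic.

```python
# Sketch: Beer-Lambert attenuation along one ray through a tetrahedral mesh.
import numpy as np

def ray_intensity(mu, lengths, I0=1.0):
    """mu: attenuation coefficient per intersected tetrahedron;
    lengths: corresponding intersection lengths along the ray."""
    return I0 * np.exp(-np.dot(mu, lengths))

print(ray_intensity(np.array([0.2, 0.5]), np.array([3.0, 1.5])))
```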
Volume Graphics 2007
(2007)
MathFilm Festival 2008
(2008)
EuroVis 2009
(2009)
VideoMath
(2010)