Visualization, reconstruction, and integration of neuronal structures in digital brain atlases
(2006)
Structural properties of molecules are of primary concern in many fields. This report provides a comprehensive overview of techniques that have been developed in the fields of molecular graphics and visualization, with a focus on applications in structural biology. The field relies heavily on computerized geometric and visual representations of three-dimensional, complex, large, and time-varying molecular structures. The report presents a taxonomy that demonstrates which areas of molecular visualization have already been extensively investigated and where the field is currently heading. It discusses visualizations for molecular structures, strategies for efficient display regarding image quality and frame rate, covers different aspects of level of detail, and reviews visualizations illustrating the dynamic aspects of molecular simulation data. The survey concludes with an outlook on promising and important research topics to foster further success in the development of tools that help to reveal molecular secrets.
Connectomics is a branch of neuroscience that attempts to create a connectome, i.e., a complete map of the neuronal system and all connections between neuronal structures. This representation can be used to understand how functional brain states emerge from their underlying anatomical structures and how dysfunction and neuronal diseases arise. We review the current state-of-the-art of visualization and image processing techniques in the field of connectomics and describe a number of challenges. After a brief summary of the biological background and an overview of relevant imaging modalities, we review current techniques to extract connectivity information from image data at macro-, meso- and microscales. We also discuss data integration and neural network modeling, as well as the visualization, analysis and comparison of brain networks.
Visualization
(2014)
In this report we review and structure the branch of molecular visualization that is concerned with the visual analysis of cavities in macromolecular protein structures. First, the necessary background, the domain terminology, and the goals of analytical reasoning are introduced. Based on a comprehensive collection of relevant research works, we present a novel classification of cavity detection approaches and structure them into four distinct classes: grid-based, Voronoi-based, surface-based, and probe-based methods; subclasses are then formed by their combinations. We match these approaches with corresponding visualization technologies, starting with direct 3D visualization and continuing with non-spatial visualization techniques that, for example, abstract the interactions between structures into a relational graph, straighten the cavity of interest to show its profile in a single view, or aggregate the time sequence into a single contour plot. We also discuss the current state of methods for the visual analysis of cavities in dynamic data such as molecular dynamics simulations. Finally, we give an overview of the most common tools that are actively developed and used in structural biology and biochemistry research. Our report concludes with an outlook on future challenges in the field.
This paper presents an algorithm called surfseek for selecting surfaces on the most visible features in direct volume rendering (DVR). The algorithm is based on a previously published technique (WYSIWYP) for picking 3D locations in DVR. The new algorithm projects a surface patch, consisting of multiple rays, onto the DVR image. For each ray the algorithm uses WYSIWYP, or a variant of it, to find the candidates for the most visible locations along the ray. Using these candidates the algorithm constructs a graph and computes a minimum cut on this graph. The minimum cut represents a visible and typically rather smooth surface. In the last step the selected surface is displayed. We provide examples of results using artificially generated and real-world data sets.
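The core of the selection step is a combinatorial optimization over the per-ray visibility candidates. As a rough illustration (not the paper's implementation, which uses a minimum cut on a graph built from the candidates), the following Python sketch solves the same kind of problem on a simplified 1D chain of rays by dynamic programming: pick one depth candidate per ray so that total visibility is high while neighboring picks stay close. All function and parameter names are illustrative.

```python
import numpy as np

def select_surface_1d(visibility, smoothness_penalty=1.0):
    """Pick one depth candidate per ray so that total visibility is high
    and neighboring picks stay close (smooth surface).

    visibility: (num_rays, num_candidates) array of per-candidate
                visibility scores along each ray.
    Returns an array of chosen candidate indices, one per ray.

    Note: this is a dynamic-programming simplification on a 1D chain of
    rays; surfseek formulates the full problem as a graph minimum cut.
    """
    n_rays, n_cand = visibility.shape
    cost = -visibility  # maximize visibility == minimize negative visibility

    # Pairwise penalty for depth jumps between neighboring rays.
    idx = np.arange(n_cand)
    jump = smoothness_penalty * np.abs(idx[:, None] - idx[None, :])

    # Forward pass (Viterbi): best cumulative cost of ending at each candidate.
    acc = cost[0].copy()
    back = np.zeros((n_rays, n_cand), dtype=int)
    for r in range(1, n_rays):
        total = acc[:, None] + jump          # (previous candidate, next candidate)
        back[r] = np.argmin(total, axis=0)
        acc = total[back[r], idx] + cost[r]

    # Backtrack the optimal depth index per ray.
    picks = np.empty(n_rays, dtype=int)
    picks[-1] = int(np.argmin(acc))
    for r in range(n_rays - 1, 0, -1):
        picks[r - 1] = back[r, picks[r]]
    return picks
```

For instance, select_surface_1d(np.random.rand(64, 32)) returns one depth index for each of 64 rays of a random visibility field.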
The historical importance of ancient manuscripts is unique, since they provide information about the heritage of ancient cultures. Often, texts are hidden in rolled or folded documents. Due to recent improvements in sensitivity and resolution, spectacular disclosures of rolled hidden texts have become possible by X-ray tomography. Revealing text on folded manuscripts, however, is even more challenging. Manual unfolding is often too risky in view of the fragile condition of the fragments, as it can lead to the total loss of the document. X-ray tomography allows for virtual unfolding and enables non-destructive access to hidden texts. We have recently demonstrated the procedure and tested unfolding algorithms on a mockup sample. Here, we present results on unfolding ancient papyrus packages from the papyrus collection of the Musée du Louvre, among them objects folded along approximately orthogonal folding lines. In one of the packages, a first word could be identified: the Coptic word for “Lord”.
Statistical methods to design computer experiments usually rely on a Gaussian process (GP) surrogate model, and typically aim at selecting design points (combinations of algorithmic and model parameters) that minimize the average prediction variance, or maximize the prediction accuracy for the hyperparameters of the GP surrogate.
In many applications, experiments have a tunable precision, in the sense that one software parameter controls the tradeoff between accuracy and computing time (e.g., mesh size in FEM simulations or number of Monte-Carlo samples).
We formulate the problem of allocating a budget of computing time over a finite set of candidate points for the goals mentioned above. This is a continuous optimization problem, which is moreover convex whenever the accuracy-vs.-computing-time tradeoff function is concave.
On the other hand, using non-concave weight functions can help to identify sparse designs. In addition, sparse kernel approximations drastically reduce the cost per iteration of the multiplicative-weights updates that can be used to solve this problem.
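To make the multiplicative-weights idea concrete, the sketch below runs the classical multiplicative update for D-optimal design of a linear model over a finite candidate set; it is only a stand-in for the GP-based variance criteria discussed above, and all names, defaults, and the convergence tolerance are illustrative.

```python
import numpy as np

def d_optimal_weights(X, n_iter=500, tol=1e-9):
    """Multiplicative-weights iteration for approximate D-optimal design.

    X: (n_candidates, p) matrix of candidate design points for a linear
       model; assumed to have full column rank.
    Returns a probability vector w over the candidates.

    This is the classical multiplicative algorithm for linear models, shown
    only to illustrate the mechanics of multiplicative-weights updates; it
    is not the GP-based criterion discussed in the abstract.
    """
    n, p = X.shape
    w = np.full(n, 1.0 / n)
    for _ in range(n_iter):
        M = X.T @ (w[:, None] * X)                              # information matrix M(w)
        g = np.einsum('ij,jk,ik->i', X, np.linalg.inv(M), X)    # x_i^T M(w)^{-1} x_i
        w_new = w * g / p                                        # multiplicative update
        w_new /= w_new.sum()
        if np.max(np.abs(w_new - w)) < tol:
            return w_new
        w = w_new
    return w
```

Weights that shrink toward zero mark candidates that can be dropped, which is how sparse designs show up in this toy setting.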
Purpose: To account for the impact of turbulence in blood damage modeling, a novel approach based on the generation of instantaneous flow fields from RANS simulations is proposed.
Methods: Turbulent flow in a bileaflet mechanical heart valve was simulated with a RANS-based (SST k-ω) flow solver in FLUENT 14.5. The calculated Reynolds shear stress (RSS) field is transformed into a set of divergence-free random vector fields representing turbulent velocity fluctuations using procedural noise functions. To account for the random paths of the blood cells, instantaneous flow fields were computed for each time step as the sum of the RSS-based divergence-free random fields and the mean velocity field. Using these instantaneous flow fields, instantaneous pathlines and the corresponding point-wise instantaneous shear stresses were calculated. For comparison, averaged pathlines based on the mean velocity field and the respective viscous shear stresses, together with RSS values, were calculated. Finally, the blood damage index (hemolysis) was integrated along the averaged and instantaneous pathlines using a power-law approach and then compared.
Results: Using RSS in blood damage modeling without a correction factor overestimates damaging stress and thus the blood damage (hemolysis). Blood damage histograms based on both presented approaches differ.
Conclusions: A novel approach to calculate blood damage without using RSS as a damaging parameter is established. The results of our numerical experiment support the hypothesis that the use of RSS as a damaging parameter should be avoided.
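For illustration, the following sketch shows one common way to accumulate a power-law damage index from per-time-step shear stresses sampled along a single pathline. The coefficients are left as explicit parameters because the abstract does not state which constants were used, and the commented example values are purely illustrative.

```python
import numpy as np

def hemolysis_index(shear_stress, dt, C, alpha, beta):
    """Accumulate a power-law blood damage (hemolysis) index along a pathline.

    shear_stress: sequence of instantaneous scalar shear stresses (Pa)
                  sampled along one pathline.
    dt:           time step between samples (s), scalar or sequence.
    C, alpha, beta: power-law coefficients in D = sum C * tau^alpha * dt^beta;
                  left as parameters because the exact constants used in the
                  study are not given in the abstract.
    """
    tau = np.asarray(shear_stress, dtype=float)
    dt = np.broadcast_to(np.asarray(dt, dtype=float), tau.shape)
    return float(np.sum(C * tau**alpha * dt**beta))

# Illustrative comparison of an averaged vs. an instantaneous pathline
# (synthetic stresses and coefficients, for illustration only):
# d_avg  = hemolysis_index(tau_mean, 1e-4, C=3.6e-5, alpha=2.4, beta=0.8)
# d_inst = hemolysis_index(tau_inst, 1e-4, C=3.6e-5, alpha=2.4, beta=0.8)
```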
Molecular processes such as protein folding or ligand-receptor binding can be understood by analyzing the free energy landscape. These processes are often metastable, i.e., the molecular systems remain in basins around local minima of the free energy landscape and, in rare cases, undergo gauche transitions between metastable states by passing saddle points of this landscape. By discretizing the configuration space, this can be modeled as a discrete Markov process. One way to compute the transition rates between conformations of a molecular system is to utilize Transition Path Theory and the concept of committor functions. A fundamental problem from the computational point of view is that many time scales are involved, ranging from 10^(-14) sec for the fastest motion to 10^(-6) sec or more for conformation changes that cause biological effects.
The goal of our work is to provide a better understanding of such transitions in configuration space on various time-scales by analyzing characteristic scalar functions topologically and geometrically. We are developing suitable visualization and interaction techniques to support our analysis. For example, we are analyzing a transition rate indicator function by computing and visualizing its Reeb graph together with the sets of molecular states corresponding to maxima of the transition rate indicator function. A particular challenge is the high dimensionality of the domain which does not allow for a straightforward visualization of the function.
The computational-topology approach to analyzing the transition rate indicator functions of a molecular system allows different time scales of the system to be explored by using coarser or finer topological partitionings of the function. A specific goal is the development of tools for analyzing the hierarchy of these partitionings. This approach tackles the analysis of a complex and sparse dataset from a different angle than the well-known spectral analysis of Markov State Models.
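To make the committor concept concrete, the following minimal sketch computes the forward committor of a discrete Markov chain by solving the standard linear system from Transition Path Theory; it illustrates the textbook definition, not the authors' implementation.

```python
import numpy as np

def forward_committor(P, A, B):
    """Forward committor q for a discrete Markov chain with transition matrix P.

    q[x] is the probability of reaching set B before set A when starting in x.
    It satisfies q = 0 on A, q = 1 on B, and q(x) = sum_y P[x, y] q(y) elsewhere.

    P: (n, n) row-stochastic transition matrix.
    A, B: disjoint iterables of state indices.
    """
    n = P.shape[0]
    A, B = set(A), set(B)
    interior = [x for x in range(n) if x not in A and x not in B]

    q = np.zeros(n)
    q[list(B)] = 1.0

    # Solve (I - P_II) q_I = P_IB * 1 for the interior states.
    P_II = P[np.ix_(interior, interior)]
    rhs = P[np.ix_(interior, list(B))].sum(axis=1)
    q[interior] = np.linalg.solve(np.eye(len(interior)) - P_II, rhs)
    return q

# Toy example: a 4-state chain with metastable ends, A = {0}, B = {3}.
P = np.array([[0.9, 0.1, 0.0, 0.0],
              [0.1, 0.8, 0.1, 0.0],
              [0.0, 0.1, 0.8, 0.1],
              [0.0, 0.0, 0.1, 0.9]])
print(forward_committor(P, A=[0], B=[3]))
```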
This work introduces methods for analyzing the three imaging modalities delivered by Talbot-Lau grating interferometry X-ray computed tomography (TLGI-XCT). The first problem we address is providing a quick way to show a fusion of all three modalities. For this purpose, the tri-modal transfer function widget is introduced. The widget controls a mixing function that combines the outputs of the transfer functions of all three modalities, allowing the user to create one customized fused image. A second problem prevalent in processing TLGI-XCT data is the lack of tools for analyzing the segmentation process of such multimodal data. We address this by providing methods for computing three types of uncertainty: from probabilistic segmentation algorithms, from voxel neighborhoods, and from a collection of results. We furthermore introduce a linked-views interface to explore these data. The techniques are evaluated on a TLGI-XCT scan of a carbon-fiber-reinforced specimen with impact damage. We show that the transfer function widget accelerates and facilitates the exploration of this dataset, while the uncertainty analysis methods give insights into how to tweak and improve segmentation algorithms for more suitable results.
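The exact mixing function behind the tri-modal widget is not spelled out in the abstract; the sketch below shows one plausible opacity-weighted blend of the three per-modality transfer-function outputs, with user-controlled weights standing in for the widget's handles. All names and the blending formula are assumptions for illustration.

```python
import numpy as np

def fuse_trimodal(rgba_abs, rgba_phase, rgba_dark, weights=(1.0, 1.0, 1.0)):
    """Blend per-modality transfer-function outputs into one fused RGBA value.

    rgba_abs, rgba_phase, rgba_dark: (..., 4) arrays from the attenuation,
        differential-phase, and dark-field transfer functions.
    weights: user-controlled mixing weights (what a widget could expose).

    This is one plausible opacity-weighted blend for illustration; the exact
    mixing function of the tri-modal widget is not specified in the abstract.
    """
    stack = np.stack([rgba_abs, rgba_phase, rgba_dark])          # (3, ..., 4)
    w = np.asarray(weights, dtype=float).reshape(3, *([1] * (stack.ndim - 1)))
    alpha = stack[..., 3:4] * w                                   # weighted opacities
    total = alpha.sum(axis=0)
    color = (stack[..., :3] * alpha).sum(axis=0) / np.maximum(total, 1e-8)
    opacity = np.clip(total, 0.0, 1.0)
    return np.concatenate([color, opacity], axis=-1)
```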
Many material properties are strongly influenced by dislocations, the carriers of plastic deformation. It is therefore paramount to have appropriate tools to quantify dislocation substructures with regard to their features, e.g., dislocation density, Burgers vectors, or line directions. While the transmission electron microscope (TEM) has been the most widely used instrument for investigating dislocations, it is usually limited to the two-dimensional (2D) observation of three-dimensional (3D) structures. We reconstruct, visualize, and quantify 3D dislocation substructure models from only two TEM images (stereo-pairs) and assess the results. The reconstruction is based on the manual interactive tracing of filiform objects on both images of the stereo-pair. The reconstruction and quantification method is demonstrated on dark-field (DF) scanning (S)TEM micrographs of dislocation substructures imaged under diffraction contrast conditions. For this purpose, thick regions (> 300 nm) of TEM foils are analyzed, which are extracted from a Ni-base superalloy single crystal after high-temperature creep deformation. It is shown how the method allows 3D quantification from stereo-pairs over a wide range of tilt conditions, achieving line length and orientation uncertainties of 3 % and 7°, respectively. Parameters that affect the quality of such reconstructions are discussed.
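As a simplified illustration of how traced stereo-pairs yield 3D line models, the sketch below recovers relative heights from the parallax of corresponding points using the common tilt-geometry relation z = p / (2 sin(θ/2)). The coordinate conventions, names, and neglect of image distortions are assumptions; this is not the paper's exact reconstruction procedure.

```python
import numpy as np

def height_from_parallax(x1, y1, x2, y2, total_tilt_deg):
    """Recover relative heights of traced points from a TEM stereo-pair.

    (x1, y1), (x2, y2): image coordinates of the same traced points in the two
        images, with x measured perpendicular to the tilt axis and y along it
        (a convention assumed here for simplicity).
    total_tilt_deg: total tilt angle between the two images, in degrees.

    Uses the common parallax relation z = p / (2 sin(theta/2)) with parallax
    p = x1 - x2; a simplified sketch, not the paper's exact reconstruction.
    """
    theta = np.deg2rad(total_tilt_deg)
    p = np.asarray(x1, dtype=float) - np.asarray(x2, dtype=float)   # parallax
    z = p / (2.0 * np.sin(theta / 2.0))
    x = 0.5 * (np.asarray(x1, dtype=float) + np.asarray(x2, dtype=float))
    y = 0.5 * (np.asarray(y1, dtype=float) + np.asarray(y2, dtype=float))
    return np.stack([x, y, z], axis=-1)   # vertices of the 3D polyline
```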
Monocrystalline Ni-base superalloys are the material of choice for first-row blades in jet-engine gas turbines. Using a novel visualization tool for the 3D reconstruction and visualization of dislocation line segments from stereo-pairs of scanning transmission electron micrographs, the superdislocation substructures in the Ni-base superalloy LEK 94 (crept to ε = 26%) are characterized. Probable scenarios for how these dislocation substructures form are discussed.