Many material properties are strongly influenced by dislocations, the carriers of plastic deformation. It is therefore paramount to have appropriate tools to quantify dislocation substructures with regard to their features, e.g., dislocation density, Burgers vectors or line direction. While the transmission electron microscope (TEM) has been the most widely used instrument for investigating dislocations, it is usually limited to the two-dimensional (2D) observation of three-dimensional (3D) structures. We reconstruct, visualize and quantify 3D dislocation substructure models from only two TEM images (stereo pairs) and assess the results. The reconstruction is based on manual, interactive tracing of filiform objects in both images of the stereo pair. The reconstruction and quantification method is demonstrated on dark-field (DF) scanning TEM (STEM) micrographs of dislocation substructures imaged under diffraction contrast conditions. For this purpose, thick regions (>300 nm) of TEM foils are analyzed, extracted from a Ni-base superalloy single crystal after high-temperature creep deformation. It is shown how the method allows 3D quantification from stereo pairs over a wide range of tilt conditions, achieving line length and orientation uncertainties of 3% and 7°, respectively. Parameters that affect the quality of such reconstructions are discussed.
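As a brief editorial illustration of the geometry behind stereo-pair reconstruction (not the paper's interactive tracing pipeline): under parallel projection, tilting the specimen by plus/minus alpha about the image y-axis makes a feature's depth recoverable from its parallax. The function below is a minimal sketch under these idealized assumptions.

import numpy as np

def point_from_stereo_pair(x1, x2, alpha):
    """Recover (x, z) of a feature from its image positions in a
    +/- alpha tilt pair (tilt axis = y, parallel projection)."""
    # Projection model: x1 = x*cos(a) + z*sin(a), x2 = x*cos(a) - z*sin(a)
    z = (x1 - x2) / (2.0 * np.sin(alpha))   # depth from parallax
    x = (x1 + x2) / (2.0 * np.cos(alpha))   # in-plane position
    return x, z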
When physical unfolding or unrolling of papyri is not possible, or too dangerous for preserving the precious object, tomographic approaches may be the appropriate alternative. The key requirements are sufficient resolution and contrast to distinguish writing from substrate. The steps to be performed are the following: (1) Select the object of interest (archaeological arguments, cultural background of the object, etc.). (2) Find the proper physical procedure, especially with respect to contrast, and take the tomographic data, e.g., by absorption X-ray tomography. (3) Apply mathematical unfolding transformations to the tomographic data in order to obtain a 2D planar reconstruction of the text.
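A minimal sketch of step (3) in the simplest conceivable geometry: unrolling an ideal cylindrical scroll by arc-length parametrization. Real papyri deviate strongly from this idealization, which is why the actual unfolding transformations are far more involved; the function and its input are hypothetical.

import numpy as np

def unroll_cylinder(points):
    """Flatten points lying on a cylindrical scroll around the z-axis.
    Assumes the points are ordered along the winding direction."""
    x, y, z = points.T
    r = np.hypot(x, y)                   # local radius of the roll
    theta = np.unwrap(np.arctan2(y, x))  # continuous winding angle
    u = r * theta                        # arc length along the unrolled sheet
    return np.column_stack([u, z])       # 2D planar coordinates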
We describe an extension of the line integral convolution method (LIC) for imaging vector fields on arbitrary surfaces in 3D space. Previous approaches were limited to curvilinear surfaces, i.e., surfaces that can be parametrized globally using 2D coordinates. By contrast, our method also handles the case of general, possibly multiply connected surfaces. The method works by tessellating a given surface with triangles. For each triangle, local Euclidean coordinates are defined and a local LIC texture is computed. No scaling or distortion is involved when mapping the texture onto the surface; the characteristic length of the texture remains constant. In order to exploit the texture hardware of modern graphics computers, we have developed a tiling strategy for arranging a large number of triangular texture pieces within a single rectangular texture image. In this way, texture memory is utilized optimally and even large textured surfaces can be explored interactively.
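For orientation, a minimal planar sketch of the LIC convolution itself, i.e., a noise texture averaged along streamlines; the paper computes such textures per triangle in local Euclidean coordinates, which this illustration does not attempt.

import numpy as np

def lic_2d(vx, vy, noise, length=15):
    """Average a noise texture along streamlines of the field (vx, vy)."""
    h, w = noise.shape
    out = np.empty_like(noise, dtype=float)
    for y in range(h):
        for x in range(w):
            acc, cnt = noise[y, x], 1
            for sign in (1.0, -1.0):       # integrate forward and backward
                px, py = float(x), float(y)
                for _ in range(length):
                    dx, dy = vx[int(py), int(px)], vy[int(py), int(px)]
                    n = np.hypot(dx, dy)
                    if n < 1e-12:
                        break
                    px += sign * dx / n    # Euler step of unit length
                    py += sign * dy / n
                    if not (0 <= px < w and 0 <= py < h):
                        break
                    acc += noise[int(py), int(px)]
                    cnt += 1
            out[y, x] = acc / cnt
    return out

# Circular test field: the output texture shows concentric streaks.
rng = np.random.default_rng(0)
noise = rng.random((64, 64))
yy, xx = np.mgrid[0:64, 0:64]
tex = lic_2d(-(yy - 32.0), (xx - 32.0), noise)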
Ancient Egyptian papyri are often folded, rolled up or kept as small packages, sometimes even sealed. Physically unrolling or unfolding these packages might severely damage them. We demonstrate a way to get access to the hidden script without physical unfolding by employing computed tomography and mathematical algorithms for virtual unrolling and unfolding. Our algorithmic approaches are combined with manual interaction. This provides the necessary flexibility to enable the unfolding of even complicated and partly damaged papyrus packages. In addition, it allows us to cope with challenges posed by the structure of ancient papyrus, which is rather irregular, compared to other writing substrates like metallic foils or parchment. Unfolding of packages is done in two stages. In the first stage, we virtually invert the physical folding process step by step until the partially unfolded package is topologically equivalent to a scroll or a papyrus sheet folded only along one fold line. To minimize distortions at this stage, we apply the method of moving least squares. In the second stage, the papyrus is simply flattened, which requires the definition of a medial surface. We have applied our software framework to several papyri. In this work, we present the results of applying our approaches to mockup papyri that were either rolled or folded along perpendicular fold lines. In the case of the folded papyrus, our approach represents the first attempt to address the unfolding of such complicated folds.
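The abstract above mentions the method of moving least squares for minimizing distortions. As a hedged illustration of what an MLS deformation looks like, here is the 2D affine variant in the formulation of Schaefer et al., which is not necessarily the exact formulation used in this work; the control points p (mapped to q) and the evaluation point v are hypothetical inputs.

import numpy as np

def mls_affine(v, p, q, alpha=1.0):
    """Affine moving-least-squares deformation of a point v, given
    control points p mapped to q (2D, Schaefer et al. formulation)."""
    w = 1.0 / (np.sum((p - v) ** 2, axis=1) ** alpha + 1e-12)
    p_star = (w @ p) / w.sum()            # weighted centroids
    q_star = (w @ q) / w.sum()
    ph, qh = p - p_star, q - q_star
    # Best affine map M minimizing sum_i w_i |ph_i M - qh_i|^2
    M = np.linalg.solve((ph.T * w) @ ph, (ph.T * w) @ qh)
    return (v - p_star) @ M + q_star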
Geometric morphometrics plays an important role in evolutionary studies. The state-of-the-art in this field are landmark-based methods. Since the landmarks usually need to be placed manually, only a limited number of landmarks are generally used to represent the shape of an anatomical structure. As a result, shape characteristics that cannot be properly represented by small sets of landmarks are disregarded.
In this study, we present a method that is free of this limitation. The method takes into account the whole shape of an anatomical structure, which is represented as a surface, hence the term ‘surface-based morphometrics’. Correspondence between two surfaces is established by defining a partitioning of the surfaces into homologous surface patches. The first step for the generation of a surface partitioning is to place landmarks on the surface. Subsequently, the landmarks are connected by curves lying on the surface. The curves, called ‘surface paths’, might either follow specific anatomical features or they can be geodesics, that is, shortest paths on the surface. One important requirement, however, is that the resulting surface path networks are topologically equivalent across all surfaces. Once the surface path networks have been defined, the surfaces are decomposed into patches according to the path networks.
This approach has several advantages. One of them is that we can discretize the surface by as many points as desired. Thus, even fine shape details can be resolved if this is of interest for the study. Since a point discretization is used, another advantage is that well-established analysis methods for landmark-based morphometrics can be utilized. Finally, the shapes can be easily morphed into one another, thereby greatly supporting the understanding of shape changes across all considered specimens.
To show the potential of the described method for evolutionary studies of biological specimens, we applied the method to the para-basisphenoid complex of the snake genus Eirenis. By using this anatomical structure as example, we present all the steps that are necessary for surface-based morphometrics, including the segmentation of the para-basisphenoid complex from micro-CT data sets. We also show some first results using statistical analysis as well as classification methods based on the presented technique.
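As an editorial sketch of the geodesic surface paths mentioned above: shortest paths along the edge graph of a triangle mesh only approximate true surface geodesics, but they illustrate how two landmarks can be connected by a path lying on the surface. The vertex and face arrays are hypothetical inputs, and a connected mesh is assumed.

import heapq
import numpy as np

def edge_graph_path(vertices, faces, src, dst):
    """Shortest vertex-to-vertex path along mesh edges (Dijkstra).
    vertices: (n, 3) float array; faces: (m, 3) int array."""
    n = len(vertices)
    adj = [set() for _ in range(n)]
    for a, b, c in faces:                  # collect undirected edges
        adj[a] |= {b, c}; adj[b] |= {a, c}; adj[c] |= {a, b}
    dist = np.full(n, np.inf); dist[src] = 0.0
    prev = np.full(n, -1)
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist[u]:
            continue
        for v in adj[u]:
            nd = d + np.linalg.norm(vertices[u] - vertices[v])
            if nd < dist[v]:
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path = [dst]                           # backtrack from the destination
    while path[-1] != src:
        path.append(int(prev[path[-1]]))
    return path[::-1]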
This report describes the results of an application project that was carried out at ZIB in parallel with the construction of the Berlin high-speed data network (Berlin Regional Testbed). It presents general tools and application-specific working environments for network-distributed visualization and simulation. The general tools support the following tasks: coupling simulations on (high-performance) computers to local graphics workstations, object-oriented and distributed visualization, remote video recording, image data compression, and digital film editing. The application-specific working environments were developed for tasks from the fields of numerical mathematics, astrophysics, structural research, chemistry, polymer physics, and fluid mechanics.
After a short summary on therapy planning and the underlying technologies, we discuss quantitative medicine by giving a short overview of medical image data, summarizing some applications of computer-based treatment planning, and outlining requirements on medical planning systems. We then continue with a description of our medical planning system HyperPlan. It supports the typical working steps in therapy planning, such as data acquisition, segmentation, grid generation, numerical simulation and optimization, accompanying these with powerful visualization and interaction techniques.
Tensor splats (2004)
Tensor Splats (2003)
An improved general-purpose technique for the visualization of symmetric positive definite tensor fields of rank two is described. It is based on a splatting technique built from tiny transparent glyph primitives that are capable of incorporating the full directional information content of a tensor. The result is an information-rich image that allows one to read off the preferred directions in a tensor field at each point of a three-dimensional volume or two-dimensional surface. It is useful for analyzing slices or volumes of a three-dimensional tensor field and can be overlaid with standard volume rendering or color mapping. The application of the rendering technique is demonstrated on general relativistic data and the diffusion tensor field of a human brain.
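For orientation, the spectral decomposition that underlies direction-encoding glyphs of this kind, together with Westin's standard anisotropy measures, which are commonly used to steer glyph shape and opacity; this is general background, not necessarily the paper's exact splat parametrization:

\[
  T \;=\; \sum_{i=1}^{3} \lambda_i \, e_i e_i^{\mathsf{T}},
  \qquad \lambda_1 \ge \lambda_2 \ge \lambda_3 > 0,
\]
\[
  c_l = \frac{\lambda_1 - \lambda_2}{\lambda_1 + \lambda_2 + \lambda_3},\quad
  c_p = \frac{2(\lambda_2 - \lambda_3)}{\lambda_1 + \lambda_2 + \lambda_3},\quad
  c_s = \frac{3\lambda_3}{\lambda_1 + \lambda_2 + \lambda_3},\quad
  c_l + c_p + c_s = 1.
\]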
We present visualizations of recent supercomputer simulations from numerical relativity, exploiting the progress in visualization techniques and numerical methods also from an artistic point of view. The sequences have been compiled onto a video tape showing colliding black holes, orbiting and merging neutron stars, as well as collapsing gravitational waves. In this paper we give some background information and provide a glance at the presented sequences.
Large-scale simulations running in metacomputing environments face the problem of efficient file I/O. For efficiency it is desirable to write data locally, distributed across the computing environment, and then to minimize data transfer, i.e., to reduce remote file access. Both aspects require I/O approaches that differ from existing paradigms. For the data output of distributed simulations, one wants to use fast local parallel I/O on all participating nodes, producing a single distributed logical file, while keeping changes to the simulation code as small as possible. For reading the data file, as in postprocessing and file-based visualization, one wants efficient partial access to remote and distributed files, using a global naming scheme and efficient data caching, again keeping the changes to the postprocessing code small. However, all available software solutions require the entire data to be staged locally (involving possible data recombination and conversion), or suffer from the performance problems of remote or distributed file systems. In this paper we show how to interface the HDF5 I/O library via its flexible Virtual File Driver layer to the Globus Data Grid. We show that combining these two toolkits in a suitable way provides us with a new I/O framework that allows efficient, secure, distributed and parallel file I/O in a metacomputing environment.
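As a small illustration of the partial-access idea on the HDF5 side, the sketch below reads only a hyperslab of a dataset via h5py; the paper's contribution, routing such accesses through a custom Virtual File Driver to the Globus Data Grid, is not reproduced here. The file and dataset names are hypothetical.

import h5py

with h5py.File("simulation_output.h5", "r") as f:
    dset = f["/timestep_0042/density"]   # hypothetical dataset path
    # Only the requested hyperslab is read from storage, exactly the
    # kind of partial access a remote-capable file driver can exploit.
    subvolume = dset[10:20, :, 64:128]
    print(subvolume.shape)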
The Monte Carlo simulation of the dynamics of complex molecules produces trajectories with a large number of different configurations to sample configuration space. It is expected that these configurations can be classified into a small number of conformations representing essential changes in the shape of the molecule. We present a method to visualize these conformations by point sets in the plane based on a geometrical distance measure between individual configurations. It turns out that different conformations appear as well-separated point sets. The method is further improved by performing a cluster analysis of the data set. The point-cluster representation is used to control a three-dimensional molecule viewer application to show individual configurations and conformational changes. The extraction of essential coordinates and visualization of molecular shape is discussed.
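A minimal sketch of the embedding idea, assuming a precomputed pairwise distance matrix: classical multidimensional scaling places the configurations as points in the plane so that Euclidean distances approximate the given distances. The paper's distance measure and clustering details are not reproduced; the demo data are synthetic.

import numpy as np

def classical_mds(D, dim=2):
    """Point set whose Euclidean distances approximate the matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]          # largest eigenvalues first
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Two synthetic 10D "conformations" appear as well-separated 2D clusters.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(c, 0.3, (50, 10)) for c in (0.0, 3.0)])
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
pts = classical_mds(D)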
Recent advances in connectomics research enable the acquisition of increasing amounts of data about the connectivity patterns of neurons. How can we use this wealth of data to efficiently derive and test hypotheses about the principles underlying these patterns? A common approach is to simulate neuronal networks using a hypothesized wiring rule in a generative model and to compare the resulting synthetic data with empirical data. However, most wiring rules have at least some free parameters, and identifying parameters that reproduce empirical data can be challenging as it often requires manual parameter tuning. Here, we propose to use simulation-based Bayesian inference (SBI) to address this challenge. Rather than optimizing a fixed wiring rule to fit the empirical data, SBI considers many parametrizations of a rule and performs Bayesian inference to identify the parameters that are compatible with the data. It uses simulated data from multiple candidate wiring rule parameters and relies on machine learning methods to estimate a probability distribution (the 'posterior distribution over parameters conditioned on the data') that characterizes all data-compatible parameters. We demonstrate how to apply SBI in computational connectomics by inferring the parameters of wiring rules in an in silico model of the rat barrel cortex, given in vivo connectivity measurements. SBI identifies a wide range of wiring rule parameters that reproduce the measurements. We show how access to the posterior distribution over all data-compatible parameters allows us to analyze their relationship, revealing biologically plausible parameter interactions and enabling experimentally testable predictions. We further show how SBI can be applied to wiring rules at different spatial scales to quantitatively rule out invalid wiring hypotheses. Our approach is applicable to a wide range of generative models used in connectomics, providing a quantitative and efficient way to constrain model parameters with empirical connectivity data.
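As a conceptual stand-in for the SBI workflow described above, the sketch below uses plain rejection ABC instead of neural posterior estimation: sample rule parameters from a prior, simulate, and keep the parameters whose synthetic data fall close to the measurement. The wiring rule, observed value, and tolerance are hypothetical toy choices.

import numpy as np

rng = np.random.default_rng(0)

def simulate_connectivity(theta):
    """Toy wiring rule: connection probability with sampling noise."""
    p = 1.0 / (1.0 + np.exp(-theta))         # hypothetical rule parameter
    return rng.binomial(1000, p) / 1000.0    # synthetic connection fraction

observed = 0.31                              # stand-in for in vivo data
theta_prior = rng.normal(0.0, 2.0, size=20_000)
synthetic = np.array([simulate_connectivity(t) for t in theta_prior])
accepted = theta_prior[np.abs(synthetic - observed) < 0.005]
# `accepted` approximates the posterior over data-compatible parameters.
print(accepted.mean(), accepted.std())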
Molecular processes such as protein folding or ligand-receptor binding can be understood by analyzing the free energy landscape. These processes are often metastable, i.e., the molecular systems remain in basins around local minima of the free energy landscape and only in rare cases undergo gauche transitions between metastable states by passing saddle points of this landscape. By discretizing the configuration space, this can be modeled as a discrete Markov process. One way to compute the transition rates between conformations of a molecular system is by utilizing Transition Path Theory and the concept of committor functions. A fundamental problem from the computational point of view is that many time scales are involved, ranging from 10^(-14) sec for the fastest motions to 10^(-6) sec or more for conformational changes that cause biological effects.
The goal of our work is to provide a better understanding of such transitions in configuration space on various time-scales by analyzing characteristic scalar functions topologically and geometrically. We are developing suitable visualization and interaction techniques to support our analysis. For example, we are analyzing a transition rate indicator function by computing and visualizing its Reeb graph together with the sets of molecular states corresponding to maxima of the transition rate indicator function. A particular challenge is the high dimensionality of the domain which does not allow for a straightforward visualization of the function.
The computational topology approach to the analysis of transition rate indicator functions for a molecular system allows one to explore different time scales of the system by utilizing coarser or finer topological partitionings of the function. A specific goal is the development of tools for analyzing the hierarchy of these partitionings. This approach tackles the analysis of a complex and sparse dataset from a different angle than the well-known spectral analysis of Markov State Models.
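A minimal sketch of the committor computation that underlies the transition rates discussed above: for a discrete Markov chain with transition matrix P, the forward committor q satisfies q = Pq on states outside the metastable sets A and B, with q = 0 on A and q = 1 on B. The 4-state chain here is a hypothetical toy example.

import numpy as np

P = np.array([[0.90, 0.10, 0.00, 0.00],
              [0.10, 0.80, 0.10, 0.00],
              [0.00, 0.10, 0.80, 0.10],
              [0.00, 0.00, 0.10, 0.90]])
A, B = [0], [3]                          # metastable source / target sets
interior = [i for i in range(len(P)) if i not in A + B]

# Solve (I - P) q = P[:, B] restricted to the interior states.
I = np.eye(len(P))
lhs = (I - P)[np.ix_(interior, interior)]
rhs = P[np.ix_(interior, B)].sum(axis=1)
q = np.zeros(len(P))
q[B] = 1.0
q[interior] = np.linalg.solve(lhs, rhs)
print(q)                                 # committor increases from A to B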
The goal of visualization is to effectively and accurately communicate data. Visualization research has often overlooked the errors and uncertainty that accompany the scientific process and describe key characteristics needed to fully understand the data. The lack of these representations can be attributed, in part, to the inherent difficulty in defining, characterizing, and controlling this uncertainty, and, in part, to the difficulty in including additional visual metaphors in a well-designed, potent display. However, the exclusion of this information cripples the use of visualization as a decision-making tool, because the display is no longer a true representation of the data. This systematic omission of uncertainty demands fundamental research within the visualization community to address, integrate, and expect uncertainty information. In this chapter, we outline sources and models of uncertainty, give an overview of the state of the art, provide general guidelines, outline small exemplary applications, and finally discuss open problems in uncertainty visualization.
We present a novel approach to turn smartphones/tablets into tangible near-surface devices with augmented reality (AR) capability on virtually any passive surface, like desks, tables and wallboards. A low-cost optoelectronic add-on for the device's back camera enables position tracking on an almost invisible printable fiducial marker grid. This approach is promising in terms of adoption potential because it can be used anywhere with existing devices, requires no dedicated hardware installations and is applicable to a broad range of real-world applications.