A framework is proposed for extracting features in 2D transient flows, based on the acceleration field to ensure Galilean invariance. The minima of the acceleration magnitude, i.e., a superset of the acceleration zeros, are extracted and discriminated into vortices and saddle points based on the spectral properties of the velocity Jacobian. The extraction of topological features is performed with purely combinatorial algorithms from discrete computational topology. The feature points are prioritized with persistence as a physically meaningful importance measure. These features are tracked in time with a robust tracking algorithm; thus a space-time hierarchy of the minima is built and vortex merging events are detected. The acceleration feature extraction strategy is applied to three two-dimensional shear flows:
(1) an incompressible periodic cylinder wake,
(2) an incompressible planar mixing layer and
(3) a weakly compressible planar jet.
The vortex-like acceleration feature points are shown to be well aligned with acceleration zeros, maxima of the vorticity magnitude, minima of the pressure field, and minima of λ2.
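To make the discrimination step concrete, the following is a minimal sketch (not the authors' code) of classifying acceleration-magnitude minima by the eigenvalues of the velocity Jacobian. It assumes velocity components sampled on a uniform grid, with array axis 0 along y and axis 1 along x:

import numpy as np

def classify_feature_points(u, v, dx, dy, minima):
    # u, v   : 2D arrays of velocity components on a uniform grid
    # minima : list of (i, j) grid indices of local minima of |acceleration|
    # Returns two lists of indices: vortex-like and saddle-like points.

    # Velocity Jacobian via central differences (axis 0 = y, axis 1 = x)
    du_dy, du_dx = np.gradient(u, dy, dx)
    dv_dy, dv_dx = np.gradient(v, dy, dx)

    vortices, saddles = [], []
    for (i, j) in minima:
        J = np.array([[du_dx[i, j], du_dy[i, j]],
                      [dv_dx[i, j], dv_dy[i, j]]])
        eig = np.linalg.eigvals(J)
        if np.iscomplex(eig).any():
            vortices.append((i, j))   # complex pair -> swirling motion
        else:
            saddles.append((i, j))    # real eigenvalues (opposite signs
                                      # for incompressible flow) -> saddle
    return vortices, saddles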
Traditionally, Lagrangian fields such as finite-time Lyapunov exponents (FTLE) are precomputed on a discrete grid and ray-cast afterwards. This, however, introduces both grid discretization errors and sampling errors during ray marching. In this work, we apply a progressive, view-dependent Monte Carlo-based approach for the visualization of such Lagrangian fields in time-dependent flows. Our approach completely avoids grid discretization and ray marching errors, is consistent, and has a low memory consumption. The system provides noisy previews that converge over time to an accurate, high-quality visualization. Compared to traditional approaches, the proposed system avoids explicitly predefined fieldline seeding structures and uses a Monte Carlo sampling strategy named Woodcock tracking to distribute samples along the view ray. Accelerating this sampling strategy requires local upper bounds for the FTLE values, which we progressively acquire during rendering. Our approach is tailored for high-quality visualizations of complex FTLE fields and is guaranteed to faithfully represent detailed ridge surface structures as indicators for Lagrangian coherent structures (LCS). We demonstrate the effectiveness of our approach on a set of analytic test cases and real-world numerical simulations.
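As a minimal illustration of the sampling strategy (a generic Woodcock/delta-tracking sketch, not the paper's implementation), assume the FTLE value at a point is mapped to an extinction coefficient via a user-chosen transfer function and that a local upper bound sigma_max is known along the ray:

import math, random

def woodcock_sample(ray_origin, ray_dir, t_max, sigma_max, extinction):
    # extinction(x) maps a 3D position to the true extinction coefficient
    # (here: derived on the fly from a computed FTLE value); sigma_max is
    # a local upper bound on extinction along the ray.
    # Returns the sampled distance t, or None if the ray exits the volume.
    t = 0.0
    while True:
        # Free-flight distance through a homogenized medium of density sigma_max
        t -= math.log(1.0 - random.random()) / sigma_max
        if t >= t_max:
            return None                  # no interaction inside the volume
        x = [o + t * d for o, d in zip(ray_origin, ray_dir)]
        # Accept a real collision with probability sigma(x) / sigma_max;
        # otherwise it was a null collision and tracking continues.
        if random.random() < extinction(x) / sigma_max:
            return t

Tighter local bounds sigma_max mean fewer null collisions, which is why progressively acquired upper bounds for the FTLE values accelerate the sampling.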
ORBKIT is a toolbox for postprocessing electronic structure calculations based on a highly modular and portable Python architecture. The program allows computing a multitude of electronic properties of molecular systems on arbitrary spatial grids from the basis set representation of their electronic wave function, as well as several grid-independent properties. The required data can be extracted directly from the standard output of a large number of quantum chemistry programs. ORBKIT can be used as a standalone program to determine standard quantities, for example, the electron density, molecular orbitals, and derivatives thereof. The cornerstone of ORBKIT is its modular structure. The existing basic functions can be arranged in an individual way and can be easily extended by user-written modules to determine any other derived quantity. ORBKIT offers multiple output formats that can be processed by common visualization tools (VMD, Molden, etc.). Additionally, ORBKIT offers routines to order molecular orbitals computed at different nuclear configurations according to their electronic character and to interpolate the wave function between these configurations. The program is open source under the GNU LGPLv3 license and freely available at https://github.com/orbkit/orbkit/.
This article provides an overview of ORBKIT with particular focus
on its capabilities and applicability, and includes several example
calculations.
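A hypothetical usage sketch of the workflow described above; module and function names follow ORBKIT's documented pattern but are assumptions here, and exact signatures may differ between versions:

from orbkit import read, grid, core, output

# Read the wave function from a quantum chemistry output file
# (filename and itype are illustrative)
qc = read.main_read('water.molden', itype='molden')

# Set up a regular spatial grid around the molecular geometry
grid.adjust_to_geo(qc, extend=5.0, step=0.1)
grid.grid_init()

# Compute the electron density on the grid (parallelized over 4 processes)
rho = core.rho_compute(qc, numproc=4)

# Write a cube file for visualization in, e.g., VMD
# (output signature is an assumption; check the ORBKIT documentation)
output.main_output(rho, qc, outputname='water_rho', otype='cb')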
To improve existing weather prediction and reanalysis capabilities, high-resolution, multi-modal climate data is becoming increasingly important. The advent of increasingly dense numerical simulations of atmospheric phenomena provides new means to better understand dynamic processes and to visualize structural flow patterns that would otherwise remain hidden. In the presented illustrations, we demonstrate an advanced technique to visualize multiple scales of dense flow fields and the Lagrangian patterns therein, simulated by state-of-the-art simulation models for each scale. The illustrations provide deeper insight into the structural differences and patterns that occur on each scale and highlight the complexity of flow phenomena in our atmosphere.
Many scientific applications deal with data from a multitude of different sources, e.g., measurements, imaging, and simulations. Each source provides an additional perspective on the phenomenon of interest, but also comes with specific limitations, e.g., regarding accuracy or spatial and temporal availability. Effectively combining and analyzing such multimodal and partially incomplete data of limited accuracy in an integrated way is challenging. In this work, we outline an approach for an integrated analysis and visualization of the atmospheric impact of volcano eruptions. The data sets comprise observation and imaging data from satellites as well as results from numerical particle simulations. To analyze the clouds resulting from the volcanic eruption in the spatiotemporal domain, we apply topological methods. Extremal structures reveal features in the data that support clustering and comparison. We further discuss the robustness of these methods with respect to different properties of the data and different parameter setups. Finally, we outline open challenges for effective integrated visualization using topological methods.
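A generic sketch of one such topological method (persistence-based ranking of maxima via union-find; an illustration, not the authors' pipeline): sweeping a scalar field, e.g. an ash-concentration field, from high to low values and recording when superlevel-set components merge yields a persistence value per extremal structure, which can then drive clustering and comparison:

import numpy as np

def persistence_of_maxima(values, neighbors):
    # values    : 1D array of scalar values, one per node (e.g. concentration)
    # neighbors : dict mapping node index -> iterable of adjacent node indices
    # Returns {maximum_node: persistence}; the global maximum never dies.
    order = np.argsort(values)[::-1]       # process nodes from high to low
    parent, peak, pers = {}, {}, {}

    def find(x):                           # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for v in order:
        parent[v] = v
        peak[v] = v                        # v starts as a candidate maximum
        for n in neighbors[v]:
            if n not in parent:            # neighbor not yet swept
                continue
            rn, rv = find(n), find(v)
            if rn == rv:
                continue
            # Two superlevel-set components merge at height values[v];
            # the component with the lower peak dies here.
            lo, hi = sorted((rn, rv), key=lambda r: values[peak[r]])
            pers[peak[lo]] = values[peak[lo]] - values[v]
            parent[lo] = hi
    pers[peak[find(order[0])]] = np.inf    # global maximum persists forever
    return {m: p for m, p in pers.items() if p > 0}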
Purpose: To account for the impact of turbulence in blood damage modeling, a novel approach based on the generation of instantaneous flow fields from RANS simulations is proposed.
Methods: Turbulent flow in a bileaflet mechanical heart valve was simulated with a RANS-based (SST k-ω) flow solver in FLUENT 14.5. The calculated Reynolds shear stress (RSS) field was transformed into a set of divergence-free random vector fields representing turbulent velocity fluctuations using procedural noise functions. To account for the random paths of the blood cells, instantaneous flow fields were computed for each time step by summing the RSS-based divergence-free random fields and the mean velocity field. From these instantaneous flow fields, instantaneous pathlines and the corresponding point-wise instantaneous shear stresses were calculated. For comparison, averaged pathlines based on the mean velocity field, together with the corresponding viscous shear stresses and RSS values, were calculated. Finally, the blood damage index (hemolysis) was integrated along the averaged and instantaneous pathlines using a power-law approach, and the results were compared.
Results: Using RSS in blood damage modeling without a correction factor overestimates the damaging stress and thus the blood damage (hemolysis). The blood damage histograms obtained with the two presented approaches differ.
Conclusions: A novel approach to calculating blood damage without using RSS as a damaging parameter is established. The results of our numerical experiment support the hypothesis that the use of RSS as a damaging parameter should be avoided.
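For illustration, a minimal sketch of the power-law damage accumulation along one discretized pathline; the default coefficients are the widely used Giersiepen constants, inserted here as an assumption, and the paper's constants may differ:

def hemolysis_index(shear_stresses, dt, C=3.62e-7, alpha=0.785, beta=2.416):
    # shear_stresses : instantaneous scalar shear stresses [Pa] sampled at
    #                  consecutive points along one pathline
    # dt             : time step between samples [s]
    # Accumulates the damage fraction as sum of C * tau^beta * dt^alpha
    # per step (Giersiepen-type power law; coefficients are an assumption).
    return sum(C * tau ** beta * dt ** alpha for tau in shear_stresses)

Evaluating this index over many averaged versus instantaneous pathlines yields the blood damage histograms compared in the Results.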
Statistical methods to design computer experiments usually rely on a Gaussian process (GP) surrogate model, and typically aim at selecting design points (combinations of algorithmic and model parameters) that minimize the average prediction variance, or maximize the estimation accuracy for the hyperparameters of the GP surrogate.
In many applications, experiments have a tunable precision, in the sense that one software parameter controls the tradeoff between accuracy and computing time (e.g., the mesh size in FEM simulations or the number of Monte Carlo samples).
We formulate the problem of allocating a budget of computing time over a finite set of candidate points for the goals mentioned above. This is a continuous optimization problem, which is moreover convex whenever the accuracy-vs-computing-time tradeoff function is concave.
On the other hand, using non-concave weight functions can help to identify sparse designs. In addition, sparse kernel approximations drastically reduce the per-iteration cost of the multiplicative weights updates that can be used to solve this problem.
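To make the solution strategy concrete, here is a minimal sketch of a multiplicative weights update over the simplex of budget fractions. The log-det (D-optimality) objective below is an illustrative, concave stand-in for the prediction-variance criteria discussed above, not the paper's exact formulation:

import numpy as np

def mw_design(X, n_iter=500, eta=0.5):
    # X : (n, d) matrix of candidate design points.
    # Maximizes the concave criterion log det(sum_i w_i x_i x_i^T) over the
    # probability simplex of budget fractions w by multiplicative updates.
    n, d = X.shape
    w = np.full(n, 1.0 / n)
    for _ in range(n_iter):
        M = X.T @ (w[:, None] * X)                    # information matrix M(w)
        Minv = np.linalg.inv(M + 1e-10 * np.eye(d))   # regularized inverse
        grad = np.einsum('ij,jk,ik->i', X, Minv, X)   # grad_i = x_i^T M^-1 x_i
        w *= np.exp(eta * (grad - grad.max()) / d)    # exponentiated gradient
        w /= w.sum()                                  # renormalize to simplex
    return w

With a concave objective such as this one, the iteration converges to an optimal allocation; for non-concave weight functions the same update can still be run and, as noted above, tends to concentrate the budget on a sparse set of candidate points.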