518 Numerical Analysis
This thesis is concerned with the investigation of a toxicological research question with the help of a mathematical model that is parameterized from experimental data. This is an interdisciplinary and fundamental, but also intricate, task.
The beginning of this thesis deals with the modeling of a metabolic network to investigate the cellular response dynamics to xenobiotic-induced oxidative stress, especially in view of the contributions of the pentose phosphate pathway (PPP) and the gluconate shunt. As a specific application, we consider exposure to the toxin 3-nitrobenzanthrone (3-NBA), which is contained, for instance, in diesel engine exhaust. Its bioactivation in cells, which can be monitored via fluorescence, goes hand in hand with the generation of reactive oxygen species (ROS). We identify the metabolic network that is mainly responsible for the antioxidative cellular response to oxidative stress, together with the corresponding regulatory mechanisms adapting the metabolism to counteract detrimental oxidative stress. Such regulatory mechanisms have an important impact on the dose-effect relationship. We derive a mathematical model in the form of ordinary differential equations based on enzyme kinetics for multisubstrate reactions and mass action. The model is parameterized employing literature data and data from diverse in vitro experiments such as relative fluorescence intensity measurements. Simulations of the mathematical model give insight into the cellular response dynamics to oxidative stress and reveal the importance of the gluconate shunt in the low-dose region of exposure. A local sensitivity analysis in Matlab underlines that the solution is particularly sensitive to parameters from the bioactivation of 3-NBA, the regulatory mechanisms, and the rate-determining reactions.
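To make the model type concrete, here is a minimal sketch of such a system in Python: a two-substrate Michaelis-Menten rate for the bioactivation step and a mass-action term for ROS removal. The species, rate laws, and parameter values are illustrative assumptions, not the network of the thesis.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy sketch: A (toxin) and B (cofactor) are converted by a two-substrate
# Michaelis-Menten reaction that releases ROS R; R is removed by a
# mass-action reaction with an antioxidant pool G. All values are made up.
Vmax, Ka, Kb, k_det = 1.0, 0.5, 0.2, 2.0

def rhs(t, y):
    A, B, R, G = y
    v = Vmax * A * B / ((Ka + A) * (Kb + B))  # two-substrate MM kinetics
    w = k_det * R * G                          # mass-action detoxification
    return [-v, -v, v - w, -w]

sol = solve_ivp(rhs, (0.0, 20.0), [1.0, 1.0, 0.0, 0.5], max_step=0.1)
print("final ROS level:", sol.y[2, -1])
```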
Therefore, we improve the parameter estimation for the bioactivation of 3-NBA from fluorescence intensity measurements. A statistical analysis of the raw experimental data gives rise to the conjecture that samples might influence each other during the measurement process; this phenomenon is known as crosstalk. We derive two mathematical models for fluorescence intensity measurements (one with crosstalk and one without crosstalk) from the experimental setup to describe the connection between fluorophore concentration and raw experimental data. Parameter estimation from raw fluorescence data first requires identifying the more plausible of the two models.
For this task, we focus on the Bayesian approach to model comparison and parameter inference and employ the nested sampling algorithm, which was designed to approximate the evidence of a model while simultaneously generating samples from the posterior distribution. The centerpiece of nested sampling is an integral transformation and the subsequent approximation of the transformed integral by a Monte Carlo approximation based on the uniformity assumption. The proof of the integral transformation had previously been sketched only for bounded and nonnegative likelihood functions exhibiting no plateau with positive prior measure. We prove that the integral transformation is valid in more general settings, in particular allowing plateaus in the likelihood function. Furthermore, we show that the uniformity assumption is violated in such settings (even though it is fulfilled in settings without plateaus). A preprocessing splitting approach is derived to overcome this difficulty and, thus, to adapt vanilla nested sampling to more general settings. It is shown that this preprocessing approach is more efficient than the established randomization strategy, which simply avoids likelihood plateaus by perturbing the likelihood function.
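For orientation, a minimal sketch of vanilla nested sampling is given below; the uniform prior on the unit square, the toy Gaussian likelihood, and the rejection sampling used for the constrained prior draws are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def loglike(theta):                       # toy Gaussian likelihood (assumption)
    return -0.5 * np.sum((theta - 0.5) ** 2) / 0.01

d, n_live, n_iter = 2, 100, 600
live = rng.random((n_live, d))            # live points drawn from the prior
live_logL = np.array([loglike(t) for t in live])
logZ, X_prev = -np.inf, 1.0

for i in range(1, n_iter + 1):
    worst = np.argmin(live_logL)
    L_star = live_logL[worst]
    X = np.exp(-i / n_live)               # deterministic shrinkage of prior volume
    logZ = np.logaddexp(logZ, L_star + np.log(X_prev - X))  # evidence increment
    X_prev = X
    while True:                           # rejection-sample the constrained prior
        theta = rng.random(d)
        if loglike(theta) > L_star:
            break
    live[worst], live_logL[worst] = theta, loglike(theta)

logZ = np.logaddexp(logZ, np.log(X_prev) + live_logL.max())  # crude remainder term
print("log-evidence estimate:", logZ)
```

The discarded points, weighted by their evidence increments, double as posterior samples, which is the simultaneous evidence/posterior output mentioned above.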
The thesis concludes by formulating a Bayesian framework for parameter estimation from fluorescence data. Within this framework, nested sampling is applied to approximate the evidences of both fluorescence models (with and without crosstalk) and to simultaneously estimate the unknown parameters. The performance of the Bayesian framework is demonstrated using artificial data corresponding to reduced (bio)chemical models.
In this thesis, we consider inverse problems that either include a randomized data acquisition process or are tackled by randomized reconstruction methods.
In the first part of this thesis, we review the mathematical concepts required for randomized measurement or reconstruction processes.
This includes an overview of mathematical preliminaries, e.g., function spaces, variational calculus, and the formulation of inverse problems.
Furthermore, we consider a series of optimization methods, which we will use for the inverse problems presented later in this work.
In particular, we discuss the concepts of randomized Kaczmarz approaches and introduce a weighting of the randomization process that incorporates structural properties of the problems under consideration.
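As a reference point, the following is a sketch of a weighted randomized Kaczmarz iteration; the classical row-norm weighting used here stands in, as an assumption, for the problem-adapted structural weights discussed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 200, 50
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = A @ x_true                           # consistent linear system

row_norms2 = np.sum(A**2, axis=1)
p = row_norms2 / row_norms2.sum()        # sampling weights over the rows

x = np.zeros(n)
for _ in range(5000):
    i = rng.choice(m, p=p)               # draw a row according to the weights
    x += (b[i] - A[i] @ x) / row_norms2[i] * A[i]  # project onto its hyperplane

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```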
For the second part of this work, we discuss a novel imaging modality called Magnetorelaxometry Imaging (MRX), which uses magnetic nanoparticles as a contrast agent.
This imaging modality forms one of the main pillars of this thesis.
In the course of this section, we introduce a mathematical model, which eventually leads to the formulation of the inverse problem of MRX.
Additionally, we introduce an approximation of one part of the data acquisition sequence and provide a mathematical foundation for it.
In the last part of this thesis, we evaluate the introduced concepts numerically.
For this purpose, we consider three imaging modalities.
The previously introduced MRX modality does not include any random processes; consequently, we apply a randomized reconstruction approach in this case.
However, due to the novelty of the modality, the system's properties are not yet well understood.
We thus use Computerized Tomography (CT), or X-ray tomography, as a reference for the proposed randomized reconstruction methods.
Before we translate these results to MRX, we first consider an imaging modality called Single Detector Imaging (SDI).
This method uses a randomized measurement process to generate a high resolution image with only a single sensor available.
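A minimal sketch of this measurement principle, under the assumption of random binary masks and a plain least-squares reconstruction (a compressed-sensing solver would get by with fewer measurements):

```python
import numpy as np

rng = np.random.default_rng(0)
npx = 16 * 16                            # unknown image with 256 pixels
x = rng.random(npx)                      # the "scene" to be imaged

n_meas = 400                             # oversampled set of mask patterns
masks = rng.integers(0, 2, size=(n_meas, npx)).astype(float)
y = masks @ x                            # sequential single-sensor readings

x_rec, *_ = np.linalg.lstsq(masks, y, rcond=None)
print("max reconstruction error:", np.max(np.abs(x_rec - x)))
```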
Finally, we consider the numerical implementation of the MRX imaging modality.
Here, we analyze its structural properties, study its effects on the reconstruction process and evaluate the randomized reconstruction approaches for MRX.
Data play an ever greater role in all areas of life. This development can also be observed in product development and product creation. The combination of virtual product development with the continuous and holistic use of data is referred to as "Digital Engineering". The implementation of Digital Engineering is accompanied by a profound transformation of the previous roles of the people involved and of the tools they use. The aim is to use as much of the available data as possible and to process these data with machine learning algorithms. In product creation, there are numerous geometry data (e.g., CAD models or measurement data) as well as data linked to a geometry (e.g., numerical simulations and their results). Within this dissertation, the method of spherical detector surfaces was developed, which makes it possible to transform arbitrary geometries into a uniform numerical matrix. The developed method can also be used to convert information linked to the geometry into further such uniform matrices and thus make it available to machine learning algorithms. The developed procedure is implemented for three different application examples, and all necessary substeps are described in detail. This also includes the derivation of the so-called "DNA of an FE simulation".
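As a rough illustration of the idea of mapping arbitrary geometries to a matrix of fixed size, the following toy sketch bins a point cloud by spherical angles around its centroid; this is my own simplified reading, not the exact method of spherical detector surfaces developed in the dissertation.

```python
import numpy as np

def spherical_detector_matrix(points, n_theta=32, n_phi=64):
    """Map a point cloud to an (n_theta, n_phi) matrix of radial distances."""
    p = points - points.mean(axis=0)          # center around the centroid
    r = np.linalg.norm(p, axis=1)
    theta = np.arccos(np.clip(p[:, 2] / np.maximum(r, 1e-12), -1.0, 1.0))
    phi = np.arctan2(p[:, 1], p[:, 0]) + np.pi
    i = np.minimum((theta / np.pi * n_theta).astype(int), n_theta - 1)
    j = np.minimum((phi / (2 * np.pi) * n_phi).astype(int), n_phi - 1)
    M = np.zeros((n_theta, n_phi))
    np.maximum.at(M, (i, j), r)               # keep the farthest hit per direction
    return M

pts = np.random.default_rng(0).normal(size=(1000, 3))
print(spherical_detector_matrix(pts).shape)   # uniform input for ML algorithms
```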
Liquid crystals are materials characterized by mesomorphic states between ordinary liquids and solid crystals. Due to their wide range of applications in fields like photonics, optics, materials science, and biophysics, much research on liquid crystals has been initiated in recent decades. In this thesis, we are concerned with two-phase flows of active liquid crystals, which have the ability to convert energy from the local environment into mechanical work. This activity mechanism provides possibilities to model biological phenomena like the autonomous movement of cells.
In Chapter 2, we derive a new micro-macro model for two-phase flows of active liquid crystals which consists of the Navier-Stokes equations for incompressible fluids coupled to a Smoluchowski equation and a phase-field equation. To take into account the bending and energetic properties of biological structures like cell membranes, the underlying energetic structure of the phase-field equation consists of the Willmore energy and a penalty energy for changes of the surface area of the interface. The Smoluchowski equation describes the evolution of a configurational density. Let us emphasize that we consider regimes with a high concentration of polymers; thus, our model takes into account the pairwise interaction of polymers and effects like dissipation due to friction.
Chapter 3 is devoted to the proof of the existence of global weak solutions. We first introduce a fully discrete finite element approximation of our model which is regularized from below and above. After establishing the existence of discrete solutions to the fully discrete scheme, we pass to the limit as the spatial discretization parameter and the regularization parameter from below go to zero. This yields the existence of solutions to a discrete-in-time, continuous-in-space approximation of our model. Thanks to further regularity results for the macroscopic polymer number density, we are able to improve the regularity of the microscopic number density. After establishing time regularity results, we pass to the limit in the discrete-in-time, continuous-in-space approximation as the temporal discretization parameter goes to zero and the regularization parameter from above goes to infinity to prove that the limit functions are solutions to an unregularized weak formulation of our model in two and three space dimensions.
Chapter 4 is dedicated to numerical simulations with our fully discrete finite element scheme and its implementation in the in-house framework EconDrop of the group of Prof. Dr. Günther Grün. After comparing different approaches to solve the ill-conditioned sixth-order phase-field equation, we provide simulations of self-driven active liquid crystalline droplets to demonstrate the full practicability of our scheme.
Based on Onsager's variational principle, the motion of superparamagnetic nanoparticles – which are suspended in an incompressible carrier fluid and subjected to an external magnetic field – is described by systems of partial differential equations. Various modeling assumptions lead to three different systems denoted by "model GW", "model W" and "model B". When proposed in our joint work (2019), "model GW" was – to the best of our knowledge – the first one to include evolution equations for the magnetization field and for the magnetic particle density – all of them nonlinearly coupled to the magnetostatic and the Navier-Stokes equations. The other two models use algebraic equations to determine the magnetization based on the linearized Langevin formula and are derived by us for the purpose of comparison. In the case of "model W", we assume that the suspension yields a single-phase flow with negligible mass of magnetic nanoparticles. The other model is derived under the assumption that fluid particles and magnetic particles set up a two-phase fluid. It shows similarities to the model from Himmelsbach et al. (2017). All three models follow a two-domain approach – considering the magnetic field on a possibly larger domain than the fluid domain – from which we expect higher accuracy in determining the total magnetic field. The case when the two domains coincide requires some slight changes, which are discussed when needed.
In contrast to other works in the mathematical literature, the boundary conditions of the magnetization equation in "model GW" are motivated by physical arguments only, not by practical aspects with respect to mathematical analysis. They entail H(div,curl)-regularity of the magnetic quantities – magnetization and (total) magnetic field – but possibly not H1-regularity. To establish existence of solutions to this model, the absence of H1-regularity is compensated by an intricate approximation procedure. For this, we give a meaning to the Kelvin force (m · ∇)h in the distributional sense. Existence of distributional global-in-time solutions is guaranteed under appropriate assumptions. The latter include nonlinear diffusion in the evolution equation for the magnetic particles' density and the restriction to the two-dimensional setting. An existence result for global-in-time weak solutions to a regularized model is presented also in the three-dimensional setting.
To each of the three models, we propose an unconditionally energy stable finite element scheme. The discretization of "model GW" has already been published in our joint work (2019). Here, non-conforming finite elements are used to approximate the magnetization – similar to a paper by Nochetto et al. (2015). For the first time, however, the second-order differential operators ∇ div and curl curl in the magnetization equation are discretized by introducing the operators div_h and curl_h – discrete versions of div and curl. The latter are defined by duality and yield H1-conforming approximations of the divergence and curl of a vector field. By means of Schaefer's fixed-point theorem, existence of discrete solutions is guaranteed for all three schemes. The existence results are independent of the discretization parameters.
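In LaTeX notation, a duality-based definition of this kind could read as follows; this is a sketch consistent with the description above, and the precise discrete spaces of the thesis may differ:

```latex
\begin{aligned}
(\operatorname{div}_h \mathbf{m}_h, \varphi_h)_{L^2}
  &= -(\mathbf{m}_h, \nabla \varphi_h)_{L^2}
  && \text{for all conforming test functions } \varphi_h, \\
(\operatorname{curl}_h \mathbf{m}_h, \boldsymbol{\psi}_h)_{L^2}
  &= (\mathbf{m}_h, \operatorname{curl} \boldsymbol{\psi}_h)_{L^2}
  && \text{for all conforming test fields } \boldsymbol{\psi}_h,
\end{aligned}
```

so that div_h m_h and curl_h m_h are H1-conforming representatives of the corresponding distributional derivatives.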
The thesis concludes with simulations that serve as a proof of concept for the models and their numerical schemes. The three models are compared in the case of linear diffusion and the case of nonlinear diffusion which has been used in the analysis part of this thesis. For simplicity, most simulations are performed under the assumption that the domain of the magnetic field coincides with the fluid domain. Using "model W" – for practical reasons – the effects of multiple different external magnetic fields are examined as well as the impact of using a strictly larger domain for the magnetic field.
Atherosclerosis is a disease of the arteries which can cause a reduction or complete blockage of blood flow and may thus lead to heart attacks or strokes, two of the most common causes of death worldwide. This motivates mathematical modelling and simulation of the disease. The timescale of disease progression, driven by the growth of plaque in the artery wall over years, differs greatly from other processes which are assumed to be relevant, e.g. the shear stresses exerted by the blood flow on the artery wall, which oscillate with the heartbeat every second. This prevents a direct numerical simulation of such models, since long timescales would have to be resolved with a very fine step size. The goal of this thesis is to construct approximations which are simpler to solve numerically, through the use of multiscale methods, and to prove quantitative convergence results. Motivated by a model by Yang et al. (2015), we investigate two simplified submodels of atherosclerosis for which a rigorous multiscale analysis is possible. The first model studies slow plaque growth coupled to fast oscillating shear stresses caused by the blood flow. Mathematically this is realized through a slow ordinary differential equation coupled to a fluid equation with rapidly oscillating boundary conditions and a growth-dependent, non-cylindrical space-time domain. The second model investigates substances quickly advected through the artery but only slowly diffusing into the semi-permeable wall. It consists of a system of coupled advection-diffusion and diffusion-reaction equations. With a small parameter epsilon, which expresses the timescale separation, the behavior of the solutions to these models in the limit epsilon to 0 is investigated. Both models are singularly perturbed, meaning that their solutions converge to functions which solve a differential equation of different type. For the first model it is shown that the solution converges with order O(epsilon) to the solution of a limit equation which averages the effect of a time-periodic fluid equation. The second model yields a limit consisting of a coupled advection and diffusion-reaction equation. The order of convergence depends on the solution regularity and the behavior of the advection field. For, e.g., the stationary problem and Poiseuille flow it is shown that the spatial L2- and H1-errors are of order O(epsilon^(1/2)) and O(epsilon^(1/6)), respectively, in the advection domain. Inside the wall the H1-error is of order O(epsilon^(1/3)). The derivation of this result combines qualitative convergence theory for advection-diffusion equations in the vanishing diffusion limit with a specific trace estimate for the coupling through the permeable wall. Numerical calculations are carried out for both models. For the plaque growth the focus lies on the solution of the time-periodic Navier–Stokes equation required for the limit system; an existing algorithm from the literature is improved here. Furthermore, the error of the time-discrete equation is analyzed, which quantifies and emphasizes how the errors made in the different solution steps must be balanced for efficiency. For the second model a discontinuous Galerkin discretization is proposed and the agreement between theoretical and numerical results is shown.
This thesis deals with the numerical analysis and implementation of a simulation method for homogenized micro-macro models as they arise, for example, in cell biology. On the macroscopic scale, we consider reactive multi-component transport within the cytosol. The macroscopic solution is coupled, via nonlinear exchange terms, to the so-called microscopic solution at every point of the macroscopic domain. The microscopic solution is, at every point of the macroscopic domain, a function on the microscopic domain. At each macroscopic point, it is governed by reactive multi-component transport and a nonlinear coupling to the macroscopic solution at that point.
We are thus dealing with a micro-macro model with bidirectional scale coupling. The numerical treatment of the model is further complicated by time derivatives on both scales and by the large number of biochemical species that is typical in biology.
For the numerical treatment of the problem, we propose a so-called finite element squared (FE2) method. Here, the macroscopic problem is discretized by a finite element method, and a local microscopic problem, which is also discretized by finite elements, is attached at every quadrature point. We choose mixed finite elements after Raviart and Thomas in space on both scales. In time, we choose the implicit Euler method.
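A toy sketch of this two-scale structure (a drastic simplification using finite differences instead of mixed Raviart-Thomas elements): a 1D macroscopic diffusion equation exchanges a source term with a microscopic relaxation ODE attached to every macroscopic grid point, and each implicit Euler step is resolved by a fixed-point sweep between the scales.

```python
import numpy as np

nx, ny = 50, 20                     # macro grid points, micro states per point
h, dt, nsteps = 1.0 / (nx - 1), 1e-2, 50
lap = (np.diag(-2.0 * np.ones(nx)) + np.diag(np.ones(nx - 1), 1)
       + np.diag(np.ones(nx - 1), -1)) / h**2   # 1D Laplacian (zero Dirichlet ghosts)

u = np.ones(nx)                     # macroscopic concentration
v = np.zeros((nx, ny))              # one microscopic state vector per macro node
M = np.eye(nx) - dt * lap + dt * np.eye(nx)     # macro implicit Euler matrix

for step in range(nsteps):
    u_old, v_old = u.copy(), v.copy()
    for sweep in range(5):          # fixed-point sweep coupling the two scales
        vbar = v.mean(axis=1)       # micro average feeds the macro exchange term
        u = np.linalg.solve(M, u_old + dt * vbar)   # du/dt = lap u + (vbar - u)
        v = (v_old + dt * u[:, None]) / (1.0 + dt)  # dv/dt = u - v, implicit

print("macro solution range:", u.min(), u.max())
```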
We investigate convergence with respect to the discretization parameters in space and time. The known theory for problems on a single length scale is extended to general Hilbert spaces. Special attention is paid to dual error bounds for the stationary and the time-dependent case. If, instead of the usual spaces of the single-scale case, we use the Hilbert spaces underlying a multiscale formulation, we immediately obtain convergence order estimates for the FE2 method. For the pair of macroscopic and microscopic solution we obtain the expected result: the macroscopic and the microscopic problem should be discretized equally finely.
Usually, one is only interested in the solution of the macroscopic problem. In this case, we can exploit dual error bounds and derive an improved error estimate.
Here, surprisingly, twice the convergence rate with respect to the spatial discretization parameter of the microscopic problem emerges.
This effect can be traced back to dual error estimates and can be used to drastically reduce the computational cost of a multiscale simulation.
The nonlinear solver used for the simulation plays a central role in the efficiency of the simulation. It turns out that splitting-based solvers can exploit the structure inherent in the discretization in an advantageous way, but entail restrictions on the time step size which become prohibitive when applied to real problems. With respect to the time step size, Newton's method turns out to be an excellent choice. A closer look at the structure originating from the discretization reveals that the Schur complement can be used to gain advantages in terms of memory requirements, computing time, and scaling on distributed systems. The quadratic convergence speed leads to a small number of iterations and makes Newton's method the best choice for the nonlinear solver.
A frequent point of criticism of the FE2 method is that, especially for three-dimensional problems, systems of equations quickly arise which, due to their size, can only be solved with enormous computational effort and therefore only on high-performance computers. Using examples, we will show that the combination of the presented algorithms and results makes it quite possible to solve three-dimensional problems in reasonable computing time on standard computers.
Furthermore, we consider an example from cell biology. More precisely, we investigate the difference between enzyme localization in the cytoplasm and enzyme localization on the surfaces of the mitochondria. If a larger share of the enzymes is located on the mitochondrial surfaces, this leads to an increase of the pyruvate production rate. This effect can also be observed in nature and is called metabolic channeling.
In this thesis, problems in material and topology optimization of a linear elastic continuum motivated by applications in additive manufacturing are studied. More precisely, specific material characteristics arising in layered additive manufacturing are considered, but the analysis is done in great generality to also incorporate other applications.
First, simultaneous optimization of topology and parametrized material is discussed. Anisotropic material properties and a material with a graded, oriented microstructure are considered as applications from additive manufacturing, as well as more abstract parametrizations for theoretical comparisons with global lower bounds. To be able to constrain the local change of the material grading, pointwise bounds on design variable gradients are imposed ("slope constraints"). Furthermore, a filtering technique based on a convolution product ("density filtering") is applied, as sketched below, and a new estimate of the maximal gradient of filtered design variables is derived. For the combination of these two known regularization methods and the suggested general formulation, a novel proof for the existence of solutions and convergence of the finite element discretization is conducted.
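A sketch of such a convolution-based density filter (the standard form used in topology optimization; the radius and hat-shaped weights are illustrative assumptions):

```python
import numpy as np

def density_filter(rho, r):
    """Filter a 2D design field rho with linear (hat) weights of radius r."""
    nely, nelx = rho.shape
    out = np.zeros_like(rho)
    R = int(np.ceil(r))
    for i in range(nely):
        for j in range(nelx):
            wsum, val = 0.0, 0.0
            for di in range(-R, R + 1):
                for dj in range(-R, R + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < nely and 0 <= jj < nelx:
                        w = max(0.0, r - np.hypot(di, dj))  # hat kernel weight
                        wsum += w
                        val += w * rho[ii, jj]
            out[i, j] = val / wsum     # weighted local average of the design
    return out

rho = np.random.default_rng(0).random((40, 60))
rho_f = density_filter(rho, r=2.5)     # filtering bounds the gradient of rho_f
```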
Second, the problem of nonlinear material parametrizations potentially leading to suboptimal local solutions is studied. For this, an asymptotic expansion of the compliance is used. This approximation is separable between different finite element patches, which makes it possible to find the globally optimal material for the model. While the formula in principle only holds for elliptic inclusions within a matrix material, numerical tests suggest a good performance of the proposed method. Subsequently, an extension for the simultaneous optimization of a topology is introduced and tested numerically.
Finally, a method for topology optimization with uncertainties in the material properties, such as in additive manufacturing, is developed. The uncertainties are handled using a worst-case approach, i.e., they are always distributed such that the structure is maximally weakened, meaning the compliance is maximized. The resulting optimization problem is a minimax problem with a large number of variables. A relaxation of the inner maximization problem is suggested, which allows the solution of the minimax problem through the minimization of an optimal value function. A Tikhonov-type and a barrier regularization scheme are suggested, which render the resulting minimization problem continuously differentiable. The barrier regularization scheme is studied in more detail, as it can be closely linked to a highly efficient interior point approach for a precise evaluation of the optimal value function and its gradient. Examples from additive manufacturing as well as material degradation are examined. Lastly, the method is extended to solve the original problem without relaxation by using a RAMP-type continuation approach from the concave to the original model.
Problems with free surfaces are ubiquitous in nature. The propagation of those surfaces is affected by various parameters, some of which are uncertain. Such effects can be interpreted as random noise and may lead to probabilistic terms in modeling [13, 15, 27, 34]. The scope of this thesis is to analyze the impact of stochastic terms in the Itô sense on the propagation of solutions. In particular, we concentrate on porous-medium equations and parabolic p-Laplace equations.
Concerning stochastic porous-medium equations, we derive upper bounds on average waiting times and criteria for instantaneous propagation. Moreover, we present numerical experiments on average propagation.
Regarding stochastic degenerate-parabolic p-Laplace equations, we prove finite speed of propagation as well as sufficient conditions for the pathwise occurrence of waiting times. Afterwards, we derive a finite-element scheme for stochastic degenerate-parabolic p-Laplace equations that is nonnegativity-preserving and convergent. Finally, the quantitative propagation of solutions subjected to noise is investigated by means of numerical simulations.
The discontinuous Galerkin method for free surface and subsurface flows in geophysical applications
(2020)
Free surface flows and subsurface flows appear in a broad range of geophysical applications, and in many environmental settings situations arise which even require the coupling of free surface and subsurface flows. Many of these application scenarios are characterized by large domain sizes and long simulation times. Hence, they need considerable amounts of computational work to achieve accurate solutions, and the use of efficient algorithms and high performance computing resources is mandatory to obtain results within a reasonable time frame.
Discontinuous Galerkin methods are a class of numerical methods for solving differential equations that share characteristics with methods from the finite volume and finite element frameworks. They feature high approximation orders, offer a large degree of flexibility, and are well-suited for parallel computing.
This thesis consists of eight articles and an extended summary that describe the application of discontinuous Galerkin methods to mathematical models including free surface and subsurface flow scenarios with a strong focus on computational aspects. It covers discretization and implementation aspects, the parallelization of the method, and discrete stability analysis of the coupled model.
Transparent and conductive thin films find broad application in optoelectronic devices such as touchscreens and solar panels, and are intended to satisfy two opposing requirements.
On the one hand, these films should appear transparent in visible light, which means that a large amount of light can pass through such a film.
On the other hand, electric energy induced by an applied voltage should be transported with low electric resistance.
Besides the transparent and conductive thin films, we want to consider particle monolayers as another class of photonic nanostructures.
The particle monolayers are utilized, for instance, to control the diffuse scattering behavior of photodetectors used in solar cells.
This optical property is quantified by the haze factor.
Experiments show that the design, which includes both the material composition and the overall shape of the photonic nanostructures, has a noteworthy influence on the performance with respect to the intended purpose.
The main objective of this thesis is to optimize the design of such photonic nanostructures with respect to transmission, conductivity, and haze factor by changing the material, the shape, and the geometry using gradient-based algorithms.
Before the individual optimization problems are specified, analytical and numerical solution methods for the involved partial differential equations to determine the optical and electrical properties are discussed.
The electromagnetic scattering of a single spherical particle and of assemblies of spherical particles is formulated in terms of fundamental solutions of Maxwell's equations, i.e., the vector spherical wave functions.
In this context, the order of convergence of dedicated errors is numerically studied with respect to various parameters.
In particular, the numerical evaluation of the haze factor for particle monolayers consisting of non-spherical particles is challenging, and a suitable numerical solution scheme has been developed.
For this purpose, the Finite Element Method and a spectral method based on vector spherical wave functions are combined to a two-stage hybrid simulation scheme in which computationally expensive tasks can be computed in a so-called offline stage.
Building on these solution methods, sophisticated algorithms for the optimization of material, shape, and geometry accomplish the gradient-based design optimization of the photonic nanostructures.
This thesis deals with mathematical modeling, analysis, and numerical realization of microaggregates in soils. These microaggregates have the size of a few hundred micrometers and can be understood as the fundamental building units of soil. Thus, understanding their dynamically evolving, three-dimensional structure is crucial for modeling and interpreting many soil parameters such as diffusivities and flow paths that come into play in CO2-sequestration or oil recovery scenarios.
Among others, the following aspects of the formation of microaggregates should be incorporated into a mathematical model and investigated in more detail: the spatial heterogeneity of the temporally evolving structure of microaggregates and the different processes that take place on different scales — temporal and spatial — within the so-called micro-scale itself. This work aims at formulating a process-based pore-scale model, where all chemical species are measured in concentrations. That is, we have a continuous model for reactive transport mainly in terms of partial differential equations (PDEs) with algebraic constraints. This continuous model is defined on a discrete and discretely moving domain whose geometry changes according to the rules of a cellular automaton method (CAM). These rules describe the restructuring of the porous matrix, growth and decay of biomass, and the resulting topological changes of a wetting fluid and a gas phase. The cellular automaton rules additionally imply stochastic aspects that are important on the pore-scale.
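To convey the flavor of such rules, here is a toy stochastic cellular automaton in which solid cells perform random moves that favor aggregation; this is a deliberately crude stand-in, far simpler than the CAM restructuring rules of the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
grid = (rng.random((N, N)) < 0.2).astype(int)   # 1 = solid matrix, 0 = pore
steps = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def contacts(g, i, j):
    """Number of solid von Neumann neighbours of cell (i, j), periodic."""
    return sum(g[(i + di) % N, (j + dj) % N] for di, dj in steps)

for _ in range(200_000):
    i, j = rng.integers(N, size=2)
    if grid[i, j] == 0:
        continue
    di, dj = steps[rng.integers(4)]
    ni, nj = (i + di) % N, (j + dj) % N
    if grid[ni, nj] == 0:
        grid[i, j] = 0                 # tentatively move the solid cell
        if contacts(grid, ni, nj) >= contacts(grid, i, j) or rng.random() < 0.1:
            grid[ni, nj] = 1           # accept: aggregation is favoured
        else:
            grid[i, j] = 1             # reject: undo the move
```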
Moreover, effects and knowledge deduced from the model are transferred to scales which are more relevant for applications. The quality of these averaged models is of general interest, since simulations at the field-scale that resolve the pore-scale are not feasible for economic reasons. Thus, this book compares parameterizations of diffusivities with mathematically rigorous results and gives suggestions to improve the formulas that can be found in the literature.
The discrete movement of the microaggregates' geometry at the micro-scale poses mathematical problems. The following question arises: can the averaged quantities deduced from the pore-scale really be used for models on other scales, or are the impacts of the artificial temporal jumps too detrimental for the solutions on other scales to be accurate? This problem is also dealt with, and the reliability of the obtained parameters is underlined.
Last but not least, it is imperative to apply a proper numerical method to implement the model in silico. The local discontinuous Galerkin (LDG) method seems to be suitable for this task, since it is locally mass-conservative and is stable for discontinuous data — that might, for example, originate from the discrete movement of the geometry or from the sharp boundaries between the different phases. Additionally, this method has no problems with complicated transfer conditions. These aspects are demonstrated in a mathematically rigorous way, and the method is improved upon by reducing the linear system of equations resulting from the discretization. This is a real enhancement, since it does not diminish the order of convergence but decreases the computational costs.
This thesis is dedicated to the analysis, control, and optimization of switched systems of abstract differential equations. A main focus lies on the special case of semilinear hyperbolic systems, including models describing the gas flow in pipe networks. First, a hierarchy of such models is introduced, together with the necessary graph-theoretical basics for the extension of these models to networks. We show how the models can be presented in a uniform formulation as semilinear hyperbolic initial boundary value problems. For systems of this kind, a comprehensive solution theory is then developed; we prove the existence and uniqueness of solutions and characterize their behavior if the initial and boundary values exhibit jumps. Furthermore, we consider the temporal switching between several such systems and prove a general result for the well-posedness of feedback-controlled switching processes. The example of gas networks with active elements such as valves, check valves, and compressors demonstrates the achieved results. In a more abstract framework, we formulate an optimization problem for switched systems of evolution equations with strongly continuous semigroups. We specify optimization criteria and formulate an adjoint-based calculus that allows an efficient evaluation of gradients. The results are embedded in an alternating-direction method, for which we present suitable convergence concepts. In addition to other applications in the field of ordinary, delayed, and partial differential equations, we again cover the example of the optimization of gas networks. To this end, the numerical implementation is discussed, especially methods and schemes used for the simulation and optimization of gas networks. We present two application examples showing the suitability of our methods for the optimal switching of active elements in gas networks as well as optimized model selection balancing accuracy with computational effort.
In this thesis, we consider model reduction for parameter-dependent parabolic PDEs defined on networks with variable composition. For this type of problem, the Reduced Basis Element Method (RBEM), developed by Maday and Rønquist, is a reasonable choice, as a solution on the entire domain is not required. The reduction method is based on the idea of constructing a reduced basis for every individual component and coupling the reduced elements using a mortar-like method. However, this decomposition procedure can lead to difficulties, especially for networks consisting of numerous edges. Due to the variable composition of the networks, the solution on the interfaces is extremely difficult to predict. This can lead to unsuitable basis functions and poor approximations of the global solutions.
On the basis of networks consisting of one-dimensional domains, we present an extension of the RBEM which remedies this problem and provides a good basis representation for each individual edge. Essentially, this extension makes use of a spline-based boundary parametrization in the local basis construction. To substantiate the approximation properties of the basis representation with respect to the global solution, we develop an error estimate for local basis construction with Proper Orthogonal Decomposition (POD) or POD-Greedy. Additionally, we provide existence, uniqueness, and regularity results for parabolic PDEs on networks with one-dimensional domains, which are essential for the error analysis.
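For reference, a generic sketch of a POD basis construction via the singular value decomposition (the network-specific, spline-parametrized construction of the thesis is more involved):

```python
import numpy as np

def pod_basis(snapshots, energy=0.9999):
    """snapshots: (n_dof, n_snap) matrix; returns a basis meeting an energy tolerance."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cum, energy)) + 1   # smallest basis capturing the energy
    return U[:, :r]

# toy snapshots: decaying heat profiles on a single 1D edge (assumption)
x = np.linspace(0.0, 1.0, 200)
S = np.column_stack([np.exp(-t) * np.sin(np.pi * x)
                     + 0.1 * np.exp(-4 * t) * np.sin(3 * np.pi * x)
                     for t in np.linspace(0.0, 1.0, 50)])
V = pod_basis(S)
print("reduced dimension:", V.shape[1])          # two modes suffice here
```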
Finally, we illustrate our method with three examples. The first corresponds to the theory presented and shows two different networks of one-dimensional heat equations with varying thermal conductivity. The second and third problems demonstrate the extensibility of the method to component-based domains in two dimensions and to nonlinear PDEs. These examples were part of the research project Life-cycle oriented optimization for a resource and energy efficient infrastructure, funded by the German Federal Ministry of Education and Research.
Stabilization Techniques for the Finite Element Method Applied to Advection Dominated Problems
(2018)
This thesis in the mathematical field of numerics of partial differential equations deals with different stabilization techniques for finite element discretizations of advection-dominated problems. The underlying advection–diffusion equation is used, e.g., in hydrogeology, where it describes the transport of groundwater-dissolved matter through the porous soil matrix at an averaged scale. The transport is caused by the actual groundwater flow ("advection") as well as by Brownian particle motion and grain-structure-related dispersion ("diffusion"). When discretizing the advection–diffusion equation in space using the classical finite element method (FEM), unphysical oscillations occur if advection dominates diffusion. The quotient of these two quantities defines the Péclet number. For high Péclet numbers, the numerical solution also attains negative concentration values.
Three different stabilization techniques are considered in this thesis: the first one uses a finite volume discretization for the advective part. For equations of hyperbolic character, finite volume methods are known to have better properties than the FEM. The second method is the streamline diffusion method (streamline upwind Petrov–Galerkin, SUPG) for polynomial degrees one and two. Finally, a not yet widely established method is considered: algebraic flux correction (AFC). AFC limits the mass flux between degrees of freedom such that a given prescribed principle is preserved. For example, one can guarantee the discrete non-negativity of concentrations using the AFC method. This is desirable as negative concentrations are physically not meaningful. AFC is formulated on the algebraic level, not on the variational one, and introduces additional non-linearities. Those can be handled by a fixed-point iteration or an appropriate linearization. In any case, AFC turns out to have the highest computational costs compared to the other two stabilization methods. While one can apply SUPG for arbitrary polynomial degrees, the finite volume stabilization and AFC in the standard "linear mass lumping" version can only be applied using first-order polynomials.
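For illustration, a sketch of the element Péclet number and the classical SUPG stabilization parameter for linear elements (a textbook choice; the exact parameter used in the thesis may differ):

```python
import numpy as np

def peclet(b_norm, h, eps):
    """Element Peclet number for advection |b|, mesh size h, diffusion eps."""
    return b_norm * h / (2.0 * eps)

def tau_supg(b_norm, h, eps):
    """Classical SUPG parameter tau = h/(2|b|) * (coth(Pe) - 1/Pe)."""
    pe = peclet(b_norm, h, eps)
    if pe < 1e-10:
        return 0.0
    return h / (2.0 * b_norm) * (1.0 / np.tanh(pe) - 1.0 / pe)

# advection-dominated example: Pe >> 1, so tau approaches h / (2 |b|)
print(peclet(1.0, 0.01, 1e-6))    # 5000.0
print(tau_supg(1.0, 0.01, 1e-6))  # ~0.005
```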
The goal of this work is the efficient solution of high-dimensional elliptic partial differential equations on sparse grids.
For the discretization, the Galerkin approach and the finite element method are used.
An efficient implementation of the matrix-vector multiplication scales linearly with the number of grid points.
To this end, an algorithm is presented which, through a combination of restrictions and prolongations, collects and distributes the hierarchical surpluses of all grids.
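The one-dimensional building block of such hierarchization algorithms can be sketched as follows (piecewise-linear hat functions; the prewavelet and multi-dimensional versions used in this work are more involved):

```python
import numpy as np

def hierarchize_1d(u):
    """Nodal values at x_j = j / 2**L, j = 0..2**L -> hierarchical surpluses."""
    v = u.astype(float).copy()
    n = len(u) - 1                     # n = 2**L cells
    L = int(round(np.log2(n)))
    for level in range(L, 0, -1):      # finest level first: parents still untouched
        step = 2 ** (L - level)        # index distance to the two parents
        for j in range(step, n, 2 * step):
            v[j] -= 0.5 * (v[j - step] + v[j + step])   # hierarchical surplus
    return v

x = np.linspace(0.0, 1.0, 2**4 + 1)
print(hierarchize_1d(x * (1.0 - x)))   # smooth data: surpluses shrink by 1/4 per level
```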
F\"ur d\"unne Gitter muss allerdings auf den Austausch zwischen einigen Gittern verzichtet werden.
Dies hat aufgrund der Semi-Orthogonalit\"at keinen Einfluss auf die Konvergenz, da Prewavelets f\"ur \"uberlappende Gebiete $L^2$-orthogonal sind.
Bei variablen Koeffizienten zeigt ein Konvergenzbeweis, dass diese Werte sehr klein sind und keinen Einfluss auf die Konvergenz haben.
Die Konvergenzordnung der D\"unngitterdiskretisierung reduziert sich im Vergleich zu vollen Gittern nur leicht, w\"ahrend die Anzahl der Unbekannten dramatisch abnimmt.
Numerische Ergebnisse mit variablen Koeffizienten zeigen optimale Konvergenz f\"ur dreidimensionale und sechsdimensionale Probleme.
Transformationen f\"ur krummlinig umrandete Gebiete besitzen variable Koeffizienten und beruhen nicht auf einem Tensorprodukt.
Die Berechnung der Steifigkeitsmatrix verwendet eine hochdimensionale numerische Integration.
Multiplikationen werden durch Rekursion auf eindimensionale Operatoren zur\"uckgef\"uhrt.
Dazu ist im Rahmen dieser Arbeit eine umfangreiche Softwarebibliothek entstanden.
Durch die Verwendung von Templates kann die Implementierung auf beliebige hochdimensionale Probleme angewendet werden.
Parallelisierung sowohl f\"ur geteilten Speicher als auch verteilte Systeme gew\"ahrleistet eine hohe Genauigkeit f\"ur gro\ss{}e Dimensionen.
Ein Ansatz mit semi-adaptiver Verfeinerung erm\"oglicht partielle Verfeinerung bei gleichzeitigem Erhalt der Tensorprodukt-Struktur.
Die Umsetzung adaptiver d\"unner Gitter wird als Ausblick behandelt.
Nonlocal balance laws are nonlinear partial integro-differential equations that play a major role in the modeling of real world phenomena.
From the description of ripening processes in nanoparticle synthesis up to the macroscopic modeling of vehicular traffic flow, these equations are of crucial importance.
In the presented work, a general formulation of one-dimensional nonlocal balance laws is analyzed with respect to existence, uniqueness, and regularity of weak solutions.
The Kruzhkov entropy condition is widely used in the context of balance laws to obtain uniqueness of weak solutions. The presented work shows that the entropy condition is dispensable for the discussed class of nonlocal balance laws.
An analytical representation of the weak solution is derived based on the method of characteristics and a fixed-point mapping in function space. The solvability of the fixed-point mapping for small times is proven by Banach's fixed-point theorem. A time horizon on which the weak solution exists is obtained by chaining together these small time steps. The resulting bound is shown to be sharp in special cases.
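The core of this construction can be sketched in a few lines: for small time steps the characteristic map is a contraction, so Banach's theorem applies. The velocity law and the frozen nonlocal quantity below are illustrative assumptions.

```python
import numpy as np

def characteristic_foot(x, dt, lam, q, tol=1e-14, max_iter=100):
    """Solve xi = x - dt * lam(q(xi)) by fixed-point (Banach) iteration."""
    xi = x
    for _ in range(max_iter):
        xi_new = x - dt * lam(q(xi))
        if abs(xi_new - xi) < tol:
            return xi_new
        xi = xi_new
    raise RuntimeError("no contraction -- time step too large?")

q = lambda x: np.exp(-x**2)            # frozen nonlocal quantity (toy)
lam = lambda w: 1.0 - w                # traffic-like velocity law (toy)
print(characteristic_foot(0.3, 0.05, lam, q))
```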
Commonly, weak solutions to nonlocal balance laws are approximated numerically using problem-specific finite volume schemes. To get rid of the inherent numerical dissipation, an alternative numerical scheme is deduced based on the analytical representation of the weak solution. To this end, a semi-discretization based on the method of characteristics and a piecewise constant representation of the solution is introduced and analyzed with respect to convergence.
Depending on the global as well as the piecewise regularity of the data, a priori error estimates are derived. To this end, the whole spectrum from initial data of bounded variation up to Lipschitz-continuous initial data is analyzed.
The presented numerical scheme and parts of the analytical representation of the weak solution are applied to examples from traffic flow modeling and nanoparticle synthesis. Thereby, the high accuracy of the presented scheme is discussed in comparison to published simulation results. With the introduced numerical method, discontinuities of the initial data are tracked over time and are not smoothed out; thus, the character of the solution is represented more accurately. In the context of nanoparticle synthesis, optimal process conditions leading to particle size distributions with small dispersity are determined exemplarily.
This thesis presents the three-dimensional modeling, discretization, implementation, and simulation of additive manufacturing processes using the example of electron beam melting (EBM). The fluid dynamics of the liquefied melt pool are modeled by the incompressible Navier-Stokes equations, and the incorporation of energy by the heat equation. The applied numerical scheme is a thermal multi-distribution lattice Boltzmann method (LBM) allowing an efficient parallel implementation. The liquid phase of the melt pool and the gas phase of the atmosphere are separated by the free surface lattice Boltzmann method (FSLBM), which does not compute the dynamics of the gas phase explicitly but sets a boundary condition at the interface. Furthermore, the electron beam gun and the metal powder particles are explicitly modeled. A realistic particle size distribution is achieved by using an inverse Gaussian distribution. For the absorption of energy, two different algorithms are derived, depending on the acceleration voltage of the electron beam.
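To fix notation, a minimal single-phase D2Q9 BGK lattice Boltzmann step (stream and collide only) is sketched below; the thermal coupling, free surface treatment, and particle interaction of the thesis lie far beyond this toy.

```python
import numpy as np

nx, ny, tau = 100, 50, 0.8             # lattice size and relaxation time
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])

f = np.ones((9, nx, ny)) * w[:, None, None]   # start at rest with rho = 1

def equilibrium(rho, ux, uy):
    cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
    return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*(ux**2 + uy**2))

for step in range(100):
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    ux = ux + 1e-5                     # crude body force via velocity shift (assumption)
    f += -(f - equilibrium(rho, ux, uy)) / tau      # BGK collision
    for i in range(9):                 # periodic streaming along lattice links
        f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)

print("mean velocity:", ux.mean())
```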
Most of the EBM-specific algorithms are embedded in the highly parallel lattice Boltzmann framework waLBerla. The metal powder particles are simulated by the likewise highly parallel physics engine pe.
Within the coupling, particles are represented as rigid bodies in the pe and treated as boundaries in the LBM scheme of waLBerla. Both frameworks run on state-of-the-art supercomputers. The EBM application and its implementation are validated by benchmarks for which analytical solutions are known. Moreover, the simulation results are compared to experimental data with respect to product quality, in order to avoid porosity and ensure dimensional accuracy. Since the numerical and experimental data are highly concordant, the implemented EBM model is suitable for developing new processing strategies to improve the quality of the products. The simulations support machine users and developers in finding an optimal parameter set for specific parts.
Lastly, the accuracy order of the applied free surface boundary condition is examined via the Chapman-Enskog expansion, since it has a huge influence on the simulation results. It is established that the original FSLBM boundary condition is only first-order accurate in general cases; since the LBM is second-order accurate, the overall accuracy is reduced by applying the FSLBM. To overcome this deficiency, an improved second-order FSLBM boundary condition is successfully derived. The importance and correctness of this new FSLBM boundary condition is finally underlined by a thorough validation against analytical calculations and experiments.
The increasing interest in microfluidic applications in recent years has resulted in a constantly growing demand for computational fluid dynamics on micro-/mesoscopic scales. The corresponding methods have evolved from a topic mainly of relevance for fundamental research to a crucial tool for engineering applications. The physically correct modeling of the boundary interactions is critical on these scales, as the influence of boundary effects increases inversely proportionally with the characteristic length. Mesoscopic particle-based approaches, that is, methods where the fluid is modeled using discrete particles, are able to capture these effects correctly. The applicability of many existing simulation tools, however, is rather limited with respect to systems of complicated shape. Yet such shapes are necessary for microfluidic devices in order to perform, for example, the continuous sorting of cells. Furthermore, as the computational cost of mesoscopic particle-based methods is high, the simulation tools have to be optimized for modern high-performance computing systems and be able to exploit the parallelism of today's supercomputers.
The aim of this work is to develop methods and algorithms required for a general, yet efficient simulation framework able to handle boundary conditions of complicated shape. The description of the computational domain using unstructured grids, combined with additional obstacles defined by combinations of geometric primitives, is suggested. This hybrid approach combines the universality of unstructured grids constructed by applying conventional meshing tools with the fully analytical description of boundary surfaces. This analytical description aligns conceptually with the particle-based modeling of fluids and completely avoids the necessity to perform a discretization of the boundary. We devise an efficient and numerically robust algorithm for the maintenance of neighbor lists in such complicated geometries. Various physically motivated boundary conditions are tightly integrated into this algorithm, allowing for the simulation of fluid flows in complex geometries. A thorough validation using standard test cases is performed to ensure the correct working of the implemented particle models. Several example applications for flows through complicated geometries are presented, such as the flow around a particle cluster and the flow through a highly porous medium.
Using the unstructured grid as a basis, a fully object-oriented simulation framework is developed, which is easily extendible by separating the physical models from the data handling and parallelization scheme. The efficiency of the proposed parallelization scheme is assessed via benchmark runs on different machines, highlighting the versatility and quality of the simulation tool.
This thesis is concerned with the design and analysis of an interface-fitted finite element strategy for the simulation of free and moving boundary problems with sharp interfaces. The presented strategy falls into the class of interface tracking methods based on the Arbitrary Lagrangian-Eulerian (ALE) formulation. The main goal is to resolve the fundamental drawback of mesh degeneration typical of moving mesh strategies while preserving all other benefits, in particular a well-defined, accurate, and explicit representation of the moving geometry in space and time. This representation allows for a straightforward definition and implementation of problem-tailored finite element spaces.
Parametric finite element spaces based on curved elements serve as the baseline of the presented approach and are introduced and discussed in the context of a prototypical elliptic interface problem. A solution to the associated problem of generating admissible, interface-fitted parametrizations is presented in terms of a variational mesh optimization approach. In this approach, a mesh quality functional subject to an alignment constraint is minimized, which leads to interface-aligned, optimal, and non-degenerate parametrizations.
For time-dependent problems, interface-aligned parametrizations are constructed such that they behave smoothly in time within each time slab, while a re-parametrization taking place in between two successive time slabs is permitted. This strategy yields parametric finite element spaces that are discontinuous in time. Consequently, a discontinuous Galerkin (dG(k)) approach in time is used to construct fully discrete schemes. For the lowest order variant dG(0), an a priori error analysis is conducted to show that the proposed strategy leads to convergence rates that are suboptimal by one order with respect to space in the worst case. A strategy to solve the arising space-time systems for higher-order dG(k) methods is proposed. This strategy is based on a transformation of the space-time system into real block diagonal form, such that an efficient, preconditioned Schur complement formulation for the arising 2 × 2 blocks can be used. It is shown that the preconditioned system possesses a condition number that is uniformly bounded by 2.
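A generic sketch of such a Schur-complement solve for one 2 × 2 block system (textbook version; the thesis pairs it with a dedicated preconditioner for its space-time blocks):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # invertible upper-left block
B, C = rng.standard_normal((n, n)), rng.standard_normal((n, n))
D = np.eye(n) + 0.1 * rng.standard_normal((n, n))
b1, b2 = rng.standard_normal(n), rng.standard_normal(n)

Ainv_B = np.linalg.solve(A, B)
Ainv_b1 = np.linalg.solve(A, b1)
S = D - C @ Ainv_B                      # Schur complement of A
x2 = np.linalg.solve(S, b2 - C @ Ainv_b1)
x1 = Ainv_b1 - Ainv_B @ x2              # back-substitution for the first block

full = np.block([[A, B], [C, D]])
print(np.allclose(full @ np.concatenate([x1, x2]), np.concatenate([b1, b2])))
```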
The last part of this thesis addresses the application of the strategy to free boundary problems. In the context of a general flow problem in the presence of an interface force, governing equations are introduced, a fully discrete space-time formulation is derived, and an efficient first-order in time method is proposed. Two different application scenarios, a two-phase flow example with surface tension and a fluid-structure interaction problem, are considered to validate and evaluate the proposed approach, and to show its benefits and limitations.