Numerical investigation and extension of quadrature-based moment methods for population balances
(2023)
Particulate systems can be described by a number density function (NDF) with respect to a vector of internal coordinates. The evolution of the NDF is governed by the typically high-dimensional population balance equation (PBE). A common approach to reduce the dimensionality of the problem is to solve only for a set of moments instead of the NDF. The derived system of moment equations, however, includes unclosed integral terms that still contain the unknown NDF. One way to close the system of moment equations is to approximate the unclosed integral terms using a Gaussian quadrature computed from the moments. The procedure of taking a set of moments to compute a Gaussian quadrature, which is, in turn, used to close the moment equations, is known as the quadrature method of moments (QMOM). It gave rise to an entire family of methods, the quadrature-based moment methods (QBMMs), which are the primary focus of this work. The presented research can be divided into three major parts. The first part involves the formulation of a common Lagrangian droplet breakup model for QBMMs and the numerical investigation with the QMOM as well as the more sophisticated extended QMOM (EQMOM). The results indicate that the approximations are reasonably accurate when at least six moment equations are solved, with the EQMOM providing no advantages for the investigated configurations. In the second part, a quadrature-based moment model for the effects of fluid turbulence on particle velocities is formulated. The resulting moment equations contain non-smooth integrands that are the source of large errors when common QBMMs are used. As an alternative, the Gauss/anti-Gauss QMOM (GaG-QMOM) is proposed, which uses the average of a Gaussian and an anti-Gaussian quadrature. Numerical studies show that the GaG-QMOM is able to significantly reduce the previously observed large errors. Another novelty proposed in this context is a modification of the second-order strong-stability-preserving Runge-Kutta method that guarantees the preservation of moment realizability in the presence of phase-space diffusion. The third part is concerned with the numerical exploration of the core algorithm of most QBMMs in terms of performance and accuracy. The algorithm consists of, first, computing the recurrence coefficients of the orthogonal polynomials associated with a set of moments, second, solving a symmetric tridiagonal eigenvalue problem to obtain the quadrature nodes and weights, and third, evaluating the integral terms in the moment equations. The results indicate that the first step, computing the recurrence coefficients from the moments, contributes negligibly to the overall computational cost. Instead, the primary focus should be on the fast solution of the eigenvalue problem and, possibly, on the efficient implementation of the moment source term evaluation, which becomes important when second-order processes are considered.
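The two first steps of the core algorithm described above can be made concrete. The following Python sketch inverts a set of raw moments into quadrature nodes and weights via the Wheeler algorithm and a symmetric tridiagonal (Jacobi matrix) eigenvalue problem; the function name and the demo moments are illustrative and not code from the thesis.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

def wheeler(moments):
    """Gaussian quadrature from the first 2n raw moments (Wheeler algorithm).

    Step 1: recurrence coefficients a, b of the orthogonal polynomials
    associated with the moments. Step 2: nodes/weights from the eigenvalue
    problem of the symmetric tridiagonal Jacobi matrix (Golub-Welsch).
    """
    n = len(moments) // 2
    sigma = np.zeros((n + 1, 2 * n))
    sigma[1, :] = moments                 # row 1 holds the raw moments, row 0 stays zero
    a = np.zeros(n)
    b = np.zeros(n)
    a[0] = moments[1] / moments[0]
    for k in range(2, n + 1):
        for l in range(k - 1, 2 * n - k + 1):
            sigma[k, l] = (sigma[k - 1, l + 1]
                           - a[k - 2] * sigma[k - 1, l]
                           - b[k - 2] * sigma[k - 2, l])
        a[k - 1] = (sigma[k, k] / sigma[k, k - 1]
                    - sigma[k - 1, k - 1] / sigma[k - 1, k - 2])
        b[k - 1] = sigma[k, k - 1] / sigma[k - 1, k - 2]
    nodes, vec = eigh_tridiagonal(a, np.sqrt(b[1:]))
    weights = moments[0] * vec[0, :] ** 2  # first eigenvector components squared
    return nodes, weights

# Demo: moments 1, 1/2, 1/3, 1/4 of the uniform measure on [0, 1] recover the
# two-point Gauss-Legendre rule: nodes 0.5 -/+ 1/(2*sqrt(3)), weights 0.5 each.
nodes, weights = wheeler(np.array([1.0, 1 / 2, 1 / 3, 1 / 4]))
print(nodes, weights)
```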
Agent-based modelling and simulation of the rescue chain: a case study in Lusatia
(2022)
A functioning emergency medical service is of central importance to society, since in medical emergencies the health of a casualty depends on the quality of pre-hospital medical care. At the same time, the field faces ever new challenges from socio-economic change and technological progress, which require new approaches to system optimisation. This thesis contains a literature review on planning problems as well as a classification of emergency medical service (EMS) systems. An agent-based simulation model for an emergency-physician-based EMS system is presented and applied to the rescue chain of the urban area of Cottbus, Brandenburg, Germany. The model is parametrised using the underlying dispatch data; a geographic information system (GIS) is employed, emergency calls are generated from the population density with a time-dependent rate, and statistical characteristics of the system are evaluated. The simulation model, implemented in AnyLogic, includes a graphical user interface that displays dispatch details and statistics as a basis for practical applications.
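The call generation with a time-dependent rate described above can be illustrated by sampling a non-homogeneous Poisson process via thinning. The rate function below is invented for demonstration and is not the rate fitted to the Cottbus dispatch data; the actual model is implemented in AnyLogic, not Python.

```python
import numpy as np

rng = np.random.default_rng(42)

def hourly_rate(t_hours):
    """Illustrative time-dependent call rate in calls per hour; the real
    model would fit this to dispatch data and population density."""
    return 2.0 + 1.5 * np.sin(2 * np.pi * (t_hours - 9) / 24)

def simulate_calls(horizon_hours, rate, rate_max):
    """Non-homogeneous Poisson process via thinning (Lewis-Shedler)."""
    t, calls = 0.0, []
    while True:
        t += rng.exponential(1.0 / rate_max)    # candidate from the bounding process
        if t > horizon_hours:
            return np.array(calls)
        if rng.uniform() < rate(t) / rate_max:  # accept with probability rate(t)/rate_max
            calls.append(t)

call_times = simulate_calls(24.0, hourly_rate, rate_max=3.5)
print(f"{len(call_times)} emergency calls in 24 h")
```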
The bachelor thesis deals with the development of efficient methods for classifying the stages of cells. The focus is on so-called deep-learning algorithms, which have proven highly capable in image recognition, among other fields, and can be used to classify large volumes of microscopic cell images in a short time. Ways to optimise such algorithms are presented, with the goal of improving accuracy and memory footprint. The influence of various parameters on the performance of an algorithm was examined and contrasted, several established models were compared with one another, and a selection of common model-optimisation methods was tested. The TensorFlow software library, accessed via the Python programming language, was used. Real application data were provided by the company Medipan.
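As a minimal illustration of the TensorFlow/Python toolchain named in the abstract, the following sketch defines a small convolutional classifier for cell-image patches. The input size, layer widths, and number of stage classes are placeholders, not the configurations compared in the thesis.

```python
import tensorflow as tf

NUM_STAGES = 4  # assumed number of cell-stage classes, purely illustrative

# A small convolutional network mapping grayscale patches to stage probabilities.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_STAGES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.build(input_shape=(None, 64, 64, 1))  # batch of 64x64 grayscale patches
model.summary()
# Training would then call model.fit(images, stage_labels, ...) on real data.
```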
Solving differential equations remains a topic of major interest, due to their appearance in many fields of science and engineering. A classic approach with neural networks builds upon trial solutions, the so-called neural forms. The latter are incorporated in a cost function that is subject to minimisation in order to train the involved neural networks. Neural forms represent general and flexible tools for solving ordinary differential equations, partial differential equations, as well as systems of each. However, the computational approach in general depends strongly on a variety of computational parameters and on the choice of optimisation methods. Studying the solution of a simple but fundamental stiff ordinary differential equation with small feedforward neural networks and first-order optimisation shows that it is possible to identify preferable choices for parameters and methods. The neural network weight initialisation turns out to be a sensitive topic with a major impact on the solution accuracy. In particular, the use of non-random (deterministic) weights partially shows poor performance, but removes a stochastic component. Further research reveals that a new polynomial representation of the neural forms can significantly increase the reliability of a deterministic initialisation (all weights are initially assigned the same value). In order to maintain smaller neural network architectures and still solve the differential equation even on fairly large domains, a new technique called domain segmentation (for initial value problems) is introduced. The solution domain is split into equidistant subdomains, and the above-mentioned collocation polynomial neural forms are solved separately in each domain fragment. At the boundary of any subdomain, a new initial value is provided by the neural forms solution and directly incorporated in the adjacent one. In classic adaptive numerical methods for solving differential equations, the mesh or the domain may be refined or decomposed, respectively, in order to improve numerical accuracy. The subdomain distribution can also be connected with an adaptive refinement: in the new adaptive neural domain refinement algorithm, the neural network training status is combined with an adaptive subdomain size reduction, so that each subdomain is reduced in size until the optimisation reaches a predefined training accuracy. In addition, while the neural networks are small by default, the number of neurons may also be adjusted in an adaptive way. Conditions are introduced to automatically confirm the solution reliability and optimise computational parameters whenever necessary.
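A minimal sketch of the neural-form idea for an initial value problem, written here with PyTorch: the trial solution u(t) = u0 + t*N(t) satisfies the initial condition by construction, and the network N is trained on the ODE residual at collocation points. The test equation and all hyperparameters below are illustrative, not the thesis configuration.

```python
import torch

# Illustrative linear test IVP: u'(t) = -5*u(t), u(0) = 1 on [0, 2].
torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 10), torch.nn.Sigmoid(), torch.nn.Linear(10, 1)
)
u0, lam = 1.0, -5.0
t = torch.linspace(0.0, 2.0, 50).reshape(-1, 1).requires_grad_(True)

opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for step in range(3000):
    opt.zero_grad()
    u = u0 + t * net(t)                      # neural form: u(0) = u0 by construction
    du = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    loss = torch.mean((du - lam * u) ** 2)   # ODE residual at collocation points
    loss.backward()
    opt.step()

print(f"final residual loss: {loss.item():.2e}")
```

Domain segmentation would apply the same training separately on each subdomain, using the neural-form solution at a subdomain boundary as the initial value u0 of the next one.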
In this work, we consider non-reversible multi-scale stochastic processes, described by stochastic differential equations, for which we review theory on the convergence behaviour to equilibrium and on mean first exit times. Relations between these time scales for non-reversible processes are established, and, by resorting to a control-theoretic formulation of the large deviations action functional, even the consideration of hypo-elliptic processes is permitted. The convergence behaviour of the processes is studied in considerable detail, in particular with respect to initial conditions and temperature. Moreover, the behaviour of the conditional and marginal distributions during the relaxation phase is monitored and discussed, as unexpected behaviour is encountered. In the end, this results in the proposal of a data-based partitioning into slow and fast degrees of freedom. In addition, recently proposed techniques promising accelerated convergence to equilibrium are examined, and a connection to appropriate model reduction approaches is made. For specific examples, this leads either to an interesting alternative formulation of the acceleration procedure or to structural insight into the acceleration mechanism. For the model order reduction technique of effective dynamics, which uses conditional expectations, error bounds for non-reversible slow-fast stochastic processes are obtained. A comparison with the reduction method of averaging is undertaken, which, for non-reversible processes, possibly yields different reduced equations. For Ornstein-Uhlenbeck processes, sufficient conditions are derived for the two methods (effective dynamics and averaging) to agree in the infinite time-scale separation regime. Additionally, we provide oblique projections that allow for the sampling of conditional distributions of non-reversible Ornstein-Uhlenbeck processes.
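A slow-fast system of the kind studied here can be illustrated with a two-dimensional Ornstein-Uhlenbeck toy model integrated by Euler-Maruyama; the coefficients and the time-scale separation parameter eps below are invented for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Euler-Maruyama for the slow-fast Ornstein-Uhlenbeck toy system
#   dX = -X dt + dW_1                       (slow degree of freedom)
#   dY = -(Y/eps) dt + sqrt(1/eps) dW_2     (fast degree of freedom)
eps, dt, n_steps = 0.01, 1e-4, 100_000
x, y = 1.0, 0.0
traj = np.empty((n_steps, 2))
sq_dt = np.sqrt(dt)
for k in range(n_steps):
    x += -x * dt + sq_dt * rng.standard_normal()
    y += -(y / eps) * dt + np.sqrt(1.0 / eps) * sq_dt * rng.standard_normal()
    traj[k] = x, y

# Both marginals share the stationary variance 1/2, but the fast component
# relaxes on the O(eps) time scale while the slow one relaxes on O(1).
print("empirical variances (slow, fast):", traj[n_steps // 2:].var(axis=0))
```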
Motivated by computing functionals of high-dimensional, potentially metastable diffusion processes, this thesis studies robustness issues that appear in the numerical approximation of expectation values and their gradients. Since high variances of the corresponding estimators pose a major challenge, we investigate importance sampling of stochastic processes to improve statistical properties and provide novel nonasymptotic bounds on the relative error of the corresponding estimators in terms of their deviation from optimality. Numerical strategies that aim to come close to the optimal sampling strategies can be encompassed in the framework of path space measures, and minimizing suitable divergences between those measures suggests a variational formulation that can be addressed in the spirit of machine learning. A key observation is that while several natural choices of divergences have the same unique minimizer, their finite-sample properties differ vastly. We provide the novel log-variance divergence, which turns out to have favorable robustness properties that we investigate theoretically and apply in the context of path space measures as well as in the context of densities, for instance offering promising applications in Bayesian variational inference.
Aiming for optimal importance sampling of diffusions is (more or less) equivalent to solving Hamilton-Jacobi-Bellman PDEs, and it turns out that our numerical methods can equally be applied to the approximation of rather general high-dimensional semi-linear PDEs. Motivated by stochastic representations of elliptic and parabolic boundary value problems, we refine variational methods based on backward SDEs and provide the novel diffusion loss, which can be related to other state-of-the-art attempts while offering certain numerical advantages.
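In the density setting mentioned above, the log-variance divergence admits a simple Monte Carlo estimator: the sample variance of the log Radon-Nikodym derivative under a reference measure. The following sketch uses two one-dimensional unit-variance Gaussians as an invented example; all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_ratio(x, mu_p, mu_q):
    """log (dP/dQ)(x) for two unit-variance Gaussian densities."""
    return -0.5 * (x - mu_p) ** 2 + 0.5 * (x - mu_q) ** 2

x = rng.normal(loc=0.0, scale=1.0, size=10_000)  # samples from a reference measure

# Monte Carlo estimate of the log-variance divergence: the variance of the
# log Radon-Nikodym derivative under the reference measure. Like the KL
# divergence it vanishes iff P = Q, but its sample version behaves differently.
lv = np.var(log_ratio(x, mu_p=1.0, mu_q=0.5))
print(f"log-variance divergence estimate: {lv:.4f}")
```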
Reliability assessment of pipeline failure factors has become a major concern for industrial firms involved in the transmission of oil and gas, where the requirement to limit these failure factors and their consequences, such as maintenance costs, is particularly significant. Over time, pipelines have been constructed with safety precautions intended to provide a theoretically minimal failure rate over the pipeline's whole service life. This strategy also includes the management of various failure causes as well as routine maintenance to assure the dependability of pipelines during their service life. However, the requirement to reduce the impact of these failure factors on pipelines has bolstered the use of reliability evaluation of failure factors for petroleum pipelines. The purpose of this study is to evaluate the reliability of oil pipelines in Nigeria, with emphasis on proposing operational and technical changes to enhance their resilience against unexpected failures. The thesis uses the regular Poisson, empirical, and compound Poisson processes to describe the number of common failure causes due to corrosion, third-party damage, mechanically induced failure, operational mistakes, and natural hazards in repairable systems. The statistical analysis results serve as the foundation for time-to-failure modeling of petroleum pipelines using Bayesian inference of the three-parameter exponentiated Weibull distribution. The thesis also includes probabilistic reliability methods for assessing the influence of uncertainties in pipelines by comparing the first-order reliability method (FORM) with the second-order reliability method (SORM), demonstrated on five distinct petroleum pipelines. The probability of failure and the reliability index for the five pipelines were calculated, along with the target reliability. Additionally, the influence of specific pipeline design parameters on the probability of failure was analyzed through sensitivity analysis, which helps identify the design factors with the most significant impact on the likelihood of pipeline failure and allows for more informed decisions in pipeline design and maintenance practices. By understanding these influences, it is possible to enhance the overall reliability and safety of the pipeline infrastructure. The framework developed in this thesis can be applied to enhance the performance of existing pipelines and can serve as a foundation for the design and implementation of new pipelines and similar infrastructures.
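As an illustration of the FORM step, the following Python sketch computes the Hasofer-Lind reliability index for a made-up linear limit state in standard normal space; the resistance and load distributions are purely illustrative and are not the pipeline parameters from the thesis.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# FORM sketch: limit state g(u) = resistance - load in standard normal space.
# Means and standard deviations below are invented for demonstration.
def g(u):
    r = 50.0 + 5.0 * u[0]   # resistance ~ N(50, 5^2), e.g. hoop strength
    s = 30.0 + 8.0 * u[1]   # load ~ N(30, 8^2), e.g. operating stress
    return r - s

# Design point: the point on the failure surface g(u) = 0 closest to the origin;
# its distance is the reliability index beta, and P(failure) ~ Phi(-beta).
res = minimize(lambda u: np.dot(u, u), x0=np.array([1.0, 1.0]),
               constraints={"type": "eq", "fun": g})
beta = np.sqrt(res.fun)
print(f"reliability index beta = {beta:.3f}, P(failure) ~ {norm.cdf(-beta):.2e}")
```

For this linear Gaussian limit state the index has the closed form beta = (50 - 30) / sqrt(5^2 + 8^2) ~ 2.12, which the optimization reproduces; SORM would additionally correct for curvature of the failure surface.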