Two different approaches to parameter estimation (PE) in the context of polymerization are introduced, refined, combined, and applied. The first is classical PE, where one is interested in finding parameters that minimize the distance between the output of a chemical model and experimental data. The second is Bayesian PE, which allows for quantifying parameter uncertainty caused by experimental measurement error and model imperfection. Based on detailed descriptions of the motivation, theoretical background, and methodological aspects of both approaches, their relation is outlined. The main aim of this article is to show how the two approaches complement each other and can be used together to generate a substantial information gain regarding the model and its parameters. Both approaches and their interplay are illustrated in application to polymerization reaction systems. This is the first part of a two-article series on parameter estimation for polymer reaction kinetics, with a focus on theory and methodology; the second part will consider a more complex example.
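The interplay of the two approaches can be made concrete in a minimal sketch. The first-order decay model, the noise level, the flat prior, and the random-walk Metropolis sampler below are illustrative assumptions, not the polymerization models or algorithms of the article:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical toy model: first-order monomer decay M(t) = M0 * exp(-k t).
# The rate constant k is the parameter to be estimated; M0 is assumed known.
def model(k, t, M0=1.0):
    return M0 * np.exp(-k * t)

rng = np.random.default_rng(0)
t_obs = np.linspace(0.0, 5.0, 20)
sigma = 0.02                                  # assumed measurement noise
y_obs = model(0.7, t_obs) + sigma * rng.normal(size=t_obs.size)

# --- Classical PE: minimize the distance between model output and data ---
fit = least_squares(lambda k: model(k[0], t_obs) - y_obs, x0=[0.1])
k_hat = fit.x[0]

# --- Bayesian PE: sample the posterior p(k | data) with random-walk Metropolis ---
def log_post(k):
    if k <= 0.0:
        return -np.inf                        # flat prior on k > 0
    r = model(k, t_obs) - y_obs
    return -0.5 * np.sum((r / sigma) ** 2)    # Gaussian likelihood

samples, k = [], k_hat
lp = log_post(k)
for _ in range(20000):
    k_prop = k + 0.05 * rng.normal()
    lp_prop = log_post(k_prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        k, lp = k_prop, lp_prop
    samples.append(k)
samples = np.array(samples[5000:])            # discard burn-in

print(f"least-squares estimate: {k_hat:.3f}")
print(f"posterior mean +/- std: {samples.mean():.3f} +/- {samples.std():.3f}")
```

The point estimate from the classical fit serves as a natural starting point for the Bayesian sampler, while the posterior spread quantifies how well the data actually constrain the parameter.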
We present a numerical method to model dynamical systems from data. We use the recently introduced method Scalable Probabilistic Approximation (SPA) to project points from a Euclidean space to convex polytopes and represent these projected states of a system in new, lower-dimensional coordinates denoting their position in the polytope. We then introduce a specific nonlinear transformation to construct a model of the dynamics in the polytope and to transform back into the original state space. To overcome the potential loss of information from the projection to a lower-dimensional polytope, we use memory in the sense of the delay-embedding theorem of Takens. By construction, our method produces stable models. On various examples, we illustrate the capacity of the method to reproduce even chaotic dynamics and attractors with multiple connected components.
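The delay-embedding (memory) step can be illustrated with a minimal sketch; SPA itself and the paper's specific nonlinear transformation are not reproduced, and the logistic-map series and affine surrogate model below are illustrative stand-ins:

```python
import numpy as np

def delay_embed(x, m, tau=1):
    """Takens-style delay embedding: rows are (x[t], x[t-tau], ..., x[t-(m-1)tau])."""
    n = len(x) - (m - 1) * tau
    return np.column_stack([x[(m - 1 - j) * tau : (m - 1 - j) * tau + n]
                            for j in range(m)])

# Toy series: the logistic map as a stand-in for projected SPA coordinates.
x = np.empty(500)
x[0] = 0.3
for t in range(499):
    x[t + 1] = 3.9 * x[t] * (1.0 - x[t])

H = delay_embed(x, m=3)                              # memory of depth 3
X = np.column_stack([H[:-1], np.ones(len(H) - 1)])   # affine features
y = x[3:]                                            # next value to predict
coef, *_ = np.linalg.lstsq(X, y, rcond=None)         # crude affine surrogate model
x_pred = X @ coef                                    # one-step predictions
```

Each embedded row augments the current state with its recent history, which is exactly the mechanism used to compensate for information lost in the projection.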
We investigate opinion dynamics based on an agent-based model and are interested in predicting the evolution of the percentages of the entire agent population that share an opinion. Since these opinion percentages can be seen as an aggregated observation of the full system state, i.e., the individual opinions of each agent, we view this problem in the framework of the Mori–Zwanzig projection formalism. More specifically, we show how to estimate a nonlinear autoregressive model (NAR) with memory from data given by a time series of opinion percentages, and discuss its prediction capacities for various specific topologies of the agent interaction network. We demonstrate that the inclusion of memory terms significantly improves the prediction quality on examples with different network topologies.
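How such a NAR model with memory might be estimated from a time series can be sketched as follows; the quadratic feature map, the memory depth, and the synthetic series are illustrative assumptions rather than the model class used in the paper:

```python
import numpy as np

def fit_nar(y, p):
    """Fit a quadratic nonlinear autoregressive (NAR) model with memory depth p.

    The next value is regressed on the last p values and their pairwise
    products; the quadratic ansatz is an illustrative choice.
    """
    def features(window):                     # window = (y_t, ..., y_{t-p+1})
        w = np.asarray(window)
        quad = np.outer(w, w)[np.triu_indices(p)]
        return np.concatenate(([1.0], w, quad))

    X = np.array([features(y[t - p:t][::-1]) for t in range(p, len(y))])
    targets = y[p:]
    theta, *_ = np.linalg.lstsq(X, targets, rcond=None)
    return theta, features

# Toy scalar "opinion percentage" series with slow oscillation
t = np.arange(400)
y = 0.5 + 0.3 * np.sin(0.1 * t) + 0.01 * np.random.default_rng(1).normal(size=400)

theta, features = fit_nar(y, p=5)
# one-step prediction from the last p observed values
y_next = features(y[-5:][::-1]) @ theta
```

Increasing the memory depth p corresponds to retaining more of the Mori–Zwanzig memory term; the paper's finding is that such terms visibly improve prediction quality.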
Mathematical modeling of spatio-temporal population dynamics and application to epidemic spreading
(2021)
Agent-based models (ABMs) are a useful tool for modeling spatio-temporal population dynamics, where many details can be included in the model description. However, their computational cost is very high, and for stochastic ABMs many individual simulations are required to sample quantities of interest. In particular, large numbers of agents render the sampling infeasible. Model reduction to a metapopulation model leads to a significant gain in computational efficiency, while preserving important dynamical properties. Based on a precise mathematical description of spatio-temporal ABMs, we present two different metapopulation approaches (stochastic and piecewise deterministic) and discuss the approximation steps between the different models within this framework. In particular, we show how the stochastic metapopulation model results from a Galerkin projection of the underlying ABM onto a finite-dimensional ansatz space. Finally, we utilize our modeling framework to provide a conceptual model for the spreading of COVID-19 that can be scaled to real-world scenarios.
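A stochastic metapopulation model of this kind can be sketched with a Gillespie-type simulation; the three-patch SIR structure, the rates, and the migration rule below are illustrative assumptions, not the paper's calibrated COVID-19 model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 3-patch SIR metapopulation with illustrative rates.
n_patch = 3
beta, gamma, mu = 0.3, 0.1, 0.05        # infection, recovery, migration rates
S = np.array([990.0, 1000.0, 1000.0])
I = np.array([10.0, 0.0, 0.0])
R = np.zeros(n_patch)

t, t_end, traj = 0.0, 100.0, []
while t < t_end:
    N = S + I + R
    rate_inf = beta * S * I / np.maximum(N, 1)    # within-patch infections
    rate_rec = gamma * I                          # recoveries
    rate_mig = mu * I                             # infected agents change patch
    rates = np.concatenate([rate_inf, rate_rec, rate_mig])
    total = rates.sum()
    if total == 0:
        break
    t += rng.exponential(1.0 / total)             # Gillespie waiting time
    k = rng.choice(rates.size, p=rates / total)   # pick the next event
    patch = k % n_patch
    if k < n_patch:                               # infection event
        S[patch] -= 1; I[patch] += 1
    elif k < 2 * n_patch:                         # recovery event
        I[patch] -= 1; R[patch] += 1
    else:                                         # migration to another patch
        dest = (patch + rng.integers(1, n_patch)) % n_patch
        I[patch] -= 1; I[dest] += 1
    traj.append((t, I.copy()))
```

The state here is the per-patch compartment counts rather than individual agents, which is the source of the computational gain over a full ABM.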
Markov Decision Processes (MDP) or Partially Observable MDPs (POMDP) are used for modelling situations in which the evolution of a process is partly random and partly controllable. These MDP theories allow for computing the optimal control policy for processes that can continuously or frequently be observed, even if only partially. However, they cannot be applied if state observation is very costly and therefore rare (in time). We present a novel MDP theory for rare, costly observations and derive the corresponding Bellman equation. In the new theory, state information can be obtained for a particular cost after certain, rather long time intervals. The resulting information costs enter into the total cost and thus into the optimization criterion. This approach applies to many real-world problems, particularly in the medical context, where the medical condition is examined rather rarely because examination costs are high. At the same time, the approach allows for efficient numerical realization. We demonstrate the usefulness of the novel theory by determining, from the national economic perspective, optimal therapeutic policies for the treatment of the human immunodeficiency virus (HIV) in resource-rich and resource-poor settings. Based on the developed theory and models, we discover that available drugs may not be utilized efficiently in resource-poor settings due to exorbitant diagnostic costs.
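The structure of a Bellman equation of this kind can be sketched as follows; the notation (running cost c, observation cost kappa, discount rate rho) is an illustrative choice and need not match the paper's:

```latex
% Hedged sketch of the structure of such a Bellman equation; c is a running
% cost, \kappa the cost of one examination, \rho a discount rate -- all
% illustrative notation.
\[
  V(x) \;=\; \min_{a,\,\tau}\;
  \mathbb{E}\!\left[
    \int_0^{\tau} e^{-\rho s}\, c(X_s, a)\, ds
    \;+\; e^{-\rho \tau}\bigl(\kappa + V(X_\tau)\bigr)
    \;\middle|\; X_0 = x
  \right]
\]
```

The minimization runs jointly over the control a applied until the next examination and the length tau of the observation interval, so the information cost kappa enters the optimization criterion directly.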
We present the theory of “Markov decision processes (MDP) with rare state observation” and apply it to optimal treatment scheduling and diagnostic testing to mitigate HIV-1 drug resistance development in resource-poor countries. The developed theory assumes that the state of the process is hidden and can only be determined by making an examination. Each examination produces costs which enter into the considered cost functional, so that the resulting optimization problem includes finding optimal examination times. This is a realistic ansatz: in many real-world applications, like HIV-1 treatment scheduling, obtaining information about the disease evolution involves substantial costs, such that examination and control are intimately connected. However, perfect compliance with the optimal strategy can rarely be achieved. This may be particularly true for HIV-1 resistance testing in resource-constrained countries. In the present work, we therefore analyze the sensitivity of the costs with respect to deviations from the optimal examination times, both analytically and for the considered application. We discover continuity of the cost functional with respect to the examination times. For the HIV application, moreover, the sensitivity towards small deviations from the optimal examination rule depends on the disease state. Furthermore, we compare the optimal rare-control strategy to (i) constant control strategies (one action for the remaining time) and to (ii) the permanent control of the original, fully observed MDP. This comparison is done in terms of expected costs and in terms of life prolongation. The proposed rare-control strategy offers a clear benefit over a constant control, stressing the usefulness of medical testing and informed decision making. This indicates that lower-priced medical tests could improve HIV treatment in resource-constrained settings and warrants further investigation.