An estimated 2.7 million new HIV-1 infections occurred in 2010. ‘Treatment-for-prevention’ may substantially reduce HIV-1 transmission. The basic idea is that immediate treatment initiation rapidly decreases the virus burden, which reduces the number of transmittable viruses and thereby the probability of infection. However, HIV inevitably develops drug resistance, which leads to viral rebound and nullifies the effect of ‘treatment-for-prevention’ for the time the resistance remains unrecognized. While timely treatment changes may avert periods of viral rebound, the necessary treatment options and diagnostics may be lacking in resource-constrained settings. In this work, we provide a mathematical platform for comparing different treatment paradigms that can be applied to many medical phenomena. We use this platform to optimize two distinct approaches to the treatment of HIV-1: (i) a diagnostic-guided treatment strategy, based on infrequent and patient-specific diagnostic schedules, and (ii) a pro-active strategy that allows treatment adaptation prior to diagnostic ascertainment. Both strategies are compared with current clinical protocols (standard of care and the HPTN052 protocol) in terms of patient health, economic means and reduction of HIV-1 onward transmission, using South Africa as an example. All therapeutic strategies are assessed using a coarse-grained stochastic model of within-host HIV dynamics, and pseudo-code for solving the respective optimal control problems is provided. Our mathematical model suggests that both optimal strategies (i)-(ii) perform better than the current clinical protocols and than no treatment in terms of economic means, life prolongation and reduction of HIV transmission. The optimal diagnostic-guided strategy requires only rare diagnostics and performs similarly to the optimal pro-active strategy. Our results suggest that ‘treatment-for-prevention’ may be further improved using either of the two analyzed treatment paradigms.
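
The paper provides pseudo-code for its optimal control problems; purely as an illustrative sketch of the coarse-grained stochastic modeling idea, the following Python snippet simulates a hypothetical two-state within-host Markov chain (suppressed vs. unrecognized resistant rebound) under a fixed diagnostic schedule and accumulates diagnostic and rebound-time costs. All states, rates and costs here are invented for illustration and are not the paper's model.

```python
import random

# Hypothetical coarse-grained patient states (invented, not the paper's model):
SUPPRESSED, REBOUND = 0, 1

RESISTANCE_RATE = 1.0 / 500.0   # per day: invented rate of resistance emergence
DIAGNOSTIC_COST = 25.0          # invented cost per diagnostic test
REBOUND_COST = 2.0              # invented cost per day of unrecognized rebound

def simulate(diag_interval, horizon=3650.0, rng=None):
    """One patient trajectory under a fixed diagnostic schedule.

    A diagnostic reveals rebound, after which an immediate regimen switch
    is assumed to restore suppression. Returns the accumulated cost.
    """
    rng = rng or random.Random(0)
    t, state, cost = 0.0, SUPPRESSED, 0.0
    next_test, rebound_since = diag_interval, None
    while t < horizon:
        # exponential waiting time until resistance emerges (memoryless)
        dt = rng.expovariate(RESISTANCE_RATE) if state == SUPPRESSED else float("inf")
        if t + dt < min(next_test, horizon):
            t, state, rebound_since = t + dt, REBOUND, t + dt
        elif next_test < horizon:
            t = next_test
            cost += DIAGNOSTIC_COST
            if state == REBOUND:                 # rebound detected: switch regimen
                cost += REBOUND_COST * (t - rebound_since)
                state, rebound_since = SUPPRESSED, None
            next_test += diag_interval
        else:
            t = horizon
    if state == REBOUND:                         # rebound still undetected at the horizon
        cost += REBOUND_COST * (horizon - rebound_since)
    return cost

# crude Monte Carlo comparison of diagnostic frequencies (in days)
for interval in (90.0, 180.0, 365.0):
    avg = sum(simulate(interval, rng=random.Random(i)) for i in range(200)) / 200
    print(f"test every {interval:.0f} days -> mean cost {avg:.1f}")
```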

Well-mixed stochastic chemical kinetics are properly modeled by the chemical master equation (CME) and associated Markov jump processes in molecule number space. If the reactants are present in large amounts, however, corresponding simulations of the stochastic dynamics become computationally expensive and model reductions are called for. The classical model reduction approach uniformly rescales the overall dynamics to obtain deterministic systems characterized by ordinary differential equations, the well-known mass action reaction rate equations. For systems with multiple scales, there exist hybrid approaches that keep parts of the system discrete while another part is approximated either using Langevin dynamics or deterministically. This paper aims at giving a coherent overview of the different hybrid approaches, focusing on their basic concepts and the relations between them. We derive a novel general description of such hybrid models that allows various forms to be expressed by one type of equation. We also examine to what extent the approaches apply to extensions of the CME for dynamics which do not comply with the central well-mixed condition and require some spatial resolution. A simple but meaningful gene expression system with negative self-regulation is analyzed to illustrate the different approximation qualities of some of the hybrid approaches discussed.
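
As a minimal illustration of the two ends of the spectrum that such an overview covers, the following sketch simulates a toy birth-death system (0 → X at rate k1, X → 0 at rate k2·x; rates invented) both exactly with Gillespie's stochastic simulation algorithm and with the deterministic reaction rate equation. The hybrid approaches discussed in the paper interpolate between such descriptions.

```python
import numpy as np

# Toy birth-death system (invented rates, not from the paper): 0 -> X, X -> 0
k1, k2 = 10.0, 0.1

def ssa(x0, t_end, seed=0):
    """Gillespie's stochastic simulation algorithm for the birth-death system."""
    rng = np.random.default_rng(seed)
    t, x = 0.0, x0
    ts, xs = [t], [x]
    while t < t_end:
        rates = np.array([k1, k2 * x])          # reaction propensities
        total = rates.sum()
        t += rng.exponential(1.0 / total)       # waiting time to the next reaction
        x += 1 if rng.random() < rates[0] / total else -1
        ts.append(t); xs.append(x)
    return np.array(ts), np.array(xs)

def rre(x0, t_end, dt=0.01):
    """Deterministic reaction rate equation dx/dt = k1 - k2*x (explicit Euler)."""
    n = int(t_end / dt)
    xs = np.empty(n + 1); xs[0] = x0
    for i in range(n):
        xs[i + 1] = xs[i] + dt * (k1 - k2 * xs[i])
    return np.linspace(0.0, t_end, n + 1), xs

ts, xs = ssa(0, 100.0)
tr, xr = rre(0.0, 100.0)
print("SSA endpoint:", xs[-1], " RRE endpoint:", xr[-1])  # both near k1/k2 = 100
```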

Accurate modeling and numerical simulation of reaction kinetics is a topic of steady interest. We consider the spatiotemporal chemical master equation (ST-CME) as a model for stochastic reaction-diffusion systems that exhibit properties of metastability. The space of motion is decomposed into metastable compartments, and diffusive motion is approximated by jumps between these compartments. Treating these jumps as first-order reactions, the resulting stochastic system can be simulated by the Gillespie method. We present the theory of Markov state models (MSMs) as a theoretical foundation of this intuitive approach. By means of Markov state modeling, both the number and shape of the compartments and the transition rates between them can be determined. We consider the ST-CME for two reaction-diffusion systems and compare it to more detailed models. Moreover, a rigorous formal justification of the ST-CME by Galerkin projection methods is presented.
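
To make the "diffusive jumps as first-order reactions" idea concrete, here is a small hedged sketch (not one of the paper's two example systems): a single species hops between two metastable compartments with invented transition rates, and each hop and decay event is handled by a plain Gillespie step over compartment-resolved copy numbers.

```python
import numpy as np

# ST-CME toy: one species in two metastable compartments A and B.
# Diffusive hops A <-> B are modeled as first-order "reactions" with invented
# transition rates, alongside a degradation reaction in each compartment.
hop_ab, hop_ba, decay = 0.5, 0.2, 0.05

def st_cme_ssa(n_a, n_b, t_end, seed=0):
    rng = np.random.default_rng(seed)
    t = 0.0
    while t < t_end:
        # propensities: hop A->B, hop B->A, decay in A, decay in B
        props = np.array([hop_ab * n_a, hop_ba * n_b, decay * n_a, decay * n_b])
        total = props.sum()
        if total == 0.0:                 # all molecules degraded
            break
        t += rng.exponential(1.0 / total)
        r = rng.choice(4, p=props / total)
        if r == 0:   n_a -= 1; n_b += 1
        elif r == 1: n_a += 1; n_b -= 1
        elif r == 2: n_a -= 1
        else:        n_b -= 1
    return n_a, n_b

print(st_cme_ssa(100, 0, 50.0))          # final compartment-resolved copy numbers
```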

This paper investigates the criterion of long-term average costs for a Markov decision process (MDP) which is not permanently observable. Each observation of the process incurs a fixed amount of *information costs*, which enter the considered performance criterion and preclude arbitrarily frequent state testing. Choosing the *rare* observation times is part of the control procedure. In contrast to the theory of partially observable Markov decision processes, we consider an arbitrary continuous-time Markov process on a finite state space without further restrictions on the dynamics or the type of interaction. Based on the original Markov control theory, we redefine the control model and the average cost criterion for the setting of information costs. We analyze the constant of average costs for the case of ergodic dynamics and present an optimality equation which characterizes the optimal choice of control actions and observation times. For this purpose, we construct an equivalent freely observable MDP and translate the well-known results from the original theory to the new setting.
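
For orientation, the optimality equation referred to can be compared with the classical average-cost optimality equation of a fully observed MDP, which in its standard discrete-time form reads (generic textbook notation, not taken from the paper):

```latex
% rho: optimal long-term average cost; h: relative value (bias) function;
% c(x,a): one-step cost; p(y|x,a): transition probabilities.
\rho + h(x) \;=\; \min_{a \in A}\Big[\, c(x,a) + \sum_{y} p(y \mid x, a)\, h(y) \Big]
```

The paper's contribution is an analogous equation for the information-cost setting, in which the choice of the next observation time enters alongside the control action.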

Well-mixed stochastic chemical kinetics are properly modeled by the chemical master equation (CME) and associated Markov jump processes in molecule number space. If the reactants are present in large amounts, however, corresponding simulations of the stochastic dynamics become computationally expensive and model reductions are called for. The classical model reduction approach uniformly rescales the overall dynamics to obtain deterministic systems characterized by ordinary differential equations, the well-known mass action reaction rate equations. For systems with multiple scales, there exist hybrid approaches that keep parts of the system discrete while another part is approximated either using Langevin dynamics or deterministically. This paper aims at giving a coherent overview of the different hybrid approaches, focusing on their basic concepts and the relations between them. We derive a novel general description of such hybrid models that allows various forms to be expressed by one type of equation. We also examine to what extent the approaches apply to extensions of the CME for dynamics which do not comply with the central well-mixed condition and require some spatial resolution. A simple but meaningful gene expression system with negative self-regulation is analyzed to illustrate the different approximation qualities of some of the hybrid approaches discussed. In particular, we reveal the cause of error in the case of small-volume approximations.
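
The closing remark about small-volume approximations can be illustrated with a hedged toy comparison (invented rates, not the paper's gene expression system): at low copy numbers, a chemical Langevin approximation of a birth-death process deviates from the exact jump process, for instance by producing excursions toward negative counts that must be clipped.

```python
import numpy as np

# Exact SSA vs. chemical Langevin for a birth-death system at low copy numbers.
k1, k2, t_end = 2.0, 0.5, 20.0          # invented rates; stationary mean k1/k2 = 4
rng = np.random.default_rng(1)

def ssa_endpoint(x):
    t = 0.0
    while True:
        total = k1 + k2 * x
        t += rng.exponential(1.0 / total)
        if t > t_end:
            return x
        x += 1 if rng.random() < k1 / total else -1

def langevin_endpoint(x, dt=0.01):
    for _ in range(int(t_end / dt)):
        drift = k1 - k2 * x
        # one independent noise term per reaction channel (CLE structure)
        noise = np.sqrt(k1) * rng.normal() - np.sqrt(max(k2 * x, 0.0)) * rng.normal()
        x = max(x + dt * drift + np.sqrt(dt) * noise, 0.0)  # clip: CLE can go negative
    return x

exact = [ssa_endpoint(0) for _ in range(500)]
approx = [langevin_endpoint(0.0) for _ in range(500)]
print("SSA mean/var:     ", np.mean(exact), np.var(exact))    # near Poisson(4)
print("Langevin mean/var:", np.mean(approx), np.var(approx))  # deviates at low counts
```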

Many real-world processes can naturally be modeled as systems of interacting agents. However, the long-term simulation of such agent-based models is often intractable when the system becomes too large. In this paper, starting from a stochastic spatio-temporal agent-based model (ABM), we present a reduced model in terms of stochastic PDEs (SPDEs) that describes the evolution of agent number densities for large populations. We discuss the algorithmic details of both approaches; regarding the SPDE model, we apply a finite element discretization in space, which not only ensures efficient simulation but also serves as a regularization of the SPDE. Illustrative examples for the spreading of an innovation among agents are given and used to compare the ABM and SPDE models.
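
As a hedged sketch of the SPDE side (using a simple finite-difference grid in place of the paper's finite element discretization, and an illustrative logistic drift with multiplicative noise rather than the paper's innovation-spreading model), an Euler-Maruyama step for a 1D agent density could look as follows:

```python
import numpy as np

# 1D stochastic reaction-diffusion sketch for an agent density u(x, t);
# all coefficients are invented for illustration.
L_x, n, D, r, sigma = 1.0, 100, 1e-3, 1.0, 0.05
dx, dt, steps = L_x / n, 1e-3, 5000
rng = np.random.default_rng(0)

u = np.zeros(n); u[:5] = 1.0        # "innovation" starts in a small region
for _ in range(steps):
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2   # periodic Laplacian
    noise = sigma * np.sqrt(np.maximum(u, 0.0)) * rng.normal(size=n)
    u += dt * (D * lap + r * u * (1.0 - u)) + np.sqrt(dt / dx) * noise
    u = np.clip(u, 0.0, 1.0)        # crude positivity/boundedness safeguard

print("mean adoption level:", u.mean())
```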

Modeling, simulation and analysis of interacting agent systems is a broad field of research, with existing approaches ranging from informal descriptions of interaction dynamics to more formal, mathematical models.
In this paper, a continuous-time stochastic agent-based model is formulated, the corresponding Markov jump process is defined and its approximation by ordinary and stochastic differential equations (ODEs and SDEs, respectively) is described.
We provide a functional and transparent framework which allows for rigorous analysis, avoids problems of ambiguity and delivers straightforward connections to other modeling approaches.
We demonstrate the advantages of an SDE model for different scenarios of interacting agent systems with medium or large population sizes. In comparison to the ODE limit model, the SDE gives a higher-order approximation of the underlying Markov jump process, both on a pathwise level and with regard to the process's moments. In particular, the SDE approach is able to retain metastability in the dynamics, which is lost in a deterministic ODE description, and to capture the distribution of rare and unlikely extreme events. Here, we apply the theory of large deviations to show consistency of the distributions' remote tails.
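
A minimal sketch of the ODE-vs-SDE comparison (with an invented two-state imitation model standing in for the interacting agent systems studied): the Euler-Maruyama discretized SDE keeps fluctuations of order 1/√N around the deterministic Euler path, which is what allows it to capture metastability and rare-event statistics that the ODE limit discards.

```python
import numpy as np

# Fraction x of "adopter" agents in a population of size N (illustrative rates,
# not the paper's model): adoption by imitation at rate a*x*(1-x), spontaneous
# de-adoption at rate b*x.
a, b, N = 2.0, 0.5, 1000
dt, steps, runs = 0.01, 2000, 200
rng = np.random.default_rng(0)

def drift(x):
    return a * x * (1.0 - x) - b * x

def diffusion(x):                    # SDE noise amplitude is O(1/sqrt(N))
    return np.sqrt(np.maximum(a * x * (1.0 - x) + b * x, 0.0) / N)

# ODE limit (explicit Euler) and SDE approximation (Euler-Maruyama),
# both started from the same initial adopter fraction
x_ode = 0.1
x_sde = np.full(runs, 0.1)
for _ in range(steps):
    x_ode += dt * drift(x_ode)
    x_sde += dt * drift(x_sde) + np.sqrt(dt) * diffusion(x_sde) * rng.normal(size=runs)
    x_sde = np.clip(x_sde, 0.0, 1.0)

print("ODE fixed point:", x_ode)                       # deterministic limit
print("SDE mean/std:   ", x_sde.mean(), x_sde.std())   # fluctuations retained
```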