TY  - GEN
A1  - Winkelmann, Stefanie
A1  - Schütte, Christof
A1  - von Kleist, Max
T1  - Markov Control Processes with Rare State Observation: Theory and Application to Treatment Scheduling in HIV-1
N2  - Markov Decision Processes (MDP) or Partially Observable MDPs (POMDP) are used for modelling situations in which the evolution of a process is partly random and partly controllable. These MDP theories allow for computing the optimal control policy for processes that can continuously or frequently be observed, even if only partially. However, they cannot be applied if state observation is very costly and therefore rare (in time). We present a novel MDP theory for rare, costly observations and derive the corresponding Bellman equation. In the new theory, state information can be obtained for a particular cost after certain, rather long time intervals. The resulting information costs enter into the total cost and thus into the optimization criterion. This approach applies to many real-world problems, particularly in the medical context, where the medical condition is examined rather rarely because examination costs are high. At the same time, the approach allows for efficient numerical realization. We demonstrate the usefulness of the novel theory by determining, from the national economic perspective, optimal therapeutic policies for the treatment of the human immunodeficiency virus (HIV) in resource-rich and resource-poor settings. Based on the developed theory and models, we discover that available drugs may not be utilized efficiently in resource-poor settings due to exorbitant diagnostic costs.
T3  - ZIB-Report - 13-34
KW  - information costs
KW  - hidden state
KW  - Bellman equation
KW  - optimal therapeutic policies
KW  - diagnostic frequency
KW  - resource-poor
KW  - resource-rich
Y1  - 2013
U6  - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-41955
SN  - 1438-0064
ER  -
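Note: The report above and the one below both center on a Bellman-type optimality equation in which observation (information) costs enter the cost criterion. The following is only an illustrative sketch of the structure such an equation can take; all symbols (state x, action a, inter-observation time \tau, running cost c, observation cost k, discount rate \beta, transition kernel p_\tau) are chosen here for illustration, and the precise discounted and average-cost equations are derived in the reports themselves:

V(x) \;=\; \min_{a \in A,\; \tau > 0} \left\{ \mathbb{E}_x^{a}\!\left[ \int_0^{\tau} e^{-\beta t}\, c(X_t, a)\, dt \right] \;+\; e^{-\beta \tau} \left( k \;+\; \sum_{y \in S} p_\tau(y \mid x, a)\, V(y) \right) \right\}

Read schematically: running costs accrue until the next observation at time \tau, at which point the fixed information cost k is paid, the state is re-observed, and the action (and the next observation time) can be revised.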
TY  - GEN
A1  - Winkelmann, Stefanie
T1  - Markov Control with Rare State Observation: Average Optimality
N2  - This paper investigates the criterion of long-term average costs for a Markov decision process (MDP) which is not permanently observable. Each observation of the process produces a fixed amount of information costs which enter the considered performance criterion and preclude arbitrarily frequent state testing. Choosing the rare observation times is part of the control procedure. In contrast to the theory of partially observable Markov decision processes, we consider an arbitrary continuous-time Markov process on a finite state space without further restrictions on the dynamics or the type of interaction. Based on the original Markov control theory, we redefine the control model and the average cost criterion for the setting of information costs. We analyze the average cost constant for the case of ergodic dynamics and present an optimality equation which characterizes the optimal choice of control actions and observation times. For this purpose, we construct an equivalent freely observable MDP and translate the well-known results from the original theory to the new setting.
T3  - ZIB-Report - 16-59
KW  - Markov decision process
KW  - partial observability
KW  - average optimality
KW  - information costs
Y1  - 2016
U6  - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-60981
SN  - 1438-0064
ER  -
TY  - GEN
A1  - Heinz, Stefan
A1  - Kaibel, Volker
A1  - Peinhardt, Matthias
A1  - Rambau, Jörg
A1  - Tuchscherer, Andreas
T1  - LP-Based Local Approximation for Markov Decision Problems
N2  - The standard computational methods for computing the optimal value functions of Markov Decision Problems (MDP) require the exploration of the entire state space. This is practically infeasible for applications with huge numbers of states as they arise, e.g., from modeling the decisions in online optimization problems by MDPs. Exploiting column generation techniques, we propose and apply an LP-based method to determine an $\varepsilon$-approximation of the optimal value function at a given state by inspecting only states in a small neighborhood. In the context of online optimization problems, we use these methods to evaluate the quality of concrete policies with respect to given initial states. Moreover, the tools can be used to assess the impact of individual decisions and can thereby support the design of policies.
T3  - ZIB-Report - 06-20
KW  - Markov decision problem
KW  - linear programming
KW  - column generation
Y1  - 2006
U6  - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-9131
ER  -
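Note: The last report builds on the classical exact-LP formulation of discounted MDPs. The following is a minimal sketch of that underlying LP, not the authors' method: for a cost-minimizing MDP, one maximizes the sum of the values V(s) subject to V(s) <= c(s,a) + gamma * sum_{s'} P_a(s,s') V(s') for all state-action pairs. The report's actual contribution, solving such LPs over only a small neighborhood of a given state via column generation, is not reproduced here; the toy data and problem size below are made up for illustration.

# Sketch of the exact-LP formulation underlying LP-based MDP methods.
# All problem data are invented; this is not the report's algorithm.
import numpy as np
from scipy.optimize import linprog

gamma = 0.9                      # discount factor (assumed)
n_states, n_actions = 3, 2       # toy problem size

rng = np.random.default_rng(0)
# P[a][s, s']: transition probabilities for action a, rows sum to 1
P = [rng.dirichlet(np.ones(n_states), size=n_states) for _ in range(n_actions)]
# c[s, a]: immediate costs
c = rng.uniform(0.0, 1.0, size=(n_states, n_actions))

# Constraints V(s) - gamma * (P_a V)(s) <= c(s, a), stacked per action.
A_ub = np.vstack([np.eye(n_states) - gamma * P[a] for a in range(n_actions)])
b_ub = np.concatenate([c[:, a] for a in range(n_actions)])

# Maximize sum_s V(s), i.e. minimize -sum_s V(s); V is unbounded in sign.
res = linprog(-np.ones(n_states), A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * n_states)
print("optimal value function:", res.x)

For the state spaces targeted by the report, this full LP is exactly what becomes infeasible; there, variables and constraints are generated on demand around the state of interest until an $\varepsilon$-guarantee for its value is reached.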