
Markov Control Processes with Rare State Observation: Theory and Application to Treatment Scheduling in HIV-1

Please always cite this URN: urn:nbn:de:0297-zib-41955
Markov Decision Processes (MDP) or Partially Observable MDPs (POMDP) are used to model situations in which the evolution of a process is partly random and partly controllable. These MDP theories allow one to compute the optimal control policy for processes that can be observed continuously or frequently, even if only partially. However, they cannot be applied if state observation is very costly and therefore rare (in time). We present a novel MDP theory for rare, costly observations and derive the corresponding Bellman equation. In the new theory, state information can be obtained for a particular cost after certain, rather long time intervals. The resulting information costs enter into the total cost and thus into the optimization criterion. This approach applies to many real-world problems, particularly in the medical context, where the medical condition is examined rather rarely because examination costs are high. At the same time, the approach allows for efficient numerical realization. We demonstrate the usefulness of the novel theory by determining, from the national economic perspective, optimal therapeutic policies for the treatment of the human immunodeficiency virus (HIV) in resource-rich and resource-poor settings. Based on the developed theory and models, we discover that available drugs may not be utilized efficiently in resource-poor settings due to exorbitant diagnostic costs.
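
To give a sense of the kind of fixed-point relation such a theory leads to, the following is a hedged sketch only, not the report's exact formulation: writing X_t for the controlled Markov process, a for the control, c for the running cost rate, \beta for the discount rate, C_{\mathrm{obs}} for the cost of one state observation, and \tau for the chosen waiting time until the next (costly) observation, a Bellman equation for rare observations takes roughly the form

    V(x) = \min_{a,\,\tau} \left\{ \mathbb{E}\!\left[ \int_0^{\tau} e^{-\beta t}\, c(X_t, a)\, dt \,\middle|\, X_0 = x \right] + e^{-\beta \tau} \left( C_{\mathrm{obs}} + \mathbb{E}\!\left[ V(X_\tau) \,\middle|\, X_0 = x \right] \right) \right\}.

Read this way, the controller pays the running cost accumulated until the next diagnosis plus the diagnostic cost C_{\mathrm{obs}}, and then continues optimally from the newly observed state; the minimization runs jointly over the control applied in the meantime and the time until the next observation, so the observation (information) cost enters the optimization criterion as described in the abstract.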


Metadata
Authors: Stefanie Winkelmann, Christof Schütte, Max von Kleist
Document type: ZIB-Report
Free keywords / tags: Bellman equation; diagnostic frequency; hidden state; information costs; optimal therapeutic policies; resource-poor; resource-rich
MSC classification: 49-XX CALCULUS OF VARIATIONS AND OPTIMAL CONTROL; OPTIMIZATION [See also 34H05, 34K35, 65Kxx, 90Cxx, 93-XX] / 49Nxx Miscellaneous topics / 49N30 Problems with incomplete information [See also 93C41]
60-XX PROBABILITY THEORY AND STOCHASTIC PROCESSES (For additional applications, see 11Kxx, 62-XX, 90-XX, 91-XX, 92-XX, 93-XX, 94-XX) / 60Jxx Markov processes / 60J27 Continuous-time Markov processes on discrete state spaces
60-XX PROBABILITY THEORY AND STOCHASTIC PROCESSES (For additional applications, see 11Kxx, 62-XX, 90-XX, 91-XX, 92-XX, 93-XX, 94-XX) / 60Jxx Markov processes / 60J28 Applications of continuous-time Markov processes on discrete state spaces
90-XX OPERATIONS RESEARCH, MATHEMATICAL PROGRAMMING / 90Cxx Mathematical programming [See also 49Mxx, 65Kxx] / 90C40 Markov and semi-Markov decision processes
93-XX SYSTEMS THEORY; CONTROL (For optimal control, see 49-XX) / 93Bxx Controllability, observability, and system structure / 93B07 Observability
93-XX SYSTEMS THEORY; CONTROL (For optimal control, see 49-XX) / 93Exx Stochastic systems and control / 93E20 Optimal stochastic control
Date of first publication: 07.08.2013
Series (volume number): ZIB-Report (13-34)
ISSN:1438-0064
Publisher version: Appeared in Communications in Mathematical Sciences 12 (2014), 859-877
DOI:https://doi.org/10.4310/CMS.2014.v12.n5.a4