TY - GEN
A1 - Heinz, Stefan
A1 - Krumke, Sven
A1 - Megow, Nicole
A1 - Rambau, Jörg
A1 - Tuchscherer, Andreas
A1 - Vredeveld, Tjark
T1 - The Online Target Date Assignment Problem
N2 - Many online problems encountered in real life involve a two-stage decision process: upon arrival of a new request, an irrevocable first-stage decision (the assignment of a specific resource to the request) must be made immediately, while in a second stage certain ``subinstances'' (that is, the instances of all requests assigned to a particular resource) can be solved to optimality (offline) later. We introduce the novel concept of an \emph{Online Target Date Assignment Problem} (\textsc{OnlineTDAP}) as a general framework for online problems of this nature. Requests for the \textsc{OnlineTDAP} become known at certain dates. An online algorithm has to assign a target date to each request, specifying on which date the request should be processed (e.\,g., an appointment with a customer for a washing machine repair). The cost at a target date is given by the \emph{downstream cost}, the optimal cost of processing all requests at that date w.\,r.\,t.\ some fixed downstream offline optimization problem (e.\,g., the cost of an optimal dispatch for service technicians). We provide general competitive algorithms for the \textsc{OnlineTDAP}, independently of the particular downstream problem, when the overall objective is to minimize either the sum or the maximum of all downstream costs. As first basic examples, we analyze the competitive ratios of our algorithms for the particular academic downstream problems of bin packing, nonpreemptive scheduling on identical parallel machines, and routing a traveling salesman.
T3 - ZIB-Report - 05-61
KW - Online Algorithms
KW - Online Target Date Assignment Problem
Y1 - 2005
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-8945
ER -
TY - GEN
A1 - Hiller, Benjamin
A1 - Krumke, Sven
A1 - Saliba, Sleman
A1 - Tuchscherer, Andreas
T1 - Randomized Online Algorithms for Dynamic Multi-Period Routing Problems
N2 - The Dynamic Multi-Period Routing Problem (DMPRP), introduced by Angelelli et al., is a model for a two-stage online-offline routing problem. At the beginning of each time period a set of customers becomes known. Each customer has to be served either in the current time period or in the following one; a postponed customer must be served in the next period. The decision whether to postpone a customer has to be made online. At the end of each time period, an optimal tour for the customers assigned to this period is computed, and this computation can be done offline. The objective is to minimize the total distance traveled over all planning periods, assuming optimal routes for the customers selected in each period. We provide the first randomized online algorithms for the DMPRP, which beat the known lower bounds for deterministic algorithms. For the special case of two planning periods we provide lower bounds on the competitive ratio of any randomized online algorithm against the oblivious adversary. We identify a randomized algorithm that achieves the optimal competitive ratio of $\frac{1+\sqrt{2}}{2}$ for two time periods on the real line. For three time periods, we give a randomized algorithm that is strictly better than any deterministic algorithm.
T3 - ZIB-Report - 09-03
KW - Online-Optimierung
KW - Randomisierte Algorithmen
KW - Zweistufiges Problem
KW - Traveling-Salesman-Problem
KW - online optimization
KW - randomized algorithm
KW - two-stage problem
KW - traveling salesman problem
Y1 - 2009
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-11132
SN - 1438-0064
ER -
TY - THES
A1 - Tuchscherer, Andreas
T1 - Local Evaluation of Policies for Discounted Markov Decision Problems
N2 - Providing realistic performance indicators of online algorithms for a given online optimization problem is a difficult task in general. Due to significant drawbacks of other concepts like competitive analysis, Markov decision problems (MDPs) may yield an attractive alternative whenever reasonable stochastic information about future requests is available. However, the number of states in MDPs emerging from real applications is usually exponential in the original input parameters. Therefore, the standard methods for analyzing policies, i.e., online algorithms in our context, are infeasible. In this thesis we propose a new computational tool to evaluate the behavior of policies for discounted MDPs locally, i.e., depending on a particular initial state. The method is based on a column generation algorithm for approximating the total expected discounted cost of an unknown optimal policy, a concrete policy, or a single action (assuming that actions at all other states are chosen according to an optimal policy). The algorithm determines an $\varepsilon$-approximation by inspecting only relatively small local parts of the total state space. We prove that the number of states required for providing the approximation is independent of the total number of states, which underlines the practicability of the algorithm. The approximations obtained by our algorithm are typically much better than the theoretical bounds obtained by other approaches. We investigate the pricing problem and the structure of the linear programs encountered in the column generation. Moreover, we propose and analyze different extensions of the basic algorithm in order to obtain good approximations quickly. The potential of our analysis tool is demonstrated for discounted MDPs arising from different online optimization problems, namely online bin coloring, online target date assignment, and online elevator control. The results of the experiments are quite encouraging: in most cases our method provides performance indicators for online algorithms that reflect observations made in simulations much better than competitive analysis does. Moreover, the analysis reveals weaknesses of the considered online algorithms. In this way, we developed a new online algorithm for the online bin coloring problem that outperforms existing ones in our analyses and simulations.
KW - Markov decision problem
KW - online optimization
KW - linear programming
KW - column generation
KW - performance guarantees
Y1 - 2010
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-11963
ER -