TY - GEN
A1 - Fujii, Koichi
A1 - Ito, Naoki
A1 - Kim, Sunyoung
A1 - Kojima, Masakazu
A1 - Shinano, Yuji
A1 - Toh, Kim-Chuan
T1 - Challenging Large-Scale Quadratic Assignment Problems
T2 - The Institute of Statistical Mathematics Cooperative Research Report 453, Optimization: Modeling and Algorithms 33, March 2022, "Challenging Large-Scale Quadratic Assignment Problems", pp. 84-92
N2 - The quadratic assignment problem is known to have a weak linear relaxation, and a variety of relaxation techniques have been devised to strengthen it. We introduce one of them, the doubly nonnegative (DNN) relaxation, together with the Newton-bracketing method, a solution method for it that has been actively studied in recent years, and we report on an implementation of a branch-and-bound method based on these techniques as well as on numerical results.
T2 - Solving Large Scale QAPs with DNN-based Branch-and-bound : a progress report
T3 - ZIB-Report - 22-11
Y1 - 2022
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-86779
SN - 1438-0064
ER -
TY - GEN
A1 - Hasler, Tim
A1 - Peters-Kottig, Wolfgang
T1 - Vorschrift oder Thunfisch? – Zur Langzeitverfügbarkeit von Forschungsdaten
N2 - „Ich mache ihm ein Angebot, das er nicht ablehnen kann.” Diese Aussage aus einem gänzlich anderen Kontext lässt sich recht treffend übertragen als Wunsch von Dienstleistern und Zweck von Dienstleistungen für Datenproduzenten im Forschungsdatenmanagement. Zwar wirkt Druck zur Datenübergabe nicht förderlich, die Eröffnung einer Option aber sehr wohl. Im vorliegenden Artikel geht es um das Verständnis der Nachhaltigkeit von Forschung und ihren Daten anhand der Erkenntnisse und Erfahrungen aus der ersten Phase des DFG-Projekts EWIG. [Fn 01] Eine Auswahl von Fallstricken beim Forschungsdatenmanagement wird anhand der Erkenntnisse aus Expertengesprächen und eigenen Erfahrungen beim Aufbau von LZA-Workflows vorgestellt. Erste Konzepte in EWIG zur Datenübertragung aus unterschiedlich strukturierten Datenquellen in die „Langfristige Domäne” werden beschrieben.
N2 - "I'm gonna make him an offer he can't refuse". This quote from a completely different context can be aptly rendered as a statement of service providers as well as the purpose of services for data producers in the field of research data management. Although pressure is not the leverage of choice if you want researchers to deposit their research data in some kind of repository, offering an option does the trick quite well. In this article we present some of the concepts for sustainability of research and its data from the first phase of the project EWIG, funded by the Deutsche Forschungsgemeinschaft. A selection of pitfalls in research data management is presented based on the findings from expert interviews and our own experiences in the construction of LTP workflows. First concepts in EWIG to transfer data from differently structured data sources into the "Permanent Domain" are described.
T3 - ZIB-Report - 13-70
KW - Langzeitverfügbarkeit
KW - Forschungsdatenmanagement
Y1 - 2013
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-43010
UR - http://libreas.eu/ausgabe23/08hasler/
SN - 1438-0064
ER -
TY - GEN
A1 - Shinano, Yuji
T1 - UG - Ubiquity Generator Framework v1.0.0beta
N2 - UG is a generic framework to parallelize branch-and-bound based solvers (e.g., MIP, MINLP, ExactIP) in a distributed or shared memory computing environment. It exploits the powerful performance of state-of-the-art "base solvers", such as SCIP, CPLEX, etc., without the need for base solver parallelization. The UG framework, ParaSCIP (ug[SCIP,MPI]), and FiberSCIP (ug[SCIP,Pthreads]) are available as a beta version. v1.0.0: new documentation and CMake build, generalization of the UG framework, implementation of self-split ramp-up for FiberSCIP and ParaSCIP, better memory and time limit handling.
KW - parallelization framework
KW - branch-and-bound parallelization
KW - integer optimization
Y1 - 2021
U6 - https://doi.org/10.12752/8521
ER -
TY - GEN
A1 - Shinano, Yuji
T1 - UG - Ubiquity Generator Framework v0.9.1
N2 - UG is a generic framework to parallelize branch-and-bound based solvers (e.g., MIP, MINLP, ExactIP) in a distributed or shared memory computing environment. It exploits the powerful performance of state-of-the-art "base solvers", such as SCIP, CPLEX, etc., without the need for base solver parallelization. The UG framework, ParaSCIP (ug[SCIP,MPI]), and FiberSCIP (ug[SCIP,Pthreads]) are available as a beta version. For MIP solving, ParaSCIP and FiberSCIP are well debugged and should be stable. For MINLP solving, they are relatively stable, but not as thoroughly debugged. This release version should handle branch-and-cut approaches where subproblems are defined by variable bounds and also by constraints for ug[SCIP,*] (ParaSCIP and FiberSCIP). Therefore, problem classes other than MIP or MINLP can be handled, but they have not been tested yet. v0.9.1: update of the orbitope cip files.
KW - parallelization framework
KW - branch-and-bound parallelization
KW - integer optimization
Y1 - 2020
U6 - https://doi.org/10.12752/8508
ER -
TY - GEN
A1 - Shinano, Yuji
T1 - The Ubiquity Generator Framework: 7 Years of Progress in Parallelizing Branch-and-Bound
N2 - Mixed integer linear programming (MIP) is a general form to model combinatorial optimization problems and has many industrial applications. The performance of MIP solvers has improved tremendously in the last two decades and these solvers have been used to solve many real-world problems. However, against the backdrop of modern computer technology, parallelization is of pivotal importance. In this regard, ParaSCIP is the most successful parallel MIP solver in terms of solving previously unsolvable instances from the well-known benchmark instance set MIPLIB by using supercomputers. It solved two instances from MIPLIB2003 and 12 from MIPLIB2010 for the first time to optimality by using up to 80,000 cores on supercomputers. ParaSCIP has been developed by using the Ubiquity Generator (UG) framework, which is a general software package to parallelize any state-of-the-art branch-and-bound based solver. This paper discusses 7 years of progress in parallelizing branch-and-bound solvers with UG.
T3 - ZIB-Report - 17-60
KW - Parallelization, Branch-and-bound, Mixed Integer Programming, UG, ParaSCIP, FiberSCIP, ParaXpress, FiberXpress, SCIP-Jack
Y1 - 2017
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-65545
SN - 1438-0064
ER -
TY - GEN
A1 - Dercksen, Vincent J.
A1 - Hege, Hans-Christian
A1 - Oberlaender, Marcel
T1 - The Filament Editor: An Interactive Software Environment for Visualization, Proof-Editing and Analysis of 3D Neuron Morphology
N2 - Neuroanatomical analysis, such as classification of cell types, depends on reliable reconstruction of large numbers of complete 3D dendrite and axon morphologies. At present, the majority of neuron reconstructions are obtained from preparations in a single tissue slice in vitro, thus suffering from cut off dendrites and, more dramatically, cut off axons. In general, axons can innervate volumes of several cubic millimeters and may reach path lengths of tens of centimeters. Thus, their complete reconstruction requires in vivo labeling, histological sectioning and imaging of large fields of view.
Unfortunately, anisotropic background conditions across such large tissue volumes, as well as faintly labeled thin neurites, result in incomplete or erroneous automated tracings and even lead experts to make annotation errors during manual reconstructions. Consequently, tracing reliability represents the major bottleneck for reconstructing complete 3D neuron morphologies. Here, we present a novel set of tools, integrated into a software environment named ‘Filament Editor’, for creating reliable neuron tracings from sparsely labeled in vivo datasets. The Filament Editor allows for simultaneous visualization of complex neuronal tracings and image data in a 3D viewer, proof-editing of neuronal tracings, alignment and interconnection across sections, and morphometric analysis in relation to 3D anatomical reference structures. We illustrate the functionality of the Filament Editor with the example of in vivo labeled axons and demonstrate that for the exemplary dataset the final tracing results after proof-editing are independent of the expertise of the human operator.
T3 - ZIB-Report - 13-75
Y1 - 2013
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-43157
SN - 1438-0064
ER -
TY - GEN
A1 - Wende, Florian
A1 - Steinke, Thomas
T1 - Swendsen-Wang Multi-Cluster Algorithm for the 2D/3D Ising Model on Xeon Phi and GPU
N2 - Simulations of the critical Ising model by means of local update algorithms suffer from critical slowing down. One way to partially compensate for the influence of this phenomenon on the runtime of simulations is using increasingly faster and parallel computer hardware. Another approach is using algorithms that do not suffer from critical slowing down, such as cluster algorithms. This paper reports on the Swendsen-Wang multi-cluster algorithm on the Intel Xeon Phi coprocessor 5110P, the Nvidia Tesla M2090 GPU, and x86 multi-core CPUs. We present shared memory versions of the said algorithm for the simulation of the two- and three-dimensional Ising model. We use a combination of local cluster search and global label reduction by means of atomic hardware primitives. Further, we describe MPI versions of the algorithm for Xeon Phi and CPU. Significant performance improvements over known implementations of the Swendsen-Wang algorithm are demonstrated.
T3 - ZIB-Report - 13-44
KW - Swendsen-Wang Multi-Cluster Algorithm
KW - Ising Model
KW - Xeon Phi
KW - GPGPU
KW - Connected Component Labeling
Y1 - 2013
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-42187
SN - 1438-0064
ER -
TY - GEN
A1 - Hiller, Benjamin
A1 - Vredeveld, Tjark
T1 - Stochastic dominance analysis of Online Bin Coloring algorithms
N2 - This paper proposes a new method for probabilistic analysis of online algorithms. It is based on the notion of stochastic dominance. We develop the method for the online bin coloring problem introduced by Krumke et al. (2008). Using methods for the stochastic comparison of Markov chains, we establish the result that the performance of the online algorithm GreedyFit is stochastically better than the performance of the algorithm OneBin for any number of items processed. This result gives a more realistic picture than competitive analysis and explains the behavior observed in simulations.
T3 - ZIB-Report - 12-42
KW - online algorithms, stochastic dominance, algorithm analysis, Markov chains
Y1 - 2012
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-16502
SN - 1438-0064
ER -
TY - GEN
A1 - Shinano, Yuji
A1 - Achterberg, Tobias
A1 - Berthold, Timo
A1 - Heinz, Stefan
A1 - Koch, Thorsten
A1 - Winkler, Michael
T1 - Solving Previously Unsolved MIP Instances with ParaSCIP on Supercomputers by using up to 80,000 Cores
N2 - Mixed-integer programming (MIP) is arguably among the hardest classes of optimization problems. This paper describes how we solved 21 previously unsolved MIP instances from the MIPLIB benchmark sets. To achieve these results we used an enhanced version of ParaSCIP, setting a new record for the largest scale MIP computation: up to 80,000 cores in parallel on the Titan supercomputer. In this paper, we describe the basic parallelization mechanism of ParaSCIP, improvements of the dynamic load balancing and novel techniques to exploit the power of parallelization for MIP solving. We give a detailed overview of computing times and statistics for solving open MIPLIB instances.
T3 - ZIB-Report - 20-16
KW - Mixed Integer Programming, Parallel processing, Node merging, Racing, ParaSCIP, Ubiquity Generator Framework, MIPLIB
Y1 - 2020
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-78393
SN - 1438-0064
ER -
TY - GEN
A1 - Shinano, Yuji
A1 - Achterberg, Tobias
A1 - Berthold, Timo
A1 - Heinz, Stefan
A1 - Koch, Thorsten
A1 - Winkler, Michael
T1 - Solving Open MIP Instances with ParaSCIP on Supercomputers using up to 80,000 Cores
N2 - This paper describes how we solved 12 previously unsolved mixed-integer programming (MIP) instances from the MIPLIB benchmark sets. To achieve these results we used an enhanced version of ParaSCIP, setting a new record for the largest scale MIP computation: up to 80,000 cores in parallel on the Titan supercomputer. In this paper we describe the basic parallelization mechanism of ParaSCIP, improvements of the dynamic load balancing and novel techniques to exploit the power of parallelization for MIP solving. We give a detailed overview of computing times and statistics for solving open MIPLIB instances.
T3 - ZIB-Report - 15-53
KW - Mixed Integer Programming
KW - Parallel processing
KW - Node merging
KW - Racing
KW - ParaSCIP
KW - Ubiquity Generator Framework
KW - MIPLIB
Y1 - 2015
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-56404
SN - 1438-0064
ER -
TY - GEN
A1 - Fujii, Koichi
A1 - Ito, Naoki
A1 - Kim, Sunyoung
A1 - Kojima, Masakazu
A1 - Shinano, Yuji
A1 - Toh, Kim-Chuan
T1 - Solving Challenging Large Scale QAPs
N2 - We report our progress on the project for solving larger scale quadratic assignment problems (QAPs). Our main approach to solving large scale NP-hard combinatorial optimization problems such as QAPs is a parallel branch-and-bound method efficiently implemented on a powerful computer system using the Ubiquity Generator (UG) framework that can utilize more than 100,000 cores. Lower bounding procedures incorporated in the branch-and-bound method play a crucial role in solving the problems. For a strong lower bounding procedure, we employ the Lagrangian doubly nonnegative (DNN) relaxation and the Newton-bracketing method developed by the authors’ group. In this report, we describe some basic tools used in the project, including the lower bounding procedure and branching rules, and present some preliminary numerical results.
Our next target problem is QAPs with dimension at least 50, as we have succeeded in solving tai30a and sko42 from QAPLIB for the first time.
T3 - ZIB-Report - 21-02
KW - QAP
KW - Parallel Branch-and-Bound
Y1 - 2021
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-81303
SN - 1438-0064
ER -
TY - THES
A1 - Keidel, Stefan
T1 - Snapshots in Scalaris
N2 - One of the biggest obstacles to the practical use of Scalaris, a scalable implementation of a distributed hash table with support for transactions, is the lack of a procedure for capturing a consistent state of the entire system. In this thesis we present a simple protocol that accomplishes this task and that, owing to the approach we have chosen, is easy to implement. As a starting point, we select from a number of "classical" snapshot algorithms a procedure designed by Mattern in 1993, which is based on the algorithm of Lai and Yang. This decision rests on a thorough analysis of the protocols, taking into account the architecture of the existing software. In the next step, we use our complete knowledge of the internals of the transaction system of Scalaris to simplify the procedure with regard to usability and implementation complexity, without weakening the requirements on the captured state. Instead of a loose collection of local states of the individual participating nodes, we can in the end produce one large key-value table as the result, which is consistent, easy to process further, and corresponds to a state the system could once have been in. After implementing the procedure in software, we evaluate the results with respect to their impact on the performance of the overall system and discuss possible further developments.
KW - scalaris
KW - dht
KW - algorithm
Y1 - 2013
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-42282
ER -
TY - GEN
A1 - Özel, M. Neset
A1 - Kulkarni, Abhishek
A1 - Hasan, Amr
A1 - Brummer, Josephine
A1 - Moldenhauer, Marian
A1 - Daumann, Ilsa-Maria
A1 - Wolfenberg, Heike
A1 - Dercksen, Vincent J.
A1 - Kiral, F. Ridvan
A1 - Weiser, Martin
A1 - Prohaska, Steffen
A1 - von Kleist, Max
A1 - Hiesinger, Peter Robin
T1 - Serial synapse formation through filopodial competition for synaptic seeding factors
N2 - Following axon pathfinding, growth cones transition from stochastic filopodial exploration to the formation of a limited number of synapses. How the interplay of filopodia and synapse assembly ensures robust connectivity in the brain has remained a challenging problem. Here, we developed a new 4D analysis method for filopodial dynamics and a data-driven computational model of synapse formation for R7 photoreceptor axons in developing Drosophila brains. Our live data support a 'serial synapse formation' model, where at any time point only a single 'synaptogenic' filopodium suppresses the synaptic competence of other filopodia through competition for synaptic seeding factors. Loss of the synaptic seeding factors Syd-1 and Liprin-α leads to a loss of this suppression, filopodial destabilization and reduced synapse formation, which is sufficient to cause the destabilization of entire axon terminals. Our model provides a filopodial 'winner-takes-all' mechanism that ensures the formation of an appropriate number of synapses.
T3 - ZIB-Report - 19-45
KW - filopodia
KW - growth cone dynamics
KW - brain wiring
KW - 2-photon microscopy
KW - model
Y1 - 2019
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-74397
SN - 1438-0064
ER -
TY - GEN
A1 - Bley, Andreas
A1 - D'Andreagiovanni, Fabio
A1 - Karch, Daniel
T1 - Scheduling technology migration in WDM Networks
N2 - The rapid technological evolution of telecommunication networks demands that service providers regularly update their technology, with the aim of remaining competitive in the marketplace. However, upgrading the technology in a network is not a trivial task. New hardware components need to be installed in the network, and during the installation network connectivity may be temporarily compromised. The Wavelength Division Multiplexing (WDM) technology, whose upgrade is considered here, shares fiber links among several optical connections, and tearing down a single link may disrupt several optical connections at once. When the upgrades involve large parts of a network, typically not all links can be upgraded in parallel, which may lead to an unavoidably longer disruption of some connections. A bad schedule for the overall endeavor, however, can dramatically increase the disconnection time of parts of the network, causing extended service disruption. In this contribution, we study the problem of finding a schedule of the fiber link upgrades that minimizes the total service disruption time. To the best of our knowledge, this problem has not yet been formalized and investigated. The aim of our work is to close this gap by presenting a mathematical optimization model for the problem and an innovative solution algorithm that tackles the intrinsic difficulties of the problem. Computational experience on realistic instances completes our study. Our investigations have been driven by the real needs of DFN, operator of the German National Research and Education Network and our partner in the BMBF research project ROBUKOM (http://www.robukom.de/).
T3 - ZIB-Report - 13-62
KW - Scheduling, Extended Formulations, Network Migration, WDM Networks
Y1 - 2013
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-42654
UR - http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6507677&isnumber=6507670
SN - 1438-0064
ER -
TY - GEN
A1 - Baum, Daniel
A1 - Lindow, Norbert
A1 - Hege, Hans-Christian
A1 - Lepper, Verena
A1 - Siopi, Tzulia
A1 - Kutz, Frank
A1 - Mahlow, Kristin
A1 - Mahnke, Heinz-Eberhard
T1 - Revealing hidden text in rolled and folded papyri
N2 - Ancient Egyptian papyri are often folded, rolled up or kept as small packages, sometimes even sealed. Physically unrolling or unfolding these packages might severely damage them. We demonstrate a way to get access to the hidden script without physical unfolding by employing computed tomography and mathematical algorithms for virtual unrolling and unfolding. Our algorithmic approaches are combined with manual interaction. This provides the necessary flexibility to enable the unfolding of even complicated and partly damaged papyrus packages. In addition, it allows us to cope with challenges posed by the structure of ancient papyrus, which is rather irregular compared to other writing substrates like metallic foils or parchment. Unfolding of packages is done in two stages. In the first stage, we virtually invert the physical folding process step by step until the partially unfolded package is topologically equivalent to a scroll or a papyrus sheet folded only along one fold line.
To minimize distortions at this stage, we apply the method of moving least squares. In the second stage, the papyrus is simply flattened, which requires the definition of a medial surface. We have applied our software framework to several papyri. In this work, we present the results of applying our approaches to mockup papyri that were either rolled or folded along perpendicular fold lines. In the case of the folded papyrus, our approach represents the first attempt to address the unfolding of such complicated folds.
T3 - ZIB-Report - 17-02
KW - unfolding, papyri, computed tomography
Y1 - 2017
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-61826
SN - 1438-0064
ER -
TY - THES
A1 - Witzig, Jakob
T1 - Reoptimization Techniques in MIP Solvers
N2 - Many optimization problems can be modeled as Mixed Integer Programs (MIPs). In general, MIPs cannot be solved efficiently, since solving MIPs is NP-hard; see, e.g., Schrijver, 2003. Common methods for solving NP-hard problems are branch-and-bound and column generation. In the case of column generation, the original problem is decomposed or reformulated into one or more smaller subproblems, which are easier to solve. Each of these subproblems is solved separately and recurrently, which can be interpreted as solving a sequence of optimization problems. In this thesis, we consider a sequence of MIPs which differ only in their respective objective functions. Furthermore, we assume that each of these MIPs is solved with a branch-and-bound algorithm. This thesis aims to determine whether the solving process of a given sequence of MIPs can be accelerated by reoptimization. By reoptimization we mean starting the solving process of a MIP of this sequence at a given frontier of a search tree corresponding to another MIP of this sequence. At the beginning, we introduce an LP-based branch-and-bound algorithm. This algorithm is inspired by the reoptimizing algorithm of Hiller, Klug, and the author of this thesis, 2013. Since most state-of-the-art MIP solvers make decisions based on dual information, which leads to the loss of feasible solutions after changing the objective function, we present a technique to guarantee optimality despite using this information. A decision is based on dual information if it is valid for at least one feasible solution, whereas a decision is based on primal information if it is valid for all feasible solutions. Afterwards, we consider representing the search frontier of the tree by a set of nodes of a given size. We call this the Tree Compression Problem. Moreover, we present a criterion characterizing the similarity of two objective functions. To evaluate our reoptimization approach, we extend the well-known and well-maintained MIP solver SCIP to an LP-based branch-and-bound framework, introduce two heuristics for solving the Tree Compression Problem, and add a primal heuristic which is especially fitted to column generation. Finally, we present computational experiments on several problem classes, e.g., Vertex Coloring and k-Constrained Shortest Path. Our experiments show that a straightforward reoptimization, i.e., without additional heuristics, provides no benefit in general. However, in combination with the techniques and methods presented in this thesis, we can accelerate the solving of a given sequence by a factor of up to 14.
For this purpose, it is essential to take the differences between the objective functions into account and to restart the reoptimization, i.e., to solve the subproblem from scratch, if the objective functions are not similar enough. Finally, we discuss the possibility of parallelizing the processing of the search frontier at the beginning of each solving process.
Y1 - 2014
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-54067
ER -
TY - GEN
A1 - Lie, Han Cheng
A1 - Sullivan, T. J.
A1 - Teckentrup, Aretha
T1 - Random forward models and log-likelihoods in Bayesian inverse problems
T2 - SIAM/ASA Journal on Uncertainty Quantification
N2 - We consider the use of randomised forward models and log-likelihoods within the Bayesian approach to inverse problems. Such random approximations to the exact forward model or log-likelihood arise naturally when a computationally expensive model is approximated using a cheaper stochastic surrogate, as in Gaussian process emulation (kriging), or in the field of probabilistic numerical methods. We show that the Hellinger distance between the exact and approximate Bayesian posteriors is bounded by moments of the difference between the true and approximate log-likelihoods. Example applications of these stability results are given for randomised misfit models in large data applications and the probabilistic solution of ordinary differential equations.
T3 - ZIB-Report - 18-03
KW - Bayesian inverse problem
KW - random likelihood
KW - surrogate model
KW - posterior consistency
KW - probabilistic numerics
KW - uncertainty quantification
KW - randomised misfit
Y1 - 2018
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-66324
SN - 1438-0064
VL - 6
IS - 4
SP - 1600
EP - 1629
ER -
TY - GEN
A1 - Hiller, Benjamin
A1 - Vredeveld, Tjark
T1 - Probabilistic alternatives for competitive analysis
N2 - In the last 20 years, competitive analysis has become the main tool for analyzing the quality of online algorithms. Despite this, competitive analysis has also been criticized: it sometimes cannot discriminate between algorithms that exhibit significantly different empirical behavior, or it even favors an algorithm that is worse from an empirical point of view. Therefore, there have been several approaches to circumvent these drawbacks. In this survey, we discuss probabilistic alternatives for competitive analysis.
T3 - ZIB-Report - 11-55
KW - online algorithms
KW - probabilistic analysis
KW - competitive analysis
KW - survey
Y1 - 2012
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-15131
SN - 1438-0064
ER -
TY - GEN
A1 - Griewank, Andreas
A1 - Streubel, Tom
A1 - Lehmann, Lutz
A1 - Hasenfelder, Richard
A1 - Radons, Manuel
T1 - Piecewise linear secant approximation via Algorithmic Piecewise Differentiation
N2 - It is shown how piecewise differentiable functions F : R^n → R^m that are defined by evaluation programs can be approximated locally by a piecewise linear model based on a pair of sample points x̌ and x̂. We show that the discrepancy between function and model at any point x is of the bilinear order O(||x − x̌|| ||x − x̂||). This is a little surprising since x ∈ R^n may vary over the whole Euclidean space, and we utilize only two function samples F̌ = F(x̌) and F̂ = F(x̂), as well as the intermediates computed during their evaluation.
As an application of the piecewise linearization procedure we devise a generalized Newton’s method based on successive piecewise linearization and prove sufficient conditions for its convergence, with convergence rates equaling those of semismooth Newton. We conclude with the derivation of formulas for the numerically stable implementation of the piecewise linearization methods developed here.
T3 - ZIB-Report - 16-54
KW - Automatic differentiation
KW - Computational graph
KW - Lipschitz continuity
KW - Generalized Hermite interpolation
KW - ADOL-C
Y1 - 2016
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-61642
SN - 1438-0064
ER -
TY - GEN
A1 - Wiebel, Alexander
A1 - Vos, Frans M.
A1 - Hege, Hans-Christian
T1 - Perception-Oriented Picking of Structures in Direct Volumetric Renderings
N2 - Radiologists from all application areas are trained to read slice-based visualizations of 3D medical image data. Despite the numerous examples of sophisticated three-dimensional renderings, especially all variants of direct volume rendering, such methods are often considered not very useful by radiologists, who prefer slice-based visualization. Just recently there have been attempts to bridge this gap between 2D and 3D renderings. These attempts include specialized techniques for volume picking that result in repositioning slices. In this paper, we present a new volume picking technique that, in contrast to previous work, does not require pre-segmented data or metadata. The positions picked by our method are based solely on the data itself, the transfer function and, most importantly, on the way the volumetric rendering is perceived by viewers. To demonstrate the usefulness of the proposed method, we apply it to automatically reposition slices in an abdominal MRI scan, a data set from a flow simulation and a number of other volumetric scalar fields. Furthermore, we discuss how the method can be implemented in combination with various volumetric rendering techniques.
T3 - ZIB-Report - 11-45
KW - DVR
KW - picking
KW - pointing
KW - direct volume rendering
Y1 - 2011
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-14343
SN - 1438-0064
ER -