## 68-XX COMPUTER SCIENCE (For papers involving machine computations and programs in a specific mathematical area, see Section -04 in that area)


SCIP-JACK is a customized, branch-and-cut based solver for Steiner tree and related problems. ug [SCIP-JACK, MPI] extends SCIP-JACK to a massively parallel solver by using the Ubiquity Generator (UG) framework. ug [SCIP-JACK, MPI] was the only solver that could run in a distributed environment at the (latest) 11th DIMACS Challenge in 2014. Furthermore, it could solve three well-known open instances and updated 14 best known solutions to instances from the benchmark library STEINLIB. After the DIMACS Challenge, SCIP-JACK has been considerably improved. However, the improvements were not reflected in ug [SCIP-JACK, MPI]. This paper describes an updated version of ug [SCIP-JACK, MPI], in particular branching on constraints and a customized racing ramp-up. Furthermore, the different stages of the solution process on a supercomputer are described in detail. We also show the latest results on open instances from the STEINLIB.

We consider the use of randomised forward models and log-likelihoods within the Bayesian approach to inverse problems. Such random approximations to the exact forward model or log-likelihood arise naturally when a computationally expensive model is approximated using a cheaper stochastic surrogate, as in Gaussian process emulation (kriging), or in the field of probabilistic numerical methods. We show that the Hellinger distance between the exact and approximate Bayesian posteriors is bounded by moments of the difference between the true and approximate log-likelihoods. Example applications of these stability results are given for randomised misfit models in large data applications and the probabilistic solution of ordinary differential equations.
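
As orientation, the shape of such a stability bound can be written down explicitly. The notation below (prior \(\mu_0\), exact posterior \(\mu\), approximate posterior \(\mu_N\), log-likelihoods \(\Phi\) and \(\Phi_N\)) is generic and the constant is left abstract; it illustrates the type of result described above rather than reproducing the paper's precise statement.

```latex
% Hellinger distance between the exact posterior \mu and the approximate
% posterior \mu_N (both absolutely continuous w.r.t. a common prior \mu_0):
\[
  d_{\mathrm{H}}(\mu, \mu_N)^2
  = \frac{1}{2} \int \left( \sqrt{\frac{d\mu}{d\mu_0}} - \sqrt{\frac{d\mu_N}{d\mu_0}} \right)^{\!2} d\mu_0 .
\]
% Schematic form of the stability result: the posterior error is controlled by
% moments of the difference between the exact log-likelihood \Phi and its
% random approximation \Phi_N,
\[
  d_{\mathrm{H}}(\mu, \mu_N) \;\le\; C
  \left( \mathbb{E}\!\left[ \, \lvert \Phi(u) - \Phi_N(u) \rvert^{2} \right] \right)^{1/2},
\]
% where the expectation runs over the prior on u and the randomness in \Phi_N,
% and C depends on the normalisation constants of the two posteriors.
```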

Mixed integer linear programming (MIP) is a general form to model combinatorial optimization problems and has many industrial applications. The performance of MIP solvers has improved tremendously in the last two decades, and these solvers have been used to solve many real-world problems. However, against the backdrop of modern computer technology, parallelization is of pivotal importance. In this regard, ParaSCIP is the most successful parallel MIP solver in terms of solving previously unsolvable instances from the well-known benchmark instance set MIPLIB by using supercomputers. It solved two instances from MIPLIB2003 and 12 from MIPLIB2010 for the first time to optimality by using up to 80,000 cores on supercomputers. ParaSCIP has been developed by using the Ubiquity Generator (UG) framework, which is a general software package to parallelize any state-of-the-art branch-and-bound based solver. This paper discusses 7 years of progress in parallelizing branch-and-bound solvers with UG.

PIPS-SBB is a distributed-memory parallel solver with a scalable data distribution paradigm. It is designed to solve MIPs with a dual-block angular structure, which is characteristic of deterministic-equivalent Stochastic Mixed-Integer Programs (SMIPs). In this paper, we present two different parallelizations of Branch & Bound (B&B), implementing both as extensions of PIPS-SBB, thus adding an additional layer of parallelism. In the first of the proposed frameworks, PIPS-PSBB, the coordination and load-balancing of the different optimization workers is done in a decentralized fashion. This new framework is designed to ensure that all available cores process the most promising parts of the B&B tree. The second, ug[PIPS-SBB,MPI], is a parallel implementation using the Ubiquity Generator (UG), a universal framework for parallelizing B&B tree search that has been successfully applied to other MIP solvers. We show how leveraging multiple levels of parallelism can potentially improve scaling performance beyond thousands of cores.
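
To make the structure concrete, the deterministic equivalent of a two-stage SMIP has the following schematic form, in which the first-stage variables link otherwise independent scenario blocks (generic notation for illustration; \(p_s\) are scenario probabilities):

```latex
\[
\begin{aligned}
  \min_{x_0,\, x_1,\dots,x_S} \quad
    & c_0^{\top} x_0 \;+\; \sum_{s=1}^{S} p_s\, c_s^{\top} x_s \\
  \text{s.t.} \quad
    & A\, x_0 = b_0, \\
    & T_s\, x_0 + W_s\, x_s = b_s, \qquad s = 1,\dots,S, \\
    & x_0 \in X_0, \;\; x_s \in X_s \ \text{(mixed-integer sets)} .
\end{aligned}
\]
% Only the first-stage variables x_0 appear in every scenario block, giving the
% constraint matrix its dual-block angular shape; the scenario data
% (T_s, W_s, b_s, c_s) can therefore be distributed across processes.
```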

The goal of quantitative photoacoustic tomography (qPAT) is to recover maps of the chromophore distributions from multiwavelength images of the initial pressure. Model-based inversions that incorporate the physical processes underlying the photoacoustic (PA) signal generation represent a promising approach. Monte-Carlo models of the light transport are computationally expensive, but provide accurate predictions of the fluence distribution, especially in the ballistic and quasi-ballistic regimes. Here, we focus on the inverse problem of 3D qPAT of blood oxygenation and investigate the application of the Monte-Carlo method in a model-based inversion scheme. A forward model of the light transport based on the MCX simulator and acoustic propagation modeled by the k-Wave toolbox was used to generate a PA image data set acquired in a tissue phantom over a planar detection geometry. The combination of the optical and acoustic models is shown to account for limited-view artifacts. In addition, the errors in the fluence due to, for example, partial volume artifacts and absorbers immediately adjacent to the region of interest are investigated. To accomplish large-scale inversions in 3D, the number of degrees of freedom is reduced by applying image segmentation to the initial pressure distribution to extract a limited number of regions with homogeneous optical parameters. The absorber concentrations in the tissue phantom were estimated using a coordinate descent parameter search based on the comparison between measured and modeled PA spectra. The estimated relative concentrations obtained with this approach lie within 5% of the known concentrations. Finally, we discuss the feasibility of this approach to recover the blood oxygenation from experimental data.
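
The coordinate descent parameter search can be illustrated with a minimal sketch. Everything below is a generic stand-in: `model_spectra` plays the role of the (expensive) forward model mapping absorber concentrations per segmented region to PA spectra, and the per-coordinate grid search is an assumption about the search strategy, not the authors' implementation.

```python
import numpy as np

def coordinate_descent(measured, model_spectra, c0, lo, hi, n_grid=21, n_sweeps=10):
    """Estimate absorber concentrations by cyclic coordinate descent.

    measured      : array of measured PA spectra (flattened)
    model_spectra : callable, concentrations -> modeled PA spectra (same shape)
    c0            : initial guess for the concentration vector
    lo, hi        : per-coordinate bounds of the search interval
    """
    c = np.asarray(c0, dtype=float).copy()
    misfit = lambda cc: np.sum((model_spectra(cc) - measured) ** 2)
    best = misfit(c)
    for _ in range(n_sweeps):
        improved = False
        for i in range(c.size):              # update one concentration at a time
            grid = np.linspace(lo[i], hi[i], n_grid)
            trials = []
            for v in grid:
                cc = c.copy()
                cc[i] = v
                trials.append(misfit(cc))
            j = int(np.argmin(trials))
            if trials[j] < best:             # accept the best 1-D candidate
                best, c[i] = trials[j], grid[j]
                improved = True
        if not improved:                     # converged: no coordinate improved
            break
    return c, best

# Toy usage with a linear surrogate "forward model" (purely illustrative):
A = np.random.default_rng(0).random((50, 3))
c_true = np.array([0.2, 0.5, 0.3])
measured = A @ c_true
c_est, err = coordinate_descent(measured, lambda c: A @ c,
                                c0=[0.5, 0.5, 0.5], lo=[0, 0, 0], hi=[1, 1, 1])
```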

In this paper, the concepts and design for an efficient information service for mathematical software and further mathematical research data are presented. The publication-based approach and the Web-based approach are the main building blocks of the service and will be discussed. Heuristic methods are used for identification, extraction, and ranking of information about software and other mathematical research data. The methods provide not only information about the research data but also link software and mathematical research data to the scientific context.

An automatic adaptive importance sampling algorithm for molecular dynamics in reaction coordinates (2017)

In this article we propose an adaptive importance sampling scheme for dynamical quantities of high-dimensional complex systems which are metastable. The main idea is to combine a method from molecular dynamics simulation, Metadynamics, with a theorem from stochastic analysis, Girsanov's theorem. The proposed algorithm has two advantages compared to a standard estimator of dynamical quantities: firstly, it is possible to produce estimators with a lower variance and, secondly, we can speed up the sampling. One of the main problems in building importance sampling schemes for metastable systems is to find the metastable region in order to manipulate the potential accordingly. Our method circumvents this problem by using an adapted version of the Metadynamics algorithm and thus creates a non-equilibrium dynamics which is used to sample the equilibrium quantities.
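
The Girsanov reweighting underlying such a scheme can be sketched for a one-dimensional overdamped Langevin system. The double-well potential, the fixed Gaussian bias (a stand-in for a bias built up by Metadynamics) and the observable below are illustrative assumptions; the point is how the likelihood-ratio weight is accumulated along each biased trajectory.

```python
import numpy as np

rng = np.random.default_rng(1)

# Target (unbiased) dynamics: dX = -V'(X) dt + sigma dW, double-well potential.
V_grad    = lambda x: 4.0 * x * (x**2 - 1.0)               # V(x) = (x^2 - 1)^2
# Illustrative bias potential filling the left well (stand-in for Metadynamics).
bias_grad = lambda x: -2.0 * (x + 1.0) * np.exp(-(x + 1.0)**2)
sigma, dt, n_steps, n_traj = 1.0, 1e-3, 5000, 200

obs, weights = [], []
for _ in range(n_traj):
    x, log_w = -1.0, 0.0                                    # start in the left well
    for _ in range(n_steps):
        dW = np.sqrt(dt) * rng.standard_normal()
        # Girsanov weight: u = (target drift - biased drift)/sigma = bias_grad/sigma,
        # evaluated at the current state before the step.
        u = bias_grad(x) / sigma
        log_w += u * dW - 0.5 * u**2 * dt
        # simulate the *biased* dynamics
        x += -(V_grad(x) + bias_grad(x)) * dt + sigma * dW
    obs.append(float(x > 0.0))                              # observable: reached right well
    weights.append(np.exp(log_w))

# Unbiased importance-sampling estimate of P(X_T > 0) under the target dynamics.
w = np.array(weights)
p_hat = np.sum(w * np.array(obs)) / n_traj
```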

Ancient Egyptian papyri are often folded, rolled up or kept as small packages, sometimes even sealed. Physically unrolling or unfolding these packages might severely damage them. We demonstrate a way to get access to the hidden script without physical unfolding by employing computed tomography and mathematical algorithms for virtual unrolling and unfolding. Our algorithmic approaches are combined with manual interaction. This provides the necessary flexibility to enable the unfolding of even complicated and partly damaged papyrus packages. In addition, it allows us to cope with challenges posed by the structure of ancient papyrus, which is rather irregular, compared to other writing substrates like metallic foils or parchment. Unfolding of packages is done in two stages. In the first stage, we virtually invert the physical folding process step by step until the partially unfolded package is topologically equivalent to a scroll or a papyrus sheet folded only along one fold line. To minimize distortions at this stage, we apply the method of moving least squares. In the second stage, the papyrus is simply flattened, which requires the definition of a medial surface. We have applied our software framework to several papyri. In this work, we present the results of applying our approaches to mockup papyri that were either rolled or folded along perpendicular fold lines. In the case of the folded papyrus, our approach represents the first attempt to address the unfolding of such complicated folds.

It is shown how piecewise differentiable functions \(F: \mathbb{R}^n \to \mathbb{R}^m\) that are defined by evaluation programs can be approximated locally by a piecewise linear model based on a pair of sample points \(\check{x}\) and \(\hat{x}\). We show that the discrepancy between function and model at any point \(x\) is of the bilinear order \(O(\|x - \check{x}\|\,\|x - \hat{x}\|)\). This is a little surprising since \(x \in \mathbb{R}^n\) may vary over the whole Euclidean space, and we utilize only two function samples \(\check{F} = F(\check{x})\) and \(\hat{F} = F(\hat{x})\), as well as the intermediates computed during their evaluation. As an application of the piecewise linearization procedure, we devise a generalized Newton's method based on successive piecewise linearization and prove for it sufficient conditions for convergence and convergence rates equaling those of semismooth Newton. We conclude with the derivation of formulas for the numerically stable implementation of the piecewise linearization methods developed here.

Neural circuit mapping is generating datasets of 10,000s of labeled neurons. New computational tools are needed to search and organize these data. We present NBLAST, a sensitive and rapid algorithm for measuring pairwise neuronal similarity. NBLAST considers both position and local geometry, decomposing neurons into short segments; matched segments are scored using a probabilistic scoring matrix defined by the statistics of matches and non-matches. We validated NBLAST on a published dataset of 16,129 single Drosophila neurons. NBLAST can distinguish neuronal types down to the finest level (single identified neurons) without a priori information. Cluster analysis of extensively studied neuronal classes identified new types and unreported topographical features. Fully automated clustering organized the validation dataset into 1052 clusters, many of which map onto previously described neuronal types. NBLAST supports additional query types, including searching neurons against transgene expression patterns. Finally, we show that NBLAST is effective with data from other invertebrates and zebrafish.
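
A drastically simplified version of such a segment-based similarity score can be sketched as follows. The scoring function used here (a Gaussian distance term weighted by the absolute dot product of local tangent directions) is an illustrative assumption; NBLAST itself uses a probabilistic scoring matrix trained on the statistics of matching and non-matching segment pairs.

```python
import numpy as np
from scipy.spatial import cKDTree

def neuron_similarity(query, target, sigma=5.0):
    """Score the similarity of two neurons given as (points, tangents) pairs.

    points   : (N, 3) array of segment midpoints
    tangents : (N, 3) array of unit tangent vectors of the segments
    For every query segment, the nearest target segment is found and scored by
    spatial proximity and tangent alignment; segment scores are summed.
    """
    q_pts, q_dirs = query
    t_pts, t_dirs = target
    tree = cKDTree(t_pts)
    dists, idx = tree.query(q_pts)                             # nearest-neighbour matching
    alignment = np.abs(np.sum(q_dirs * t_dirs[idx], axis=1))   # |cos angle| of tangents
    scores = np.exp(-(dists / sigma) ** 2) * alignment
    return scores.sum()

# Toy usage with random "neurons" (purely illustrative):
rng = np.random.default_rng(0)
def random_neuron(n=200):
    pts = np.cumsum(rng.normal(size=(n, 3)), axis=0)           # a random 3-D path
    dirs = np.gradient(pts, axis=0)
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    return pts, dirs

a, b = random_neuron(), random_neuron()
print(neuron_similarity(a, a), neuron_similarity(a, b))        # self-score >> cross-score
```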

This paper describes how we solved 12 previously unsolved mixed-integer programming (MIP) instances from the MIPLIB benchmark sets. To achieve these results we used an enhanced version of ParaSCIP, setting a new record for the largest scale MIP computation: up to 80,000 cores in parallel on the Titan supercomputer. In this paper we describe the basic parallelization mechanism of ParaSCIP, improvements of the dynamic load balancing and novel techniques to exploit the power of parallelization for MIP solving. We give a detailed overview of computing times and statistics for solving open MIPLIB instances.

Rising traffic in telecommunication networks leads to rising energy costs for the network operators. Meanwhile, increased flexibility of the networking hardware may help to realize load-adaptive operation of the networks to cut operating costs. To meet network operators' concerns over stability, we propose to switch network configurations only a limited number of times per day. We present a method for the integrated computation of optimal switching times and network configurations that alternatingly solves mixed-integer programs and constrained shortest cycle problems in a certain graph. Similarly to the Branch & Bound algorithm, it uses lower and upper bounds on the optimum value and allows for pivoting strategies to guide the computation and avoid the solution of irrelevant subproblems. The algorithm can act as a framework to be adapted and applied to suitable problems of different origin.

Time series classification mimics the human understanding of similarity. When it comes to larger datasets, state-of-the-art classifiers reach their limits in terms of unreasonable training or testing times. One representative example is the 1-nearest-neighbor DTW classifier (1-NN DTW), which is commonly used as the benchmark to compare against and has several shortcomings: it has quadratic time complexity and it degenerates in the presence of noise. To reduce the computational complexity, lower bounding techniques and, more recently, a nearest centroid classifier have been introduced. Still, execution times to classify moderately sized datasets on a single core are in the order of hours. We present our Bag-Of-SFA-Symbols in Vector Space (BOSS VS) classifier, which is robust and accurate due to its invariance to noise, phase shifts, offsets, amplitudes and occlusions. We show that it is as accurate as, while being multiple orders of magnitude faster than, state-of-the-art classifiers. Using BOSS VS allows for mining massive time series datasets and real-time analytics.
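
For reference, the 1-NN DTW baseline mentioned above can be sketched in a few lines; this is a plain quadratic-time dynamic programming implementation without the lower-bounding or warping-window optimizations that practical implementations typically add.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D series (O(len(a)*len(b)))."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            # extend the cheapest of the three possible warping paths
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return np.sqrt(D[n, m])

def one_nn_dtw(train_X, train_y, query):
    """Classify a query series by its nearest training series under DTW."""
    dists = [dtw_distance(x, query) for x in train_X]
    return train_y[int(np.argmin(dists))]
```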

Many optimization problems can be modeled as Mixed Integer Programs (MIPs). In general, MIPs cannot be solved efficiently, since solving MIPs is NP-hard, see, e.g., Schrijver, 2003. Common methods for solving NP-hard problems are branch-and-bound and column generation. In the case of column generation, the original problem is decomposed or reformulated into one or more smaller subproblems, which are easier to solve. Each of these subproblems is solved separately and recurrently, which can be interpreted as solving a sequence of optimization problems.

In this thesis, we consider a sequence of MIPs which differ only in their respective objective functions. Furthermore, we assume that each of these MIPs is solved with a branch-and-bound algorithm. This thesis aims to determine whether the solving process of a given sequence of MIPs can be accelerated by reoptimization. By reoptimization we mean starting the solving process of a MIP of this sequence at a given frontier of a search tree corresponding to another MIP of this sequence.

At the beginning, we introduce an LP-based branch-and-bound algorithm. This algorithm is inspired by the reoptimizing algorithm of Hiller, Klug, and the author of this thesis, 2013. Since most state-of-the-art MIP solvers make decisions based on dual information, which leads to the loss of feasible solutions after changing the objective function, we present a technique to guarantee optimality despite using this information. A decision is based on dual information if it is valid for at least one feasible solution, whereas a decision is based on primal information if it is valid for all feasible solutions. Afterwards, we consider representing the search frontier of the tree by a set of nodes of a given size. We call this the Tree Compression Problem. Moreover, we present a criterion characterizing the similarity of two objective functions. To evaluate our approach to reoptimization, we extend the well-known and well-maintained MIP solver SCIP to an LP-based branch-and-bound framework, introduce two heuristics for solving the Tree Compression Problem, and a primal heuristic which is especially fitted to column generation. Finally, we present computational experiments on several problem classes, e.g., Vertex Coloring and k-Constrained Shortest Path. Our experiments show that a straightforward reoptimization, i.e., without additional heuristics, provides no benefit in general. However, in combination with the techniques and methods presented in this thesis, we can accelerate the solving of a given sequence by a factor of up to 14. For this purpose it is essential to take the differences of the objective functions into account and to restart the reoptimization, i.e., solve the subproblem from scratch, if the objective functions are not similar enough. Finally, we discuss the possibility of parallelizing the solving process of the search frontier at the beginning of each solving process.
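
To make the starting point concrete, a minimal LP-based branch-and-bound loop can be sketched with SciPy's LP solver; this is a generic textbook skeleton (most-fractional branching, depth-first search), not the reoptimizing algorithm developed in the thesis.

```python
import numpy as np
from scipy.optimize import linprog

def branch_and_bound(c, A_ub, b_ub, bounds, int_vars, tol=1e-6):
    """Minimize c^T x s.t. A_ub x <= b_ub, with integrality on int_vars."""
    best_x, best_val = None, np.inf
    stack = [list(bounds)]                           # each node = list of variable bounds
    while stack:
        node_bounds = stack.pop()
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=node_bounds, method="highs")
        if not res.success or res.fun >= best_val:   # infeasible or pruned by bound
            continue
        x = res.x
        frac = [(abs(x[i] - round(x[i])), i) for i in int_vars
                if abs(x[i] - round(x[i])) > tol]
        if not frac:                                 # LP solution is integral: new incumbent
            best_x, best_val = x, res.fun
            continue
        _, i = max(frac)                             # branch on the most fractional variable
        lo, hi = node_bounds[i]
        down, up = list(node_bounds), list(node_bounds)
        down[i] = (lo, np.floor(x[i]))
        up[i] = (np.ceil(x[i]), hi)
        stack.extend([down, up])
    return best_x, best_val

# Toy usage: a small knapsack-like binary program (illustrative only).
c = [-5.0, -4.0, -3.0]                               # maximize 5x1 + 4x2 + 3x3
A_ub, b_ub = [[2.0, 3.0, 1.0]], [4.0]
x, val = branch_and_bound(c, A_ub, b_ub, bounds=[(0, 1)] * 3, int_vars=[0, 1, 2])
```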

Reversible Markov chains are the basis of many applications. However, computing transition probabilities by a finite sampling of a Markov chain can lead to truncation errors. Even if the original Markov chain is reversible, the approximated Markov chain might be non-reversible and will lose important properties, like the real-valued spectrum. In this paper, we show how to find the closest reversible Markov chain to a given transition matrix. It turns out that this matrix can be computed by solving a convex minimization problem.
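
A minimal sketch of such a convex program, using CVXPY and assuming the stationary distribution π is given, looks as follows; the paper's exact formulation, in particular how π is handled and which norm is minimized, may differ.

```python
import numpy as np
import cvxpy as cp

def closest_reversible(T, pi):
    """Find the reversible transition matrix closest to T in Frobenius norm,
    for a prescribed stationary distribution pi (detailed balance enforced)."""
    n = T.shape[0]
    P = cp.Variable((n, n), nonneg=True)
    constraints = [cp.sum(P, axis=1) == 1]                   # row-stochastic
    flux = cp.multiply(pi[:, None], P)                       # flux[i, j] = pi_i * P_ij
    constraints += [flux == flux.T]                          # detailed balance: pi_i P_ij = pi_j P_ji
    problem = cp.Problem(cp.Minimize(cp.norm(P - T, "fro")), constraints)
    problem.solve()
    return P.value

# Toy usage: a slightly non-reversible 3-state chain (illustrative only).
T = np.array([[0.80, 0.15, 0.05],
              [0.10, 0.80, 0.10],
              [0.05, 0.25, 0.70]])
pi = np.array([0.4, 0.4, 0.2])
P_rev = closest_reversible(T, pi)
```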

Tracing microtubule centerlines in serial section electron tomography requires microtubules to be stitched across sections, that is, lines from different sections need to be aligned, endpoints need to be matched at section boundaries to establish a correspondence between neighboring sections, and corresponding lines need to be connected across multiple sections. We present computational methods for these tasks: 1) An initial alignment is computed using a distance compatibility graph. 2) A fine alignment is then computed with a probabilistic variant of the iterative closest points algorithm, which we extended to handle the orientation of lines by introducing a periodic random variable to the probabilistic formulation. 3) Endpoint correspondence is established by formulating a matching problem in terms of a Markov random field and computing the best matching with belief propagation. Belief propagation is not generally guaranteed to converge to a minimum. We show how convergence can be achieved, nonetheless, with minimal manual input. In addition to stitching microtubule centerlines, the correspondence is also applied to transform and merge the electron tomograms. We applied the proposed methods to samples from the mitotic spindle in C. elegans, the meiotic spindle in X. laevis, and sub-pellicular microtubule arrays in T. brucei. The methods were able to stitch microtubules across section boundaries in good agreement with experts’ opinions for the spindle samples. Results, however, were not satisfactory for the microtubule arrays. For certain experiments, such as an analysis of the spindle, the proposed methods can replace manual expert tracing and thus enable the analysis of microtubules over long distances with reasonable manual effort.
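
As a simplified illustration of the endpoint-correspondence step, the sketch below matches endpoints across a section boundary by solving a linear assignment problem on pairwise distances; the actual method formulates the matching as a Markov random field solved with belief propagation and also handles unmatched endpoints, which this toy version ignores.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def match_endpoints(ends_a, ends_b, max_dist=50.0):
    """Match microtubule endpoints of section A to endpoints of section B.

    ends_a, ends_b : (N, 3) and (M, 3) arrays of endpoint coordinates after
                     the sections have been aligned into a common frame.
    Returns a list of (i, j) index pairs whose distance is below max_dist.
    """
    cost = cdist(ends_a, ends_b)                  # pairwise Euclidean distances
    rows, cols = linear_sum_assignment(cost)      # minimum-cost assignment
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] <= max_dist]

# Toy usage: endpoints in B are noisy, shuffled copies of endpoints in A.
rng = np.random.default_rng(0)
A = rng.uniform(0, 100, size=(20, 3))
perm = rng.permutation(20)
B = A[perm] + rng.normal(scale=1.0, size=(20, 3))
pairs = match_endpoints(A, B)                     # recovers (i, perm^{-1}(i)) pairs
```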

We provide an overview of new theoretical results that we obtained while further investigating multiband robust optimization, a new model for robust optimization that we recently proposed to tackle uncertainty in mixed-integer linear programming. This new model extends and refines the classical Gamma-robustness model of Bertsimas and Sim and is particularly useful in the common case of arbitrary asymmetric distributions of the uncertainty. Here, we focus on uncertain 0-1 programs and we analyze their robust counterparts when the uncertainty is represented through a multiband set. Our investigations were inspired by the needs of our industrial partners in the research project ROBUKOM.
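
For context, the classical Gamma-robustness model that multiband robustness extends and refines can be stated for a single uncertain knapsack-type constraint on 0-1 variables as follows (standard textbook form, included only to fix ideas; the multiband model replaces the single deviation band by several bands, each with its own bound on the number of deviating coefficients):

```latex
% Uncertain constraint \sum_j a_j x_j \le b on binary variables x_j, with
% a_j \in [\bar a_j, \bar a_j + \hat a_j] and at most \Gamma coefficients
% deviating simultaneously. The Gamma-robust counterpart is
\[
  \sum_{j \in J} \bar a_j x_j
  \;+\; \max_{\substack{S \subseteq J \\ |S| \le \Gamma}} \sum_{j \in S} \hat a_j\, x_j
  \;\le\; b ,
\]
% which, dualizing the LP relaxation of the inner maximization, becomes the
% compact linear system
\[
  \sum_{j \in J} \bar a_j x_j + \Gamma z + \sum_{j \in J} p_j \le b,
  \qquad
  z + p_j \ge \hat a_j\, x_j, \quad z \ge 0,\; p_j \ge 0 \quad \forall j \in J .
\]
```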

We investigate the Robust Multiperiod Network Design Problem, a generalization of the classical Capacitated Network Design Problem that additionally considers multiple design periods and provides solutions protected against traffic uncertainty. Given the intrinsic difficulty of the problem, which proves challenging even for state-of-the-art commercial solvers, we propose a hybrid primal heuristic based on the combination of ant colony optimization and an exact large neighborhood search. Computational experiments on a set of realistic instances from the SNDlib show that our heuristic can find solutions of extremely good quality with a low optimality gap.

A central assumption in classical optimization is that all the input data of a problem are exact. However, in many real-world problems, the input data are subject to uncertainty. In such situations, neglecting uncertainty may lead to nominally optimal solutions that are actually suboptimal or even infeasible. Robust optimization offers a remedy for optimization under uncertainty by considering only the subset of solutions protected against the data deviations. In this paper, we provide an overview of the main theoretical results of multiband robustness, a new robust optimization model that extends and refines the classical theory introduced by Bertsimas and Sim. After introducing some new results for the special case of pure binary programs, we focus on the harvest scheduling problem and show how multiband robustness can be adopted to tackle the uncertainty affecting the volume of produced timber and grant a reduction in the price of robustness.