## ZIB-Report

19-14

Branch-and-bound (B&B) is an algorithmic framework for solving NP-hard combinatorial optimization problems. Although several well-designed software frameworks for parallel B&B have been developed over the last two decades, there is little literature on previously intractable combinatorial optimization problem instances being solved to optimality by using such frameworks. The main reason for this limited impact of parallel solvers is that the algorithmic improvements for specific problem types are generally far greater than the performance gains obtained by parallelization. Therefore, in order to solve hard problem instances for the first time, one needs to accelerate state-of-the-art algorithm implementations. In this paper, we present a computational study for solving Steiner tree problems and mixed-integer semidefinite programs in parallel. These state-of-the-art algorithm implementations are based on SCIP and were parallelized via the ug[SCIP-*,*] libraries by adding fewer than 200 lines of glue code. Despite the ease of their parallelization, these solvers have the potential to solve previously intractable instances. We demonstrate the convenience of such a parallelization and present results for previously unsolvable instances from the well-known PUC benchmark set, widely regarded as the most difficult Steiner tree test set in the literature.
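
To illustrate the framework character of B&B, the following is a minimal sketch of a thread-parallel B&B solver for a toy 0/1 knapsack instance: workers draw open subproblems from a shared queue, prune against a shared incumbent, and push child nodes back. All names are ours; in particular, none of this is the ug[SCIP-*,*] API, which instead wraps an existing solver and distributes subtrees across processes.

```python
# Minimal thread-parallel branch-and-bound for 0/1 knapsack (illustrative only).
import threading
from queue import Queue

values  = [10, 13, 4, 8, 7]
weights = [ 5,  6, 3, 4, 2]
capacity = 10
# Sort items by value/weight density so the fractional bound below is valid.
order = sorted(range(len(values)), key=lambda j: values[j] / weights[j], reverse=True)
values  = [values[j]  for j in order]
weights = [weights[j] for j in order]
n = len(values)

best = {"value": 0}
lock = threading.Lock()
work = Queue()

def bound(i, value, room):
    """Dantzig bound: greedily fill the remaining capacity, fractionally
    for the first item that no longer fits."""
    for j in range(i, n):
        if weights[j] <= room:
            room -= weights[j]
            value += values[j]
        else:
            value += values[j] * room / weights[j]
            break
    return value

def worker():
    while True:
        i, value, room = work.get()   # a node: next item, value so far, capacity left
        with lock:
            if value > best["value"]:
                best["value"] = value
            incumbent = best["value"]
        if i < n and bound(i, value, room) > incumbent:
            work.put((i + 1, value, room))                  # branch: skip item i
            if weights[i] <= room:                          # branch: take item i
                work.put((i + 1, value + values[i], room - weights[i]))
        work.task_done()

for _ in range(4):
    threading.Thread(target=worker, daemon=True).start()
work.put((0, 0, capacity))            # root node
work.join()
print("optimal value:", best["value"])   # 21 for this instance
```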

19-15

One of the most fundamental ingredients in mixed-integer nonlinear programming solvers is the well-known McCormick relaxation for a product of two variables x and y over a box-constrained domain. The starting point of this paper is the fact that the convex hull of the graph of xy can be much tighter when computed over a strict, non-rectangular subset of the box. In order to exploit this in practice, we propose to compute valid linear inequalities for the projection of the feasible region onto the x-y space by solving a sequence of linear programs akin to optimization-based bound tightening. These valid inequalities allow us to employ results from the literature to strengthen the classical McCormick relaxation. As a consequence, we obtain a stronger convexification procedure that exploits problem structure and can benefit from supplementary information obtained during the branch-and-bound algorithm, such as an objective cutoff. We complement this with a new bound tightening procedure that efficiently computes the best possible bounds for x, y, and xy over the available projections. Our computational evaluation using the academic solver SCIP shows that the proposed methods are applicable to a large portion of the public test library MINLPLib and help to improve performance significantly.
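
For reference, the McCormick relaxation of w = xy over the box [xl, xu] x [yl, yu] consists of two under- and two over-estimating linear inequalities. The sketch below (our own rendering, not SCIP code) returns them as coefficient triples and shows how the maximal envelope gap, attained at the box center, shrinks when the bounds are tightened, which is exactly the effect the projection-based tightening above aims at.

```python
# McCormick envelope for w = x*y over box bounds x in [xl, xu], y in [yl, yu].
# Each inequality is given as coefficients (a, b, c), meaning
#   w >= a*x + b*y + c   (under-estimators) or
#   w <= a*x + b*y + c   (over-estimators).

def mccormick(xl, xu, yl, yu):
    under = [(yl, xl, -xl * yl),   # w >= yl*x + xl*y - xl*yl
             (yu, xu, -xu * yu)]   # w >= yu*x + xu*y - xu*yu
    over  = [(yl, xu, -xu * yl),   # w <= yl*x + xu*y - xu*yl
             (yu, xl, -xl * yu)]   # w <= yu*x + xl*y - xl*yu
    return under, over

def envelope_gap(xl, xu, yl, yu):
    """Maximum distance between over- and under-estimators, attained at the
    box center; it shrinks as the variable bounds tighten."""
    return (xu - xl) * (yu - yl) / 4.0

print(envelope_gap(0, 4, 0, 4))   # 4.0 over the full box
print(envelope_gap(1, 3, 1, 3))   # 1.0 over a tightened box
```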

19-06

We present a method for the automated segmentation of knee bones and cartilage from magnetic resonance imaging (MRI) that combines a priori knowledge of anatomical shape with Convolutional Neural Networks (CNNs). The proposed approach incorporates 3D Statistical Shape Models (SSMs) as well as 2D and 3D CNNs to achieve a robust and accurate segmentation of even highly pathological knee structures. The shape models and neural networks employed are trained using data from the Osteoarthritis Initiative (OAI) and the MICCAI grand challenge "Segmentation of Knee Images 2010" (SKI10), respectively. We evaluate our method on 40 validation and 50 submission datasets from the SKI10 challenge. For the first time, an accuracy equivalent to the inter-observer variability of human readers is achieved in this challenge. Moreover, the quality of the proposed method is thoroughly assessed using various measures for data from the OAI, i.e. 507 manual segmentations of bone and cartilage, and 88 additional manual segmentations of cartilage. Our method yields sub-voxel accuracy for both OAI datasets. We make the 507 manual segmentations as well as our experimental setup publicly available to further aid research in the field of medical image segmentation. In conclusion, combining localized classification via CNNs with statistical anatomical knowledge via SSMs results in a state-of-the-art segmentation method for knee bones and cartilage from MRI data.
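
As a rough illustration of the CNN part, the toy network below performs per-voxel classification on a 3D patch with PyTorch. It is a deliberately small stand-in, not the architecture from the paper, and the SSM-based regularization of the network output is not shown.

```python
# A minimal 3D CNN for per-voxel classification, sketched with PyTorch.
import torch
import torch.nn as nn

class TinySegNet3D(nn.Module):
    def __init__(self, in_channels=1, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(16, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.classifier = nn.Conv3d(16, num_classes, kernel_size=1)

    def forward(self, x):
        return self.classifier(self.features(x))   # per-voxel class logits

net = TinySegNet3D()
mri = torch.randn(1, 1, 32, 32, 32)                # one 32^3 MRI patch
labels = net(mri).argmax(dim=1)                    # hard segmentation
print(labels.shape)                                # torch.Size([1, 32, 32, 32])
```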

19-13

In this chapter, we describe how to reconstruct three-dimensional anatomy from medical image data and how to build Statistical 3D Shape Models out of many such reconstructions, yielding a new kind of anatomy that not only allows quantitative analysis of anatomical variation but also visual exploration and educational visualization. Future digital anatomy atlases will not only show a static (average) anatomy but also its normal or pathological variation in three or even four dimensions, thus illustrating growth and/or disease progression. Statistical Shape Models (SSMs) are geometric models that describe a collection of semantically similar objects in a very compact way. SSMs represent an average shape of many three-dimensional objects as well as their variation in shape. The creation of SSMs requires a correspondence mapping, which can be achieved, e.g., by parameterization with a corresponding sampling. If a corresponding parameterization over all shapes can be established, variation between individual shape characteristics can be mathematically investigated. We will explain what Statistical Shape Models are and how they are constructed. Extensions of Statistical Shape Models will be motivated for articulated coupled structures. In addition to shape, the appearance of objects will also be integrated into the concept. Appearance is a visual feature independent of shape that depends on observers or imaging techniques. Typical appearances are, for instance, the color and intensity of a visual surface of an object under particular lighting conditions, or measurements of material properties with computed tomography (CT) or magnetic resonance imaging (MRI). A combination of (articulated) statistical shape models with statistical models of appearance leads to articulated Statistical Shape and Appearance Models (a-SSAMs). After giving various examples of SSMs for human organs, skeletal structures, faces, and bodies, we briefly describe clinical applications where such models have been successfully employed. Statistical Shape Models are the foundation for the analysis of anatomical cohort data, where characteristic shapes are correlated to demographic or epidemiologic data. SSMs consisting of several thousands of objects offer, in combination with statistical methods or machine learning techniques, the possibility to identify characteristic clusters, thus being the foundation for advanced diagnostic disease scoring.
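
A minimal sketch of how an SSM is built once correspondence is established: stack the corresponded, aligned shape vectors, take the mean, and obtain the modes of variation by PCA (here via an SVD in NumPy). The function names and toy data are ours.

```python
# Building a Statistical Shape Model by PCA, assuming shapes are already
# brought into correspondence (same number and ordering of 3D points).
import numpy as np

def build_ssm(shapes):
    """shapes: array of shape (n_shapes, n_points * 3), corresponded and aligned."""
    mean = shapes.mean(axis=0)
    # Rows of Vt are the principal modes of shape variation.
    U, s, Vt = np.linalg.svd(shapes - mean, full_matrices=False)
    variances = s**2 / (shapes.shape[0] - 1)
    return mean, Vt, variances

def synthesize(mean, modes, b):
    """New shape instance from mode weights b (one weight per retained mode)."""
    return mean + b @ modes[: len(b)]

rng = np.random.default_rng(0)
shapes = rng.normal(size=(20, 300))        # 20 toy shapes, 100 points each
mean, modes, var = build_ssm(shapes)
# Vary the first mode by +2 standard deviations.
instance = synthesize(mean, modes, np.array([2.0 * np.sqrt(var[0])]))
```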

19-11

We propose a simple and general online method to measure the search progress within the branch-and-bound algorithm, from which we estimate the size of the remaining search tree. We then show how this information can be exploited algorithmically at runtime by designing a restart strategy for mixed-integer programming (MIP) solvers that decides whether to restart the search based on the current estimate of the number of remaining nodes in the tree. We refer to this type of algorithm as clairvoyant.
Our clairvoyant restart strategy outperforms a state-of-the-art solver on a large set of publicly available MIP benchmark instances.
It is implemented in the MIP solver SCIP and will be available in future releases.
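
One simple instance of such an online progress measure, in the spirit of classical tree-weight estimates (not necessarily the estimator studied in the paper): every processed leaf at depth d of a binary tree accounts for 2^-d of the whole tree, and the ratio of visited nodes to accumulated weight forecasts the final tree size.

```python
# Online search-progress measure for a binary branch-and-bound tree.
class TreeSizeEstimator:
    def __init__(self):
        self.visited = 0      # nodes processed so far
        self.weight = 0.0     # fraction of the tree already explored

    def on_node(self, depth, is_leaf):
        self.visited += 1
        if is_leaf:
            self.weight += 2.0 ** (-depth)   # each leaf covers 2**-depth of the tree

    def estimated_total_nodes(self):
        if self.weight == 0.0:
            return float("inf")
        return self.visited / self.weight

    def should_restart(self, budget):
        """Clairvoyant-style trigger: restart if the projected tree size
        exceeds the node budget we are willing to spend."""
        return self.estimated_total_nodes() > budget
```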

19-09

The fiber surface generalizes the popular isosurface to multi-fields, so that pre-images can be visualized as surfaces. As with the isosurface, however, the fiber surface suffers from visual occlusion. We propose to avoid such occlusion by restricting the components to only the relevant ones with a new component-wise flexing algorithm. The approach, the flexible fiber surface, generalizes the manipulation idea found in the flexible isosurface to the fiber surface. The flexible isosurface in its original form, however, relies on the contour tree. For the fiber surface, the corresponding structure is the Reeb space, which is challenging both to compute and to interact with. We thus take a Reeb-free approach, in which the Reeb space is not computed. Under this constraint, we generalize a few selected interactions of the flexible isosurface and discuss the implications of this restriction.
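
As a crude illustration of what a fiber is, the sketch below marks the voxels whose bivariate values lie near a chosen point in range space and splits them into connected components, so that individual components could be selected as in the flexing idea. The paper extracts fiber surfaces geometrically rather than by such thresholding; the helper names are ours.

```python
# Voxel-level approximation of a fiber: the preimage of a point (c1, c2)
# in the range of a bivariate field (f, g), split into components.
import numpy as np
from scipy import ndimage

def fiber_components(f, g, c1, c2, eps):
    mask = (f - c1) ** 2 + (g - c2) ** 2 < eps ** 2
    labels, count = ndimage.label(mask)   # connected components of the fiber
    return labels, count

x, y, z = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
f = x**2 + y**2 + z**2                    # first field
g = x + y                                 # second field
labels, count = fiber_components(f, g, 0.5, 0.0, 0.1)
print(count, "components near the fiber")
```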

17-50

In the atmospheric sciences, the sizes of data sets grow continuously due to increasing resolutions. A central task is the comparison of spatiotemporal fields, to assess different simulations and to compare simulations with observations. A significant information reduction is possible by focusing on geometric-topological features of the fields or on derived meteorological objects. Due to the huge size of the data sets, spatial features have to be extracted in time slices and traced over time. Fields with a chaotic component, i.e. without one-to-one spatiotemporal correspondences, can be compared via statistics of feature properties. Feature extraction, however, requires a clear mathematical definition of the features, which many meteorological objects still lack. Traditionally, object extractions are often heuristic, defined only by the implemented algorithms, and thus not comparable. This work surveys our framework, designed for the efficient development of feature tracking methods and for testing new feature definitions. The framework supports well-established visualization practices and is being used by atmospheric researchers to diagnose and compare data.
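
A deliberately simple instance of the extract-then-track pattern described above (our toy code, not the framework's algorithm): features are connected components above a threshold, and each feature is matched to the component in the next time slice with which it shares the most grid points.

```python
# Overlap-based feature tracking between two time slices.
import numpy as np
from scipy import ndimage

def extract_features(field, threshold):
    labels, count = ndimage.label(field > threshold)
    return labels, count

def track(labels_t, count_t, labels_t1):
    matches = {}
    for feature in range(1, count_t + 1):
        overlap = labels_t1[labels_t == feature]
        overlap = overlap[overlap > 0]
        if overlap.size:
            # Successor = component with the largest grid-point overlap.
            matches[feature] = int(np.bincount(overlap).argmax())
    return matches

rng = np.random.default_rng(1)
field_t = ndimage.gaussian_filter(rng.normal(size=(64, 64)), 4)
field_t1 = np.roll(field_t, 3, axis=0)     # "advected" field one step later
lt, ct = extract_features(field_t, 0.05)
lt1, _ = extract_features(field_t1, 0.05)
print(track(lt, ct, lt1))
```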

19-08

Two essential ingredients of modern mixed-integer programming (MIP) solvers are diving heuristics, which simulate a partial depth-first search in a branch-and-bound search tree, and conflict analysis of infeasible subproblems, which learns valid constraints. So far, these techniques have mostly been studied independently: primal heuristics under the aspect of finding high-quality feasible solutions early during the solving process, and conflict analysis for fathoming nodes of the search tree and improving the dual bound. Here, we combine both concepts in two different ways. First, we develop a diving heuristic that targets the generation of valid conflict constraints from the Farkas dual. We show that, in the primal, this is equivalent to the optimistic strategy of diving towards the best bound with respect to the objective function. Second, we use information derived from conflict analysis to enhance the search of a diving heuristic akin to classical coefficient diving. The computational performance of both methods is evaluated using an implementation in the open-source MIP solver SCIP. Experiments are carried out on publicly available test sets including MIPLIB 2010 and Cor@l.
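
For orientation, a generic diving loop looks roughly as follows. Here solve_lp, score, and the node interface are hypothetical placeholders rather than SCIP API; Farkas diving corresponds to a particular choice of score, and an infeasible LP is exactly the point where conflict analysis can learn a constraint.

```python
# Skeleton of a generic diving heuristic (all interfaces hypothetical).
def dive(node, solve_lp, score, max_depth=50):
    for _ in range(max_depth):
        solution, feasible = solve_lp(node)     # LP relaxation of the node
        if not feasible:
            return None                         # candidate for conflict analysis
        fractional = {j: v for j, v in solution.items()
                      if abs(v - round(v)) > 1e-6}
        if not fractional:
            return solution                     # integral: feasible MIP solution
        # Bound the most promising fractional variable and descend.
        j = max(fractional, key=lambda j: score(j, fractional[j]))
        node = node.with_bound(j, round(fractional[j]))
    return None
```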

19-07

We introduce a concurrent solver for the periodic event scheduling problem (PESP). It combines mixed integer programming techniques, the modulo network simplex method, satisfiability approaches, and a new heuristic based on maximum cuts. Running these components in parallel speeds up the overall solution process. This enables us to significantly improve the current upper and lower bounds for all benchmark instances of the library PESPlib.
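
The concurrency pattern itself can be sketched in a few lines (our illustration, not the solver's code): submit each component on the same instance to its own worker and react whenever one of them finishes with an improved bound. The component functions are placeholders for the MIP, modulo network simplex, SAT, and max-cut components named above.

```python
# Portfolio-style concurrency over independent solver components.
from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait

def run_concurrently(instance, components, timeout=60):
    with ThreadPoolExecutor(max_workers=len(components)) as pool:
        pending = {pool.submit(comp, instance) for comp in components}
        best = None
        while pending:
            done, pending = wait(pending, timeout=timeout,
                                 return_when=FIRST_COMPLETED)
            if not done:
                break                        # global time limit reached
            for future in done:
                result = future.result()     # e.g. (objective, timetable)
                if best is None or result[0] < best[0]:
                    best = result            # keep the improved incumbent
        return best
```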

19-04

To solve optimization problems with parabolic PDE constraints, methods working on the reduced objective functional are often used. They are computationally expensive due to the necessity of solving both the state equation and a backward-in-time adjoint equation to evaluate the reduced gradient in each iteration of the optimization method. In this study, we investigate the use of the parallel-in-time method PFASST in the setting of PDE-constrained optimization. In order to develop an efficient, fully time-parallel algorithm, we discuss different options for applying PFASST to adjoint gradient computation, including the possibility of doing PFASST iterations on both the state and adjoint equations simultaneously. We also explore the additional gains in efficiency from reusing information from previous optimization iterations when solving each equation. Numerical results for both a linear and a nonlinear reaction-diffusion optimal control problem demonstrate the parallel speedup and efficiency of the different approaches.
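
For orientation, the serial baseline that PFASST parallelizes in time consists of a forward state sweep, a backward adjoint sweep, and a gradient assembled from both. The toy problem below is our choice (a scalar linear ODE control problem discretized by explicit Euler, not a PDE) and shows these three steps inside a plain gradient descent.

```python
# Serial reference for reduced-gradient computation:
#   min 0.5*(y(T)-yd)^2 + 0.5*alpha*int u^2   s.t.  y' = -k*y + u,  y(0) = y0.
import numpy as np

k, alpha, y0, yd, T, N = 1.0, 1e-2, 1.0, 2.0, 1.0, 200
dt = T / N
u = np.zeros(N)                      # control on each time interval

def reduced_gradient(u):
    y = np.empty(N + 1)              # forward (state) sweep
    y[0] = y0
    for i in range(N):
        y[i + 1] = y[i] + dt * (-k * y[i] + u[i])
    q = np.empty(N + 1)              # backward (adjoint) sweep: q' = k*q
    q[N] = y[N] - yd
    for i in range(N - 1, -1, -1):
        q[i] = q[i + 1] - dt * k * q[i + 1]
    J = 0.5 * (y[N] - yd) ** 2 + 0.5 * alpha * dt * np.sum(u**2)
    return alpha * u + q[:N], J      # reduced gradient w.r.t. u, objective

for _ in range(100):                 # plain gradient descent on the control
    g, J = reduced_gradient(u)
    u -= 1.0 * g
print("objective:", J)
```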