A robust optimal design method of energy supply systems under uncertain energy demands has been proposed using a mixed-integer linear model for constituent equipment. However, this method takes a long computation time, and thus it can be applied only to small-scale problems. In this paper, a hierarchical optimization method is applied to two types of optimization problems for evaluating robustness to solve them efficiently. In a case study, the proposed method is applied to a cogeneration system with a complex configuration, and the validity and effectiveness of the method are ascertained.
A robust optimal design method of energy supply systems under uncertain energy demands has been proposed using a mixed-integer linear model for constituent equipment. A robust optimal design problem has been formulated as a three-level min-max-min optimization problem by expressing uncertain energy demands by intervals, evaluating the robustness in a performance criterion based on the minimax regret criterion, and considering hierarchical relationships among design variables, uncertain energy demands, and operation variables. Since this problem must be solved by a special algorithm and is too difficult to solve even using a commercial solver, a hierarchical optimization approach has been applied to solve it, but its application has been limited to small-scale toy problems. In this paper, several strategies are introduced into the hierarchical optimization approach to enhance its computational efficiency, with the aim of applying the approach to large-scale practical problems. In a case study, the proposed approach is applied to the robust optimal design of a cogeneration system with a complex configuration, and the validity and effectiveness of the method are ascertained.
To attain the highest performance of energy supply systems, it is necessary to determine design specifications optimally in consideration of operational strategies corresponding to seasonal and hourly variations in energy demands. A hierarchical mixed-integer linear programming method has been proposed to solve such an optimal design problem efficiently. In this paper, a model reduction method that clusters periods with the k-medoids method is applied to the relaxed optimal design problem at the upper level. Through a case study, it is clarified how effective the proposed method is in enhancing the computational efficiency for a large-scale optimal design problem.
To attain the highest performance of energy supply systems, it is necessary to determine design specifications optimally in consideration of operational strategies corresponding to seasonal and hourly variations in energy demands. Mixed-integer linear programming (MILP) methods have been applied widely to such optimal design problems. A hierarchical MILP method has been proposed to solve the problems very efficiently. In addition, by utilizing features of the hierarchical MILP method, a model reduction method based on clustering periods according to the optimal operational strategies of equipment has been proposed to search design solution candidates efficiently in the relaxed optimal design problem at the upper level. In this paper, these methods are applied to the multiobjective optimal design of a cogeneration system, considering the annual total cost and primary energy consumption as the objective functions to be minimized. Through a case study, it turns out that the model reduction by operation-based time-period clustering is effective in terms of computational efficiency when importance is given to the first objective function, but not when importance is given to the second objective function.
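The period-clustering idea behind this model reduction can be illustrated with a plain PAM-style k-medoids loop on daily demand profiles. The sketch below is illustrative only (random data, hypothetical cluster count), not the authors' implementation:

```python
import numpy as np

def k_medoids(X, k, n_iter=100, seed=0):
    """Plain PAM-style k-medoids on the rows of X (one row per period)."""
    rng = np.random.default_rng(seed)
    # pairwise distances between period profiles
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    medoids = rng.choice(len(X), size=k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(D[:, medoids], axis=1)      # nearest medoid per period
        new_medoids = medoids.copy()
        for j in range(k):
            members = np.where(labels == j)[0]
            # member minimizing the total distance within its cluster
            new_medoids[j] = members[np.argmin(D[np.ix_(members, members)].sum(axis=1))]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return medoids, labels

profiles = np.random.rand(365, 24)          # 365 daily profiles of 24 hourly demands
medoids, labels = k_medoids(profiles, k=8)  # 8 representative "typical days"
weights = np.bincount(labels, minlength=8)  # days represented by each typical day
```

Each medoid then stands in for all periods of its cluster, weighted by the cluster size, which shrinks the operational part of the design model.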
From Graphs to Hypergraphs
(2020)
The historical importance of ancient manuscripts is unique since they provide information about the heritage of ancient cultures. Often texts are hidden in rolled or folded documents. Due to recent improvements in sensitivity and resolution, spectacular disclosures of rolled hidden texts were possible by X-ray tomography. However, revealing text on folded manuscripts is even more challenging. Manual unfolding is often too risky in view of the fragile condition of fragments, as it can lead to the total loss of the document. X-ray tomography allows for virtual unfolding and enables non-destructive access to hidden texts. We have recently demonstrated the procedure and tested unfolding algorithms on a mockup sample. Here, we present results on unfolding ancient papyrus packages from the papyrus collection of the Musée du Louvre, among them objects folded along approximately orthogonal folding lines. In one of the packages, the first identification of a word was achieved, the Coptic word for “Lord”.
One of the most fundamental ingredients in mixed-integer nonlinear programming solvers is the well-known McCormick relaxation for a product of two variables x and y over a box-constrained domain. The starting point of this paper is the fact that the convex hull of the graph of xy can be much tighter when computed over a strict, non-rectangular subset of the box. In order to exploit this in practice, we propose to compute valid linear inequalities for the projection of the feasible region onto the x-y-space by solving a sequence of linear programs akin to optimization-based bound tightening. These valid inequalities allow us to employ results from the literature to strengthen the classical McCormick relaxation. As a consequence, we obtain a stronger convexification procedure that exploits problem structure and can benefit from supplementary information obtained during the branch-and-bound algorithm such as an objective cutoff. We complement this by a new bound tightening procedure that efficiently computes the best possible bounds for x, y, and xy over the available projections. Our computational evaluation using the academic solver SCIP exhibits that the proposed methods are applicable to a large portion of the public test library MINLPLib and help to improve performance significantly.
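For reference, the classical McCormick relaxation referred to above consists of four linear inequalities; over the full box $[\underline{x},\overline{x}] \times [\underline{y},\overline{y}]$ they describe the convex hull of the graph of $w = xy$:

```latex
w \ge \underline{x}\,y + \underline{y}\,x - \underline{x}\,\underline{y}, \qquad
w \ge \overline{x}\,y + \overline{y}\,x - \overline{x}\,\overline{y},\\
w \le \overline{x}\,y + \underline{y}\,x - \overline{x}\,\underline{y}, \qquad
w \le \underline{x}\,y + \overline{y}\,x - \underline{x}\,\overline{y}.
```

The paper's observation is that over a strict, non-rectangular subset of this box the convex hull can be strictly tighter, which the projection-based valid inequalities exploit.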
We present the tamper-resistant broadcast abstraction of the Bitcoin blockchain, and show how it can be used to implement tamper-resistant replicated state machines. The tamper-resistant broadcast abstraction provides functionality to broadcast, deliver, and verify messages. The tamper-resistant property ensures 1) the probabilistic protection against Byzantine behaviour, and 2) the probabilistic verifiability that no tampering has occurred.
In this work, we study various tamper-resistant broadcast protocols for different environmental models (public/permissioned, bounded/unbounded, Byzantine fault tolerant (BFT)/non-BFT, native/non-native) as well as different properties, such as ordering guarantees (FIFO-order, causal-order, total-order) and delivery guarantees (validity, agreement, uniform). This way, we can match the protocol to the required environment model and consistency model of the replicated state machine.
We implemented the tamper-resistant broadcast abstraction as a proof of concept. The results show that the implemented tamper-resistant broadcast protocols can compete on throughput and latency with other state-of-the-art broadcast technologies. Use cases, such as a tamper-resistant file system, supply chain tracking, and a timestamp server highlight the expressiveness of the abstraction.
In conclusion, the tamper-resistant broadcast protocols provide a powerful interface, with clear semantics and tunable settings, enabling the design of tamper-resistant applications.
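As a minimal sketch (illustrative names, not the thesis's actual API), the broadcast/deliver/verify trio could be expressed as the following interface; the Merkle-proof field is an assumption about how messages might be anchored in blocks.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    payload: bytes
    proof: bytes  # hypothetical, e.g. a Merkle path anchoring the message in a block

class TamperResistantBroadcast(ABC):
    """Sketch of a tamper-resistant broadcast abstraction."""

    @abstractmethod
    def broadcast(self, payload: bytes) -> None:
        """Disseminate a message to all participants."""

    @abstractmethod
    def deliver(self) -> Message:
        """Return the next message, ordered according to the chosen
        variant (FIFO-order, causal-order, or total-order)."""

    @abstractmethod
    def verify(self, msg: Message) -> bool:
        """Probabilistically check that no tampering has occurred."""
```

A replicated state machine then simply applies delivered messages as commands in delivery order.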
UG is a generic framework to parallelize branch-and-bound based solvers (e.g., MIP, MINLP, ExactIP) in a distributed or shared memory computing environment. It exploits the powerful performance of state-of-the-art "base solvers", such as SCIP, CPLEX, etc. without the need for base solver parallelization.
The UG framework, ParaSCIP (ug[SCIP,MPI]), and FiberSCIP (ug[SCIP,Pthreads]) are available as a beta version. For MIP solving, ParaSCIP and FiberSCIP are well debugged and should be stable. For MINLP solving, they are relatively stable, but not as thoroughly debugged. This release version can handle branch-and-cut approaches in which subproblems are defined not only by variable bounds but also by constraints for ug[SCIP,*] (ParaSCIP and FiberSCIP). Therefore, problem classes other than MIP or MINLP can be handled, but they have not been tested yet.
v0.9.1: Update orbitope cip files.
In state-of-the-art mixed-integer programming solvers, a large array of reduction techniques are applied to simplify the problem and strengthen the model formulation before starting the actual branch-and-cut phase. Despite their mathematical simplicity, these methods can have significant impact on the solvability of a given problem. However, a crucial property for employing presolve techniques successfully is their speed. Hence, most methods inspect constraints or variables individually in order to guarantee linear complexity. In this paper, we present new hashing-based pairing mechanisms that help to overcome known performance limitations of more powerful presolve techniques that consider pairs of rows or columns. Additionally, we develop an enhancement to one of these presolve techniques by exploiting the presence of set-packing structures on binary variables in order to strengthen the resulting reductions without increasing runtime. We analyze the impact of these methods on the MIPLIB 2017 benchmark set based on an implementation in the MIP solver SCIP.
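The pairing idea can be illustrated as follows: hashing a cheap signature of each row (here: its support) groups rows into buckets, so that the expensive pairwise checks run only within buckets instead of over all O(m^2) pairs. This sketch is illustrative and not SCIP's implementation.

```python
from collections import defaultdict

def candidate_row_pairs(rows):
    """rows: list of dicts mapping column index -> coefficient.
    Yields index pairs of rows with identical support (hash buckets)."""
    buckets = defaultdict(list)
    for i, row in enumerate(rows):
        buckets[hash(frozenset(row))].append(i)    # hash of the support set
    for bucket in buckets.values():
        for a in range(len(bucket)):
            for b in range(a + 1, len(bucket)):
                yield bucket[a], bucket[b]         # candidates for, e.g., parallel rows

rows = [{0: 1.0, 2: -3.0}, {0: 2.0, 2: -6.0}, {1: 1.0}]
print(list(candidate_row_pairs(rows)))             # [(0, 1)]: same support {0, 2}
```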
In graphical representations of public transportation networks, there is often some degree of uncertainty in the arc values, due to delays or transfer times. This uncertainty can be expressed as a parameterized weight on the transfer arcs. Classical shortest path algorithms often have difficulty handling parameterized arc weights, and a tropical geometry approach has been shown to be a possible solution. The connection between the classical shortest path problem and tropical geometry is well established: tropically multiplying the n × n adjacency matrix of a graph with itself n − 1 times results in the so-called Kleene star, a matrix-form solution to the all-pairs shortest path problem. Michael Joswig and Benjamin Schröter showed in their paper The Tropical Geometry of Shortest Paths that the same method can be used to find the solution to the all-pairs shortest path problem even in the case of variable arc weights, and they proposed an algorithm to solve the single-target shortest path problem in such a case. The solution takes the form of a polyhedral subdivision of the parameter space. As the number of variable arc weights grows, the time needed to execute an implementation of this algorithm grows exponentially, and as the size of a public transportation network grows, the number of variable arc weights grows exponentially as well. However, it has been observed that in public transportation networks there are usually only a few possible shortest routes; geometrically, this means that there should be few polyhedra in the polyhedral subdivision. This algorithm is applied to a real-world public transportation network, and an analysis of the polyhedral subdivision is made. Then a geometrical approach is used to analyze the impact of limiting the number of transfers, and thereby the number of parameterized arcs used, as an estimation of the solution to the all-pairs shortest path problem.
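For fixed (non-parameterized) arc weights, the tropical computation described above fits in a few lines: with min-plus matrix multiplication, repeatedly squaring the adjacency matrix (with a zero diagonal) yields the Kleene star, i.e. the all-pairs shortest path matrix. The polyhedral machinery for variable weights is beyond this toy sketch.

```python
import numpy as np

INF = float("inf")

def tropical_mult(A, B):
    """Min-plus product: C[i, j] = min_k (A[i, k] + B[k, j])."""
    return np.min(A[:, :, None] + B[None, :, :], axis=1)

def kleene_star(A):
    n = A.shape[0]
    D = A.copy()
    np.fill_diagonal(D, 0.0)                 # zero-length path from each node to itself
    for _ in range(max(1, int(np.ceil(np.log2(max(n - 1, 2)))))):
        D = tropical_mult(D, D)              # squaring doubles the path length covered
    return D

A = np.array([[0.0, 2.0, INF],
              [INF, 0.0, 3.0],
              [1.0, INF, 0.0]])
print(kleene_star(A))                        # all-pairs shortest path distances
```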
Transient capture of electrons in magnetic fields, or: comets in the restricted three-body problem
(2020)
The motion of celestial bodies in astronomy is closely related to the orbits of electrons encircling an atomic nucleus. Bohr and Sommerfeld presented a quantization scheme of the classical orbits to analyze the eigenstates of the hydrogen atom. Here we discuss another close connection of classical trajectories and quantum mechanical states: the transient dynamics of objects around a nucleus. In this setup a comet (or an electron) is trapped for a while in the vicinity of a parent object (Jupiter or an atomic nucleus), but eventually escapes after many revolutions around the center of attraction.
We present a transductive learning approach for morphometric osteophyte grading based on geometric deep learning. We formulate the grading task as a semi-supervised node classification problem on a graph embedded in shape space. To account for the high dimensionality and non-Euclidean structure of shape space, we employ a combination of an intrinsic dimension reduction and a graph convolutional neural network. We demonstrate the performance of the derived classifier in comparison to an alternative extrinsic approach.
We present time-space trade-offs for computing the Euclidean minimum spanning tree of a set S of n point-sites in the plane. More precisely, we assume that S resides in a random-access memory that can only be read. The edges of the Euclidean minimum spanning tree EMST(S) have to be reported sequentially, and they cannot be accessed or modified afterwards. There is a parameter s in {1, ..., n} so that the algorithm may use O(s) cells of read-write memory (called the workspace) for its computations. Our goal is to find an algorithm that has the best possible running time for any given s between 1 and n.
We show how to compute EMST(S) in O(((n^3)/(s^2)) log s) time with O(s) cells of workspace, giving a smooth trade-off between the two best-known bounds O(n^3) for s = 1 and O(n log n) for s = n. For this, we run Kruskal's algorithm on the "relative neighborhood graph" (RNG) of S. It is a classic fact that the minimum spanning tree of RNG(S) is exactly EMST(S). To implement Kruskal's algorithm with O(s) cells of workspace, we define s-nets, a compact representation of planar graphs. This allows us to efficiently maintain and update the components of the current minimum spanning forest as the edges are being inserted.
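For context, the classic (unrestricted-workspace) version of the approach is simply Kruskal's algorithm run on RNG(S); the paper's contribution is executing this with only O(s) cells via s-nets, which the following sketch does not attempt.

```python
def kruskal(n, edges):
    """Minimum spanning tree via Kruskal; edges are (weight, u, v) triples.
    Run on the relative neighborhood graph of S, this returns EMST(S)."""
    parent = list(range(n))

    def find(x):                       # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree = []
    for w, u, v in sorted(edges):      # consider edges by increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                   # different components: the edge joins them
            parent[ru] = rv
            tree.append((u, v, w))
    return tree
```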
We consider the theoretical model of Bergmann and Lebowitz for open systems out of equilibrium and translate its principles in the adaptive resolution simulation molecular dynamics technique. We simulate Lennard-Jones fluids with open boundaries in a thermal gradient and find excellent agreement of the stationary responses with the results obtained from the simulation of a larger locally forced closed system. The encouraging results pave the way for a computational treatment of open systems far from equilibrium framed in a well-established theoretical model that avoids possible numerical artifacts and physical misinterpretations.
The SCIP Optimization Suite provides a collection of software packages for mathematical optimization centered around the constraint integer programming framework SCIP. This paper discusses enhancements and extensions contained in version 7.0 of the SCIP Optimization Suite. The new version features the parallel presolving library PaPILO as a new addition to the suite. PaPILO 1.0 simplifies mixed-integer linear optimization problems and can be used stand-alone or integrated into SCIP via a presolver plugin. SCIP 7.0 provides additional support for decomposition algorithms. Besides improvements in the Benders’ decomposition solver of SCIP, user-defined decomposition structures can be read, which are used by the automated Benders’ decomposition solver and two primal heuristics. Additionally, SCIP 7.0 comes with a tree size estimation that is used to predict the completion of the overall solving process and potentially trigger restarts. Moreover, substantial performance improvements of the MIP core were achieved by new developments in presolving, primal heuristics, branching rules, conflict analysis, and symmetry handling. Last, not least, the report presents updates to other components and extensions of the SCIP Optimization Suite, in particular, the LP solver SoPlex and the mixed-integer semidefinite programming solver SCIP-SDP.
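As a small usage illustration (the model itself is made up), a toy MIP can be solved through PySCIPOpt, the Python interface distributed alongside the suite:

```python
from pyscipopt import Model

m = Model("toy")
x = m.addVar(vtype="I", name="x")      # integer variable, nonnegative by default
y = m.addVar(vtype="C", name="y")      # continuous variable
m.addCons(2 * x + y <= 10)
m.addCons(x + 3 * y <= 12)
m.setObjective(3 * x + 2 * y, "maximize")
m.optimize()
print(m.getObjVal(), m.getVal(x), m.getVal(y))
```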
The Periodic Event Scheduling Problem is a well-studied NP-hard problem with applications in public transportation to find good periodic timetables. Among the most powerful heuristics to solve the periodic timetabling problem is the modulo network simplex method. In this paper, we consider the more difficult version with integrated passenger routing and propose a refined integrated variant to solve this problem on real-world-based instances.
The aim of this thesis is to deepen our understanding of how IDA* heuristics influence the number of nodes expanded during search. To this end, we develop Korf's formula for the number of expanded nodes into a heuristic quality η, which expresses the quality of a heuristic function as a constant factor on the number of expanded nodes, independent of a particular problem instance. We proceed to show how to compute η for some common kinds of heuristics and how to estimate η by means of a random sample for arbitrary heuristics. Using the value of η for some concrete examples, we then inspect for which parts of the search space the values of h(v) are particularly critical to the performance of the heuristic, allowing us to build better heuristics for future problems. This report originally appeared as a master's thesis at Humboldt University of Berlin.
In this paper, we introduce the Maximum Diversity Assortment Selection Problem (MADASS), which is a generalization of the 2-dimensional Cutting Stock Problem (2CSP). Given a set of rectangles and a rectangular container, the goal of 2CSP is to determine a subset of rectangles that can be placed in the container without overlapping, i.e., a feasible assortment, such that a maximum area is covered. In MADASS, we need to determine a set of feasible assortments, each of them covering a certain minimum threshold of the container, such that the diversity among them is maximized. Here, diversity is defined as the minimum or average normalized Hamming distance over all assortment pairs. The MADASS problem was used in the 11th AIMMS-MOPTA Competition in 2019; the methods described in this article, together with the computational results, won the contest.
In the following, we give a definition of the problem, introduce a mathematical model and solution approaches, determine upper bounds on the diversity, and conclude with computational experiments conducted on test instances derived from the 2CSP literature.
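The diversity measure itself is straightforward to state in code; the sketch below shows both aggregation variants on a made-up set of 0/1 assortment vectors.

```python
from itertools import combinations

def normalized_hamming(a, b):
    """Fraction of positions in which two 0/1 assortment vectors differ."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

def diversity(assortments, aggregate=min):
    """Minimum (default) or average pairwise normalized Hamming distance."""
    dists = [normalized_hamming(a, b) for a, b in combinations(assortments, 2)]
    return aggregate(dists)

S = [(1, 0, 1, 1), (0, 1, 1, 0), (1, 1, 0, 0)]
print(diversity(S))                                        # minimum: 0.5
print(diversity(S, aggregate=lambda d: sum(d) / len(d)))   # average: ~0.667
```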
The coma of comet 67P/Churyumov-Gerasimenko has been probed by the Rosetta spacecraft and shows a variety of different molecules. The ROSINA COmet Pressure Sensor and the Double Focusing Mass Spectrometer provide in-situ densities for many volatile compounds including the 14 gas species H2O, CO2, CO, H2S, O2, C2H6, CH3OH, H2CO, CH4, NH3, HCN, C2H5OH, OCS, and CS2. We fit the observed densities during the entire comet mission between August 2014 and September 2016 to an inverse coma model. We retrieve surface emissions on a cometary shape with 3996 triangular elements for 50 separated time intervals. For each gas we derive systematic error bounds and report the temporal evolution of the production, peak production, and the time-integrated total production. We discuss the production for the two lobes of the nucleus and for the northern and southern hemispheres. Moreover we provide a comparison of the gas production with the seasonal illumination.
It is a challenging task to fairly compare local solvers and heuristics against each other and against global solvers. How does one weigh a faster termination time against a better quality of the found solution? In this paper, we introduce the confined primal integral, a new performance measure that rewards a balance of speed and solution quality. It emphasizes the early part of the solution process by using an exponential decay. Thereby, it avoids that the order of solvers can be inverted by choosing an arbitrarily large time limit. We provide a closed analytic formula to compute the confined primal integral a posteriori and an incremental update formula to compute it during the run of an algorithm. For the latter, we show that we can drop one of the main assumptions of the primal integral, namely the knowledge of a fixed reference solution to compare against. Furthermore, we prove that the confined primal integral is a transitive measure when comparing local solves with different final solution values. Finally, we present a computational experiment where we compare a local MINLP solver that uses certain classes of cutting planes against a solver that does not. Both versions show very different tendencies w.r.t. average running time and solution quality, and we use the confined primal integral to argue which of the two is the preferred setting.
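The paper's closed formula is not reproduced here; the sketch below only illustrates the general shape of such a measure, assuming a piecewise-constant primal gap weighted by an exponential decay and, for simplicity, a fixed reference value (one of the assumptions the paper shows can be dropped).

```python
import math

def confined_primal_integral(events, T, alpha, ref):
    """events: sorted (time, incumbent value) pairs; T: time limit;
    alpha: decay rate; ref: reference value for the gap. Each constant-gap
    segment integrates in closed form:
    integral of g * exp(-alpha*t) over [t0, t1]
        = g * (exp(-alpha*t0) - exp(-alpha*t1)) / alpha."""
    total, gap, t0 = 0.0, 1.0, 0.0   # gap normalized to [0, 1]; 1 before any incumbent
    for t1, value in events + [(T, None)]:
        total += gap * (math.exp(-alpha * t0) - math.exp(-alpha * t1)) / alpha
        t0 = t1
        if value is not None:
            gap = abs(value - ref) / max(abs(value), abs(ref), 1e-9)
    return total

# two solvers reaching the same final value 1.0, but at different speeds
print(confined_primal_integral([(1.0, 5.0), (2.0, 1.0)], T=10.0, alpha=0.5, ref=1.0))
print(confined_primal_integral([(8.0, 1.0)], T=10.0, alpha=0.5, ref=1.0))  # larger
```

The exponential weight is what keeps a larger time limit from inverting the ranking: contributions far beyond the early phase are damped away.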
Determining optimal driving functions for loudspeakers in arbitrary arrangements is an ill-posed optimization problem, since the number of sources is always considerably smaller than the number of receivers. Typically, frequency-domain solvers are employed.
In a previous contribution, the authors presented an adjoint-based approach for determining optimal driving functions in the time domain. The method makes it possible to include inhomogeneous wind profiles and temperature stratifications, which are typically not taken into account by solutions based on the wave equation. Moreover, complex geometries and boundary conditions can be considered.
So far, this method was limited to monopole sources. Here, the authors present an extension of the approach towards an adjoint-based monopole synthesis, which makes it possible to also consider sources with complex directivity patterns. The procedure is validated for typical source signals and a representative loudspeaker model with complex directivity.
The growing importance of mathematical software in everyday life—in applications such as internet communication, traffic, and artificial intelligence—necessitates advances in software documentation services to raise awareness of existing packages and their usage. Such information helps potential software developers and users make informed choices about packages that could advance their work in modeling, simulation, and analysis. At the same time, software presents novel challenges to information services that require the development of new methods and means of processing.
swMATH provides users with an overview of a broad range of mathematical software and extends documentation services for publications related to such software. It acts as a counterpart to the established abstracting and reviewing services for mathematical publications and has nearly 30,000 entries, making it one of the most comprehensive documentation services in mathematics.
Automatic recognition of surgical phases is an important component for developing an intra-operative context-aware system. Prior work in this area focuses on recognizing short-term tool usage patterns within surgical phases. However, the difference between intra- and inter-phase tool usage patterns has not been investigated for automatic phase recognition. We developed a Recurrent Neural Network (RNN), in particular a state-preserving Long Short-Term Memory (LSTM) architecture, to utilize the long-term evolution of tool usage within complete surgical procedures. For fully automatic tool presence detection from surgical video frames, a Convolutional Neural Network (CNN) based architecture, namely ZIBNet, is employed. Our proposed approach outperformed EndoNet by 8.1% in overall precision for phase detection tasks and by 12.5% in mean AP for tool recognition tasks.
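A minimal sketch of the state-preserving idea (layer sizes, tool and phase counts are illustrative, not the authors' architecture): the LSTM's hidden state is returned and fed back in, so it carries across consecutive chunks of the same procedure instead of being reset.

```python
import torch
import torch.nn as nn

class PhaseLSTM(nn.Module):
    """LSTM over per-frame tool-presence vectors with an externally kept state."""
    def __init__(self, n_tools=7, hidden=128, n_phases=8):
        super().__init__()
        self.lstm = nn.LSTM(n_tools, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_phases)

    def forward(self, x, state=None):
        out, state = self.lstm(x, state)      # state=None only at procedure start
        return self.head(out), state          # hand the state back to the caller

model = PhaseLSTM()
state = None
for chunk in torch.rand(5, 1, 100, 7):        # 5 chunks of 100 frames, 7 tool flags
    logits, state = model(chunk, state)       # per-frame phase logits
    state = tuple(s.detach() for s in state)  # keep the state, truncate backprop
```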
A new ion mobility (IM) spectrometer, enabling mobility measurements in the pressure range between 5 and 500 mbar and in the reduced field strength range E/N of 5–90 Td, was developed and characterized. Reduced mobility (K0) values were studied under low E/N (constant value) as well as high E/N (deviation from low field K0) for a series of molecular ions in nitrogen. Infrared matrix-assisted laser desorption ionization (IR-MALDI) was used in two configurations: a source working at atmospheric pressure (AP) and, for the first time, an IR-MALDI source working with a liquid (aqueous) matrix at sub-ambient/reduced pressure (RP). The influence of RP on IR-MALDI was examined and new insights into the dispersion process were gained. This enabled the optimization of the IM spectrometer for best analytical performance. While ion desolvation is less efficient at RP, the transport of ions is more efficient, leading to intensity enhancement and an increased number of oligomer ions. When deciding between AP and RP IR-MALDI, a trade-off between intensity and resolving power has to be considered. Here, the low field mobility of peptide ions was first measured and compared with reference values from ESI-IM spectrometry (at AP) as well as collision cross sections obtained from molecular dynamics simulations. The second application was the determination of the reduced mobility of various substituted ammonium ions as a function of E/N in nitrogen. The mobility is constant up to a threshold at high E/N. Beyond this threshold, mobility increases were observed. This behavior can be explained by the loss of hydrated water molecules.
Stand-alone quantum dot-based single-photon source operating at telecommunication wavelengths
(2020)
Mixed-integer programming (MIP) is arguably among the hardest classes of optimization problems. This paper describes how we solved 21 previously unsolved MIP instances from the MIPLIB benchmark sets. To achieve these results we used an enhanced version of ParaSCIP, setting a new record for the largest-scale MIP computation: up to 80,000 cores in parallel on the Titan supercomputer. In this paper, we describe the basic parallelization mechanism of ParaSCIP, improvements of the dynamic load balancing, and novel techniques to exploit the power of parallelization for MIP solving. We give a detailed overview of computing times and statistics for solving open MIPLIB instances.
Cycle inequalities play an important role in the polyhedral study of the periodic timetabling problem in public transport. We give the first pseudo-polynomial time separation algorithm for cycle inequalities, and we contribute a rigorous proof for the pseudo-polynomial time separability of the change-cycle inequalities. Moreover, we provide several NP-completeness results, indicating that pseudo-polynomial time is best possible. The efficiency of these cutting planes is demonstrated on real-world instances of the periodic timetabling problem.
We present a software-assisted workflow for the alignment and matching of filamentous structures across a 3D stack of serial images. This is achieved by combining automatic methods, visual validation, and interactive correction. After an initial alignment, the user can continuously improve the result by interactively correcting landmarks or matches of filaments. Supported by a visual quality assessment of regions that have been already inspected, this allows a trade-off between quality and manual labor. The software tool was developed to investigate cell division by quantitative 3D analysis of microtubules (MTs) in both mitotic and meiotic spindles. For this, each spindle is cut into a series of semi-thick physical sections, of which electron tomograms are acquired. The serial tomograms are then stitched and non-rigidly aligned to allow tracing and connecting of MTs across tomogram boundaries. In practice, automatic stitching alone provides only an incomplete solution, because large physical distortions and a low signal-to-noise ratio often cause experimental difficulties. To derive 3D models of spindles despite the problems related to sample preparation and subsequent data collection, semi-automatic validation and correction is required to remove stitching mistakes. However, due to the large number of MTs in spindles (up to 30k) and their resulting dense spatial arrangement, a naive inspection of each MT is too time consuming. Furthermore, an interactive visualization of the full image stack is hampered by the size of the data (up to 100 GB). Here, we present a specialized, interactive, semi-automatic solution that considers all requirements for large-scale stitching of filamentous structures in serial-section image stacks. The key to our solution is a careful design of the visualization and interaction tools for each processing step to guarantee real-time response, and an optimized workflow that efficiently guides the user through datasets.
Constrained second-order convex optimization algorithms are the method of choice when a high-accuracy solution to a problem is needed, due to their local quadratic convergence. These algorithms require the solution of a constrained quadratic subproblem at every iteration. We present the Second-Order Conditional Gradient Sliding (SOCGS) algorithm, which uses a projection-free algorithm to solve the constrained quadratic subproblems inexactly. When the feasible region is a polytope, the algorithm converges quadratically in primal gap after a finite number of linearly convergent iterations. Once in the quadratic regime, the SOCGS algorithm requires O(log log(1/ε)) first-order and Hessian oracle calls and O(log(1/ε) log log(1/ε)) linear minimization oracle calls to achieve an ε-optimal solution. This algorithm is useful when the feasible region can only be accessed efficiently through a linear optimization oracle, and computing first-order information of the function, although possible, is costly.
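For context, the access model assumed here can be shown with a vanilla conditional-gradient (Frank-Wolfe) loop, in which the polytope is touched only through a linear minimization oracle, here an LP solve. SOCGS itself additionally uses Hessian information and inexact quadratic subproblems, which this toy omits.

```python
import numpy as np
from scipy.optimize import linprog

def frank_wolfe(grad, x0, A_ub, b_ub, steps=200):
    """Minimize a smooth convex f over {x : A_ub x <= b_ub} given grad(f)."""
    x = x0
    for t in range(steps):
        g = grad(x)
        # linear minimization oracle: v = argmin <g, v> over the polytope
        v = linprog(c=g, A_ub=A_ub, b_ub=b_ub,
                    bounds=[(None, None)] * len(x)).x
        x = x + 2.0 / (t + 2.0) * (v - x)    # classic step size, stays feasible
    return x

# minimize ||x - (0.8, 0.6)||^2 over {x >= 0, x1 + x2 <= 1}
target = np.array([0.8, 0.6])
A_ub = np.vstack([-np.eye(2), np.ones((1, 2))])   # -x <= 0 and sum(x) <= 1
b_ub = np.array([0.0, 0.0, 1.0])
print(frank_wolfe(lambda x: 2 * (x - target), np.zeros(2), A_ub, b_ub))
# converges towards the projection (0.6, 0.4)
```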
This thesis presents a method for interpolating data using a neural network. The data is sparse and perturbed and is used as training data for a small neural network. For severely perturbed data, the network does not manage to find a smooth interpolation. But as the data resembles the solution to the one-dimensional and time-independent heat equation, the weak form of this PDE and subsequently its functional can be written down. If the functional is minimized, a solution to the weak form of the heat equation is found. The functional is now added to the traditional loss function of a neural network, the mean squared error between the network prediction and the given data, in order to smooth out fluctuations and interpolate between distanced grid points. This way, the network minimizes both the mean squared error and the functional, resulting in a smoother curve that can be used to predict u(x) for any grid point x.
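A sketch of this loss construction, assuming the functional is the Dirichlet energy of the stationary 1-D heat equation; the weighting factor, network size, and data are made up.

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

x_data = torch.rand(20, 1)                         # sparse grid points
u_data = x_data + 0.05 * torch.randn_like(x_data)  # perturbed observations

x_col = torch.linspace(0, 1, 128).reshape(-1, 1).requires_grad_(True)

for step in range(2000):
    opt.zero_grad()
    mse = ((net(x_data) - u_data) ** 2).mean()     # traditional data-fit term
    u = net(x_col)
    du = torch.autograd.grad(u.sum(), x_col, create_graph=True)[0]
    functional = 0.5 * (du ** 2).mean()            # Dirichlet energy of u
    loss = mse + 0.1 * functional                  # PDE functional as regularizer
    loss.backward()
    opt.step()
```

Minimizing the functional drives the prediction towards a weak solution of the PDE, which is what smooths out fluctuations between the distanced grid points.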
Friction in liquids arises from conservative forces between molecules and atoms. Although the hydrodynamics at the nanoscale is subject of intense research and despite the enormous interest in the non-Markovian dynamics of single molecules and solutes, the onset of friction from the atomistic scale so far could not be demonstrated. Here, we fill this gap based on frequency-resolved friction data from high-precision simulations of three prototypical liquids, including water. Combining with theory, we show that friction in liquids emerges abruptly at a characteristic frequency, beyond which viscous liquids appear as non-dissipative, elastic solids. Concomitantly, the molecules experience Brownian forces that display persistent correlations. A critical test of the generalised Stokes–Einstein relation, mapping the friction of single molecules to the visco-elastic response of the macroscopic sample, disproves the relation for Newtonian fluids, but substantiates it exemplarily for water and a moderately supercooled liquid. The employed approach is suitable to yield insights into vitrification mechanisms and the intriguing mechanical properties of soft materials.
Quantitative PA tomography of high resolution 3-D images: experimental validation in tissue phantoms
(2020)
Quantitative photoacoustic tomography aims to recover the spatial distribution of absolute chromophore concentrations and their ratios from deep-tissue, high-resolution images. In this study, a model-based inversion scheme based on a Monte-Carlo light transport model is experimentally validated on 3-D multispectral images of a tissue phantom acquired using an all-optical scanner with a planar detection geometry. A calibrated absorber allowed scaling of the measured data during the inversion, while an acoustic correction method was employed to compensate for the effects of limited-view detection. Chromophore- and fluence-dependent step sizes and Adam optimization were implemented to achieve rapid convergence. High-resolution 3-D maps of absolute concentrations and their ratios were recovered with high accuracy. Potential applications of this method include quantitative functional and molecular photoacoustic tomography of deep tissue in preclinical and clinical studies.
Though gait asymmetry is used as a metric of functional recovery in clinical rehabilitation, there is no consensus on an ideal method for its evaluation. Various methods have been proposed but are limited in scope, as they can often use only positive signals or discrete values extracted from time-scale data as input. By defining five symmetry axioms, a framework for benchmarking existing methods was established and a new method was described here for the first time: the weighted universal symmetry index (wUSI), which overcomes limitations of other methods. Both existing methods and the wUSI were mathematically compared to each other and in respect to their ability to fulfill the proposed symmetry axioms. Eligible methods that fulfilled these axioms were then applied using both discrete and continuous approaches to ground reaction force (GRF) data collected from healthy gait, both with and without artificially induced asymmetry using a single instrumented elbow crutch. The wUSI with a continuous approach was the only symmetry method capable of determining GRF asymmetries in different walking conditions in all three planes of motion. When used with a continuous approach, the wUSI method was able to detect asymmetries while avoiding artificial inflation, a common problem reported in other methods. In conclusion, the wUSI is proposed as a universal method to quantify three-dimensional GRF asymmetries, which may also be expanded to other biomechanical signals.
The complexity in large-scale optimization can lie in both handling the objective function and handling the constraint set. In this respect, stochastic Frank-Wolfe algorithms occupy a unique position as they alleviate both computational burdens, by querying only approximate first-order information from the objective and by maintaining feasibility of the iterates without using projections. In this paper, we improve the quality of their first-order information by blending in adaptive gradients. We derive convergence rates and demonstrate the computational advantage of our method over the state-of-the-art stochastic Frank-Wolfe algorithms on both convex and nonconvex objectives. The experiments further show that our method can improve the performance of adaptive gradient algorithms for constrained optimization.
Demand Side Management (DSM) is usually understood as shifting energy consumption from peak hours to off-peak times. DSM does not always reduce total energy consumption, but it helps to match energy demand and supply. For example, it balances variable generation from renewables (such as solar and wind) when energy demand differs from renewable generation.
Price-and-verify: a new algorithm for recursive circle packing using Dantzig–Wolfe decomposition
(2020)
Packing rings into a minimum number of rectangles is an optimization problem which appears naturally in the logistics operations of the tube industry. It encompasses two major difficulties, namely the positioning of rings in rectangles and the recursive packing of rings into other rings. This problem is known as the Recursive Circle Packing Problem (RCPP). We present the first dedicated method for solving RCPP that provides strong dual bounds based on an exact Dantzig–Wolfe reformulation of a nonconvex mixed-integer nonlinear programming formulation. The key idea of this reformulation is to break symmetry on each recursion level by enumerating one-level packings, i.e., packings of circles into other circles, and by dynamically generating packings of circles into rectangles. We use column generation techniques to design a “price-and-verify” algorithm that solves this reformulation to global optimality. Extensive computational experiments on a large test set show that our method not only computes tight dual bounds, but often produces primal solutions better than those computed by heuristics from the literature.
Recently, Intel released the oneAPI programming environment. With Data Parallel C++ (DPC++), oneAPI enables codes to target multiple hardware architectures like multi-core CPUs, GPUs, and even FPGAs or other hardware using a single source. For legacy codes that were written for Nvidia GPUs, a compatibility tool is provided which facilitates the transition to the SYCL-based DPC++ programming language. This paper presents early experiences when using both the compatibility tool and oneAPI, as well as the employed extensions to the SYCL programming standard, for the tsunami simulation code easyWave. A performance study compares the original code running on Xeon processors using OpenMP as well as CUDA with the performance of the DPC++ counterpart on multi-core CPUs as well as integrated GPUs.
We present an extension of Taylor's theorem for the piecewise polynomial expansion of non-smooth evaluation procedures involving absolute value operations. Evaluation procedures are computer programs of mathematical functions in closed-form expression that allow a different treatment of smooth operations and calls to the absolute value function. The well-known classical theorem of Taylor defines polynomial approximations of sufficiently smooth functions and is widely used for the derivation and analysis of numerical integrators for systems of ordinary differential or differential-algebraic equations, for the construction of solvers for continuous non-linear optimization of finite-dimensional objective functions, and for root solving of non-linear systems of equations. The long-term goal is the stabilization and acceleration of already known methods and the derivation of new methods by incorporating piecewise polynomial Taylor expansions. The herein provided proof of the higher-order approximation quality of the new generalized expansions is constructive and allows efficiently designed algorithms for the execution and computation of the piecewise polynomial expansions. As a demonstration towards the ultimate goal we derive a prototype of a $k$-step method on the basis of polynomial interpolation and the proposed generalized expansions.
PauSat
(2020)
Public transportation networks are typically operated with a periodic timetable. The Periodic Event Scheduling Problem (PESP) is the standard mathematical modelling tool for periodic timetabling. Since PESP can be solved in linear time on trees, it is a natural question to ask whether there are polynomial-time algorithms for input networks of bounded treewidth. We show that deciding the feasibility of a PESP instance is NP-hard even when the treewidth is 2, the branchwidth is 2, or the carvingwidth is 3. Analogous results hold for the optimization of reduced PESP instances, where the feasibility problem is trivial. To complete the picture, we present two pseudo-polynomial-time dynamic programming algorithms solving PESP on input networks with bounded tree- or branchwidth. We further analyze the parameterized complexity of PESP with bounded cyclomatic number, diameter, or vertex cover number. For event-activity networks with a special -- but standard -- structure, we give explicit and sharp bounds on the branchwidth in terms of the maximum degree and the carvingwidth of an underlying line network. Finally, we investigate several parameters on the smallest instance of the benchmarking library PESPlib.
This is the documentation on current results of a research project jointly conducted by Stiftung Deutsche Kinemathek (SDK) and Zuse Institute Berlin (ZIB). In this project, we are working on a practical yet sustainable archiving solution for audiovisual material.
In the course of the project two major obstacles were identified: 1) Metadata is collected according to standards established in the community but lacking a prescribed serialisation format. 2) Storage size of audiovisual material and time scales of production processes make it often impractical to defer submission for archival storage until all components have arrived and can be processed in one go.
It is well understood that Bayesian decision theory and average case analysis are essentially identical. However, if one is interested in performing uncertainty quantification for a numerical task, it can be argued that the decision-theoretic framework is neither appropriate nor sufficient. To this end, we consider an alternative optimality criterion from Bayesian experimental design and study its implied optimal information in the numerical context. This information is demonstrated to differ, in general, from the information that would be used in an average-case-optimal numerical method. The explicit connection to Bayesian experimental design suggests several distinct regimes in which optimal probabilistic numerical methods can be developed.
One of the fundamental steps in the optimization of public transport is line planning. It involves determining lines and assigning frequencies of service such that costs are minimized while passenger comfort is maximized and travel demands are satisfied. We formulate the problem as a mixed-integer linear program that considers all circuit-like lines in a graph and allows free passenger routing. Traveler and operator costs are included in a linear scalarization in the objective. We apply this program to the Parametric City, a graph model introduced by Fielbaum, Jara-Díaz and Gschwender that flexibly represents different cities. In his dissertation, Fielbaum solved the line planning problem for various parameter choices in the Parametric City. In a first step, we therefore review his results and make comparative computations. Unlike Fielbaum, we arrive at the conclusion that the optimal line plan for this model indeed depends on the demand. Consequently, we analyze the line planning problem in depth: we find equivalent but easier to compute formulations and provide a lower bound by LP relaxation, which we show to be equivalent to a multi-commodity flow problem. Further, we examine what impact symmetry has on the solutions. Supported both by computational results and by theoretical analysis, we reach the conclusion that symmetric line plans are optimal or near-optimal in the Parametric City. Restricting the model to symmetric line plans allows for a $\kappa$-factor approximation algorithm for the line planning problem in the Parametric City.
Recently, Kronqvist et al. (J Global Optim 64(2):249–272, 2016) rediscovered the supporting hyperplane algorithm of Veinott (Oper Res 15(1):147–152, 1967) and demonstrated its computational benefits for solving convex mixed integer nonlinear programs. In this paper we derive the algorithm from a geometric point of view. This enables us to show that the supporting hyperplane algorithm is equivalent to Kelley’s cutting plane algorithm (J Soc Ind Appl Math 8(4):703–712, 1960) applied to a particular reformulation of the problem. As a result, we extend the applicability of the supporting hyperplane algorithm to convex problems represented by a class of general, not necessarily convex nor differentiable, functions.
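For reference, a bare-bones version of Kelley's cutting plane method on a box-constrained convex problem (test function and tolerances arbitrary); the supporting hyperplane algorithm differs in where the cuts are generated, namely on the boundary of the feasible set.

```python
import numpy as np
from scipy.optimize import linprog

def kelley(f, grad, lb, ub, tol=1e-6, max_iter=100):
    """min f(x) over a box: iteratively minimize the piecewise linear
    underestimator built from tangent cuts t >= f(xk) + <g_k, x - xk>."""
    x, cuts = (lb + ub) / 2, []
    for _ in range(max_iter):
        fx, g = f(x), grad(x)
        cuts.append((g, fx - g @ x))               # cut: t >= g @ x + offset
        A = np.array([np.append(gi, -1.0) for gi, _ in cuts])
        b = np.array([-ci for _, ci in cuts])      # rows: g @ x - t <= -offset
        c = np.append(np.zeros_like(x), 1.0)       # minimize t
        bounds = [(l, u) for l, u in zip(lb, ub)] + [(None, None)]
        res = linprog(c, A_ub=A, b_ub=b, bounds=bounds)
        x, t = res.x[:-1], res.x[-1]
        if fx - t < tol:                           # upper bound meets lower bound
            break
    return x

f = lambda x: (x[0] - 1) ** 2 + (x[1] + 0.5) ** 2
grad = lambda x: np.array([2 * (x[0] - 1), 2 * (x[1] + 0.5)])
print(kelley(f, grad, np.array([-2.0, -2.0]), np.array([2.0, 2.0])))
```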
Tom Streubel has observed that for functions in abs-normal form, generalized Taylor expansions of arbitrary order $\bar d-1$ can be generated by algorithmic piecewise differentiation. Abs-normal form means that the real or vector valued function is defined by an evaluation procedure that involves the absolute value function $|...|$ apart from arithmetic operations and $\bar d$ times continuously differentiable univariate intrinsic functions. The additive terms in Streubel's expansion are abs-polynomial, i.e. involve neither divisions nor intrinsics. When and where no absolute values occur, Moore's recurrences can be used to propagate univariate Taylor polynomials through the evaluation procedure with a computational effort of $\mathcal O({\bar d}^2)$, provided all univariate intrinsics are defined as solutions of linear ODEs. This regularity assumption holds for all standard intrinsics, but for irregular elementaries one has to resort to Faa di Bruno's formula, which has exponential complexity in $\bar d$. As already conjectured we show that the Moore recurrences can be adapted for regular intrinsics to the abs-normal case. Finally, we observe that where the intrinsics are real analytic the expansions can be extended to infinite series that converge absolutely on spherical domains.
The most important ingredient for solving mixed-integer nonlinear programs (MINLPs) to global epsilon-optimality with spatial branch and bound is a tight, computationally tractable relaxation. Due to both theoretical and practical considerations, relaxations of MINLPs are usually required to be convex. Nonetheless, current optimization solvers can often successfully handle a moderate presence of nonconvexities, which opens the door for the use of potentially tighter nonconvex relaxations. In this work, we exploit this fact and make use of a nonconvex relaxation obtained via aggregation of constraints: a surrogate relaxation. These relaxations were actively studied for linear integer programs in the 70s and 80s, but they have been scarcely considered since. We revisit these relaxations in an MINLP setting and show the computational benefits and challenges they can have. Additionally, we study a generalization of such relaxation that allows for multiple aggregations simultaneously and present the first algorithm that is capable of computing the best set of aggregations. We propose a multitude of computational enhancements for improving its practical performance and evaluate the algorithm’s ability to generate strong dual bounds through extensive computational experiments.
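In its simplest single-aggregation form, the surrogate relaxation replaces the constraints $g_i(x) \le 0$, $i = 1, \dots, m$, by one aggregated constraint:

```latex
\min_{x \in X} f(x) \quad \text{s.t.} \quad \sum_{i=1}^{m} u_i\, g_i(x) \le 0,
\qquad u \in \mathbb{R}^m_{\ge 0}.
```

Every point feasible for the original problem remains feasible after aggregation, so the optimal value of the surrogate is a valid dual bound; in contrast to a Lagrangian relaxation, the aggregated constraint stays in the feasible set instead of being moved into the objective.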
Intrinsic and parametric regression models are of high interest for the statistical analysis of manifold-valued data such as images and shapes. The standard linear ansatz has been generalized to geodesic regression on manifolds making it possible to analyze dependencies of random variables that spread along generalized straight lines. Nevertheless, in some scenarios, the evolution of the data cannot be modeled adequately by a geodesic.
We present a framework for nonlinear regression on manifolds by considering Riemannian splines, whose segments are Bézier curves, as trajectories.
Unlike variational formulations that require time-discretization, we take a constructive approach that provides efficient and exact evaluation by virtue of the generalized de Casteljau algorithm.
We validate our method in experiments on the reconstruction of periodic motion of the mitral valve as well as the analysis of femoral shape changes during the course of osteoarthritis, endorsing Bézier spline regression as an effective and flexible tool for manifold-valued regression.
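The generalized de Casteljau algorithm mentioned above reduces, in the Euclidean case, to repeated linear interpolation of control points; on a manifold each interpolation is replaced by a point along the connecting geodesic. A Euclidean sketch:

```python
import numpy as np

def de_casteljau(control_points, t):
    """Evaluate a Bézier curve at parameter t by repeated interpolation.
    On a manifold, the affine combination below becomes a geodesic point."""
    pts = np.asarray(control_points, dtype=float)
    while len(pts) > 1:
        pts = (1 - t) * pts[:-1] + t * pts[1:]   # one de Casteljau step
    return pts[0]

P = [(0, 0), (0, 1), (1, 1), (1, 0)]             # cubic segment in the plane
print(de_casteljau(P, 0.5))                      # [0.5, 0.75]
```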
The determination of non-gravitational forces based on precise astrometry is one of the main tools to establish the cometary character of interstellar and solar-system objects. The Rosetta mission to comet 67P/C-G provided the unique opportunity to benchmark Earth-bound estimates of non-gravitational forces with in-situ data. We determine the accuracy of the standard Marsden and Sekanina parametrization of non-gravitational forces with respect to the observed dynamics. Additionally we analyse the rotation-axis changes (orientation and period) of 67P/C-G. This comparison provides a reference case for future cometary missions and sublimation models for non-gravitational forces.
Network and Storage
(2020)
Natural gas is considered by many to be the most important energy source for the future. Strategic problems for energy commodities mainly relate to natural gas and deal with defining an “optimal” gas pipeline design, which includes a number of related subproblems such as locating compression stations and gas storage facilities, as well as compression station design and optimal operation.
Growing demand, distributed generation such as renewable energy sources (RES), and the increasing role of storage systems to mitigate the volatility of RES on the medium-voltage level push existing distribution grids to their limits. Therefore, necessary network expansion needs to be evaluated to guarantee a safe and reliable electricity supply in the future, taking these challenges into account. This problem is formulated as an optimal power flow (OPF) problem which combines network expansion, volatile generation, and storage systems, minimizing network expansion and generation costs. As storage systems introduce a temporal coupling into the system, a multiperiod OPF problem is needed and analysed in this thesis. To reduce complexity, the network expansion problem is represented in a continuous nonlinear programming formulation by using fundamental properties of electrical engineering. This formulation is validated successfully against a common mixed-integer programming approach on a 30 and a 57 bus network with respect to solution and computing time. As the OPF problem is, in general, a nonconvex, nonlinear problem and thus hard to solve, convex relaxations of the power flow equations have gained increasing interest. Sufficient conditions are presented which guarantee exactness of a second-order cone (SOC) relaxation of an operational OPF in radial networks. In this thesis, these conditions are extended to the network expansion planning problem. Additionally, nonconvexities introduced by the choice of network expansion variables are relaxed by using McCormick envelopes. These relaxations are then applied to the multiperiod OPF and compared to the original problem on a 30 and a 57 bus network. In particular, the computational time is decreased by up to two orders of magnitude by the SOC relaxation, while it provides either an exact solution or a sufficient lower bound on the original problem. Finally, a sensitivity study is performed on the weights of network expansion costs, showing a strong dependency of both the performed expansion and the solution time on the chosen weights.
Urban transportation systems are subject to a high level of variation and fluctuation in demand over the day. When this variation and fluctuation are observed in both time and space, it is crucial to develop line plans that are responsive to demand. A multi-period line planning approach that considers a changing demand during the planning horizon is proposed. If such systems are also subject to limitations of resources, a dynamic transfer of resources from one line to another throughout the planning horizon should also be considered. A mathematical modelling framework is developed to solve the line planning problem with a cost-oriented approach considering transfer of resources during a finite length planning horizon of multiple periods. We use real-life public transportation network data for our computational results. We analyze whether or not multi-period solutions outperform single period solutions in terms of feasibility and relevant costs. The importance of demand variation on multi-period solutions is investigated. We evaluate the impact of resource transfer constraints on the effectiveness of solutions. We also study the effect of period lengths along with the problem parameters that are significant for and sensitive to the optimality of solutions.
In order to better understand the relationship between the shape of the nasal cavity and breathing obstruction, and to find an objective classification of breathing obstruction, a population of 25 healthy nasal cavities and 27 cases with diagnosed nasal airway obstruction (NAO) was examined for correlations between morphological, clinical, and CFD parameters. For this purpose, a workflow was implemented in Tcl to perform automatic measurements of morphological parameters of nasal cavity surfaces in Amira, which outputs a table with all estimated values. Furthermore, the statistical analysis was designed in Python to find the most probable subset of parameters that are predictors of nasal cavity pathology; it consisted of a correlation analysis and the selection of the best possible subset of parameters that could be used as predictors of clinically diagnosed pathology of the nasal cavity by a logistic regression classifier. As a result, the 10 most promising parameters were identified: mean distance between the two isthmuses, left isthmus contour, area ratio between the two isthmuses, left isthmus height, height ratio between the two isthmuses, left isthmus width, right isthmus width, right isthmus hydraulic diameter, mean distance of septal curvature between the septum-enclosing walls of the nasal cavity, and volume-averaged velocity during expiration. As it turns out, most parameters refer to the isthmus region. This was to be expected, since this region plays an important role in the airflow system of the nasal cavity.
This master's thesis investigates the use and behaviour of a mixed finite element formulation for the simulation of garments.
The garment is modelled as an isotropic shell and is related to its mid-surface by energetic degeneration. Based on this, an energy functional is constructed, which contains the deformation and the mid-surface vector as degrees of freedom. It is then shown why this problem does not correspond to a saddle point problem, but to a non-convex energy minimization.
The energy minimization is implemented in the ZIB-internal FE framework Kaskade 7.4; a geometrically linear and several geometrically non-linear problems are examined, and for a selected non-linear example a comparison is made with an existing implementation based on Morley elements.
The further evaluations include the analysis of the quantitative and qualitative results, the used solution method, the behaviour of the system energy as well as the used CPU time.
Mixed-integer programming (MIP) has become widely known as a useful OR technique for solving real-world problems, not least because MIP solvers, the software packages that solve MIPs, have become capable of handling large-scale practical instances. However, the benchmark data sets and performance measurement methodologies that are indispensable for MIP solver development are far less widely known. Unless benchmark data sets are assembled with care, they suffer from many biases; removing those biases as far as possible and obtaining truly useful benchmark results requires considerable effort by several people. This article describes MIPLIB and Hans Mittelmann's benchmarks, which have played an important role as the backdrop of such MIP solver development. Throughout this article, "Hans Mittelmann's benchmarks" refers to the benchmarks listed on the BENCHMARKS FOR OPTIMIZATION SOFTWARE page (http://plato.asu.edu/bench.html).
In this article we introduce a Minimum Cycle Partition Problem with Length Requirements (CPLR). This generalization of the Travelling Salesman Problem (TSP) originates from routing Unmanned Aerial Vehicles (UAVs). Apart from nonnegative edge weights, CPLR has an individual critical weight value associated with each vertex. A cycle partition, i.e., a vertex disjoint cycle cover, is regarded as a feasible solution if the length of each cycle, which is the sum of the weights of its edges, is not greater than the critical weight of each of its vertices. The goal is to find a feasible partition, which minimizes the number of cycles. In this article, a heuristic algorithm is presented together with a Mixed Integer Programming (MIP) formulation of CPLR. We furthermore introduce a conflict graph, whose cliques yield valid constraints for the MIP model. Finally, we report on computational experiments conducted on TSPLIB-based test instances.
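The feasibility condition of CPLR is simple to check once a cycle partition is given; the sketch below verifies that each cycle's length does not exceed the critical weight of any of its vertices.

```python
def is_feasible(cycles, weight, critical):
    """cycles: list of vertex lists; weight: dict (u, v) -> edge weight;
    critical: dict vertex -> critical weight value."""
    for cycle in cycles:
        length = sum(weight[cycle[i], cycle[(i + 1) % len(cycle)]]
                     for i in range(len(cycle)))
        if any(length > critical[v] for v in cycle):
            return False
    return True

weight = {(0, 1): 2.0, (1, 2): 3.0, (2, 0): 4.0}
print(is_feasible([[0, 1, 2]], weight, {0: 10.0, 1: 9.5, 2: 9.0}))  # True
```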