Mixed integer programming (MIP) has become widely known as a useful OR method for solving real-world problems, in part because MIP solvers — the software that solves MIPs — have become capable of solving large-scale practical instances. However, the benchmark data sets and performance measurement methodology that are essential for MIP solver development are far less widely known. Unless a benchmark data set is constructed with care, it is subject to many biases. Removing those biases as far as possible and obtaining truly useful benchmark results requires a great deal of effort by several people. This article describes MIPLIB and Hans Mittelmann's benchmarks, which have played an important role as the background of such MIP solver development. Throughout this article, Hans Mittelmann's benchmarks refers to the benchmarks listed on the BENCHMARKS FOR OPTIMIZATION SOFTWARE page (http://plato.asu.edu/bench.html).
The temporally and spatially resolved tracking of lithium intercalation and electrode degradation processes is crucial for detecting and understanding performance losses during the operation of lithium batteries. Here, high-throughput X-ray computed tomography has enabled the identification of mechanical degradation processes in a commercial Li/MnO2 primary battery and the indirect tracking of lithium diffusion; furthermore, complementary neutron computed tomography has identified the direct lithium diffusion process and the electrode wetting by the electrolyte. Virtual electrode unrolling techniques provide a deeper view inside the electrode layers and are used to detect minor fluctuations which are difficult to observe using conventional three-dimensional rendering tools. Moreover, the ‘unrolling’ provides a platform for correlating multi-modal image data and is expected to find wider application in battery science and engineering to study diverse effects, e.g. electrode degradation or lithium diffusion blocking during battery cycling.
The German high-pressure natural gas transport network consists of thousands of interconnected elements spread over more than 120,000 km of pipelines built during the last 100 years. During the last decade, we have spent many person-years to extract consistent data out of the available sources, both public and private. Based on two case studies, we present some of the challenges we encountered.
Preparing consistent, high-quality data is surprisingly hard, and the effort necessary can hardly be overestimated. It is therefore particularly important to decide which data curation strategy to adopt. What level of precision is necessary? When is it more efficient to work with data that is just sufficiently correct on average?
In the case studies we describe our experiences and the strategies we adopted to deal with the obstacles and to minimize future effort.
Finally, we would like to emphasize that well-compiled data sets, publicly available for research purposes, provide the grounds for building innovative algorithmic solutions to the challenges of the future.
The mixed-integer linear programming (MILP) method has been applied widely to the optimal design of energy supply systems. A hierarchical MILP method has been proposed to solve such optimal design problems efficiently. As one of the strategies to enhance the computation efficiency further, a method of reducing the model by time aggregation has been proposed to search design candidates accurately and efficiently in the relaxed optimal design problem at the upper level. In this paper, the hierarchical MILP method and model reduction by time aggregation are applied to the multiobjective optimal design. In applying the model reduction, periods are clustered either in the order of the time series, based on an operational strategy, or by the k-medoids method. As a case study, the multiobjective optimal design of a gas turbine cogeneration system with a practical configuration is investigated by adopting the annual total cost and primary energy consumption as the objective functions to be minimized simultaneously, and the clustering methods are compared with one another in terms of the computation efficiency. It turns out that model reduction by any of the clustering methods is effective in enhancing the computation efficiency when importance is given to minimizing the first objective function. It also turns out that only model reduction by the k-medoids method is effective, and then only to a very limited extent, when importance is given to minimizing the second objective function.
In designing energy supply systems, designers should heighten the robustness in performance criteria against the uncertainty in energy demands. In this paper, a robust optimal design method using a hierarchical mixed-integer linear programming (MILP) method is proposed to maximize the robustness of energy supply systems under uncertain energy demands based on a mixed-integer linear model. A robust optimal design problem is formulated as a three-level min-max-min MILP one by expressing uncertain energy demands by intervals, evaluating the robustness in a performance criterion based on the minimax regret criterion, and considering relationships among integer design variables, uncertain energy demands, and integer and continuous operation variables. This problem is solved by evaluating upper and lower bounds for the minimum of the maximum regret of the performance criterion repeatedly outside, and evaluating lower and upper bounds for the maximum regret repeatedly inside. Since these different types of optimization problems are difficult to solve even using commercial MILP solvers, they are solved by applying a hierarchical MILP method developed for ordinary optimal design problems with its modifications. In a case study, the proposed approach is applied to the robust optimal design of a cogeneration system. Through the study, its validity and effectiveness are ascertained, and some features of the obtained robust designs are clarified.
This thesis presents a method for interpolating data using a neural network. The data is sparse and perturbed and is used as training data for a small neural network. For severely perturbed data, the network does not manage to find a smooth interpolation. But as the data resembles the solution to the one-dimensional and time-independent heat equation, the weak form of this PDE and subsequently its functional can be written down. If the functional is minimized, a solution to the weak form of the heat equation is found. The functional is now added to the traditional loss function of a neural network, the mean squared error between the network prediction and the given data, in order to smooth out fluctuations and interpolate between distanced grid points. This way, the network minimizes both the mean squared error and the functional, resulting in a smoother curve that can be used to predict u(x) for any grid point x.
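To make the construction concrete, here is a minimal sketch of such a combined loss in PyTorch (illustrative only, not the thesis code; the network size, the weight alpha, and the collocation points are assumptions). It adds a discretized Dirichlet energy of the steady 1D heat equation to the mean squared error:

```python
import torch

# Small illustrative network u_theta(x); architecture is an assumption.
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
)

def loss(x_data, u_data, x_coll, alpha=1e-2):
    # Data term: mean squared error on the sparse, perturbed training points.
    mse = torch.mean((net(x_data) - u_data) ** 2)
    # Functional term: Dirichlet energy 1/2 * integral u'(x)^2 dx of the
    # steady 1D heat equation, approximated over collocation points x_coll.
    x = x_coll.clone().requires_grad_(True)
    du = torch.autograd.grad(net(x).sum(), x, create_graph=True)[0]
    energy = 0.5 * torch.mean(du ** 2)
    # alpha trades off data fidelity against smoothness of the interpolant.
    return mse + alpha * energy
```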
Two essential ingredients of modern mixed-integer programming (MIP) solvers are diving heuristics, which simulate a partial depth-first search in a branch-and-bound search tree, and conflict analysis of infeasible subproblems, which learns valid constraints. So far, these techniques have mostly been studied independently: primal heuristics under the aspect of finding high-quality feasible solutions early during the solving process, and conflict analysis for fathoming nodes of the search tree and improving the dual bound. Here, we combine both concepts in two different ways. First, we develop a diving heuristic that targets the generation of valid conflict constraints from the Farkas dual. We show that in the primal this is equivalent to the optimistic strategy of diving towards the best bound with respect to the objective function. Second, we use information derived from conflict analysis to enhance the search of a diving heuristic akin to classical coefficient diving. The computational performance of both methods is evaluated using an implementation in the open-source MIP solver SCIP. Experiments are carried out on publicly available test sets including MIPLIB 2010 and Cor@l.
Conflict learning plays an important role in solving mixed-integer programs (MIPs) and is implemented in most major MIP solvers. A central step in MIP conflict learning is to aggregate the LP relaxation of an infeasible subproblem into a single globally valid constraint, the dual proof, that proves infeasibility within the local bounds. One way of learning, among others, is to add these constraints to the problem formulation for the remainder of the search.
We suggest not restricting this procedure to infeasible subproblems, but also using global proof constraints from subproblems that are not (yet) infeasible but can be expected to be pruned soon. As a special case, we also consider learning from integer feasible LP solutions. First experiments with this conflict-free learning strategy show promising results on the MIPLIB 2017 benchmark set.
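For intuition, the standard dual proof construction behind this approach can be sketched as follows (textbook form, notation ours):

```latex
% If the local LP  {Ax >= b,  l <= x <= u}  is infeasible, Farkas' lemma
% yields multipliers  y >= 0  whose row aggregate cannot be satisfied
% within the local bounds:
\[
  \max_{\ell \le x \le u} \; \bigl(y^{\top} A\bigr)\, x \;<\; y^{\top} b .
\]
% The single constraint  (y^T A) x >= y^T b  is implied by  Ax >= b  and
% hence globally valid; it is the dual proof that can be kept for the
% remainder of the search.
```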
Conflicting hypotheses about the relationships among the major lineages of aculeate Hymenoptera clearly show the necessity of detailed comparative morphological studies. Using micro-computed tomography and 3D reconstructions, the skeletal musculature of the meso- and metathorax and the first and second abdominal segments in Apoidea is described. Females of Sceliphron destillatorium, Sphex (Fernaldina) lucae (both Sphecidae), and Ampulex compressa (Ampulicidae) were examined. The morphological terminology provided by the Hymenoptera Anatomy Ontology is used. Up to 42 muscles were found. The three species differ in certain numerical and structural aspects. Ampulicidae differs significantly from Sphecidae in the metathorax and the anterior abdomen. The metapleural apodeme and paracoxal ridge are weakly developed in Ampulicidae, which affects some muscular structures. Furthermore, the muscles that insert on the coxae and trochanters are broader and longer in Ampulicidae. A conspicuous characteristic of Sphecidae is the absence of the metaphragma. Overall, we identified four hitherto unrecognized muscles. Our work suggests further investigation of the structures discussed in this paper.
Understanding the pathophysiological processes of cartilage degradation requires adequate model systems to develop therapeutic strategies towards osteoarthritis (OA). Although different in vitro or in vivo models have been described, further comprehensive approaches are needed to study specific disease aspects. This study aimed to combine in vitro and in silico modeling based on a tissue-engineering approach using mesenchymal condensation to mimic cytokine-induced cellular and matrix-related changes during cartilage degradation. Thus, scaffold-free cartilage-like constructs (SFCCs) were produced based on self-organization of mesenchymal stromal cells (mesenchymal condensation) and were (i) characterized regarding their cellular and matrix composition and (ii) treated with interleukin-1β (IL-1β) and tumor necrosis factor α (TNFα) for 3 weeks to simulate OA-related matrix degradation. In addition, an existing mathematical model based on partial differential equations was optimized and transferred to the underlying settings to simulate the distribution of IL-1β, type II collagen degradation and cell number reduction. By combining in vitro and in silico methods, we aim to develop a valid, efficient alternative approach to examine and predict disease progression and the effects of new therapeutics.
We present a transductive learning approach for morphometric osteophyte grading based on geometric deep learning. We formulate the grading task as a semi-supervised node classification problem on a graph embedded in shape space. To account for the high dimensionality and non-Euclidean structure of shape space, we employ a combination of an intrinsic dimension reduction and a graph convolutional neural network. We demonstrate the performance of our derived classifier in comparison to an alternative extrinsic approach.
A new ion mobility (IM) spectrometer, enabling mobility measurements in the pressure range between 5 and 500 mbar and in the reduced field strength range E/N of 5–90 Td, was developed and characterized. Reduced mobility (K0) values were studied under low E/N (constant value) as well as high E/N (deviation from low field K0) for a series of molecular ions in nitrogen. Infrared matrix-assisted laser desorption ionization (IR-MALDI) was used in two configurations: a source working at atmospheric pressure (AP) and, for the first time, an IR-MALDI source working with a liquid (aqueous) matrix at sub-ambient/reduced pressure (RP). The influence of RP on IR-MALDI was examined and new insights into the dispersion process were gained. This enabled the optimization of the IM spectrometer for best analytical performance. While ion desolvation is less efficient at RP, the transport of ions is more efficient, leading to intensity enhancement and an increased number of oligomer ions. When deciding between AP and RP IR-MALDI, a trade-off between intensity and resolving power has to be considered. Here, the low field mobility of peptide ions was first measured and compared with reference values from ESI-IM spectrometry (at AP) as well as collision cross sections obtained from molecular dynamics simulations. The second application was the determination of the reduced mobility of various substituted ammonium ions as a function of E/N in nitrogen. The mobility is constant up to a threshold at high E/N. Beyond this threshold, mobility increases were observed. This behavior can be explained by the loss of hydrated water molecules.
Minimising levelised cost of electricity of bifacial solar panel arrays using Bayesian optimisation
(2020)
A Polyhedral Study of Event-Based Models for the Resource-Constrained Project Scheduling Problem
(2020)
We consider event-based Mixed-Integer Programming (MIP) models for the Resource-Constrained Project Scheduling Problem (RCPSP) that represent an alternative to the common time-indexed model (DDT) of Pritsker et al. (1969) for the case where the underlying time horizon is large or job processing times are subject to huge variations. In contrast to the time-indexed model, the size of event-based models does not depend on the time horizon. For two event-based formulations OOE and SEE of Koné et al. (2011) we present new valid inequalities that dominate the original formulation. Additionally, we introduce a new event-based model: the Interval Event-Based Model (IEE). We deduce linear transformations between all three models that yield the strict domination order IEE > SEE > OOE for their linear programming (LP) relaxations, meaning that IEE has the strongest linear relaxation among the event-based models. We further show that the popular DDT formulation can be retrieved from IEE by certain polyhedral operations, thus giving a unifying view on a complete branch of MIP formulations for the RCPSP. In addition, we analyze the computational performance of all presented models on test instances of the PSPLIB (Kolisch and Sprecher 1997).
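For context, the time-indexed DDT formulation that the event-based models avoid can be sketched as follows (standard textbook form, notation ours): with binaries x_{jt} = 1 iff job j starts at time t,

```latex
\[
  \sum_{t=0}^{T - p_j} x_{jt} = 1 \quad \forall j, \qquad
  \sum_{j} \; \sum_{s = t - p_j + 1}^{t} r_{jk}\, x_{js} \;\le\; R_k
  \quad \forall k,\, t,
\]
% plus precedence constraints; the number of variables grows with the
% time horizon T, which is exactly the dependence the event-based
% models OOE, SEE, and IEE eliminate.
```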
To attain the highest performance of energy supply systems, it is necessary to determine design specifications optimally in consideration of operational strategies corresponding to seasonal and hourly variations in energy demands. Mixed-integer linear programming (MILP) methods have been applied widely to such optimal design problems. A hierarchical MILP method has been proposed to solve the problems very efficiently. In addition, by utilizing features of the hierarchical MILP method, a method of reducing the model by clustering periods based on the optimal operational strategies of equipment has been proposed to search design solution candidates efficiently in the relaxed optimal design problem at the upper level. In this paper, these methods are applied to the multiobjective optimal design of a cogeneration system by considering the annual total cost and primary energy consumption as the objective functions to be minimized. Through a case study, it turns out that the model reduction by operation-based time-period clustering is effective in terms of the computation efficiency when importance is given to the first objective function, while it is not when importance is given to the second objective function.
To attain the highest performance of energy supply systems, it is necessary to determine design specifications optimally in consideration of operational strategies corresponding to seasonal and hourly variations in energy demands. A hierarchical mixed-integer linear programming method has been proposed to solve such an optimal design problem efficiently. In this paper, a method of reducing the model by clustering periods with the k-medoids method is applied to the relaxed optimal design problem at the upper level. Through a case study, it is clarified how effective the proposed method is in enhancing the computation efficiency in a large-scale optimal design problem.
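As a rough illustration of the period clustering used in the two abstracts above, the following is a minimal k-medoids sketch in NumPy (ours; the papers' exact distance measure and period definition differ):

```python
import numpy as np

def k_medoids(X, k, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    # Pairwise Euclidean distances between all period profiles.
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    medoids = rng.choice(len(X), size=k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(D[:, medoids], axis=1)  # assign to nearest medoid
        new = []
        for c in range(k):
            members = np.where(labels == c)[0]
            # Medoid = member minimizing the within-cluster distance sum.
            within = D[np.ix_(members, members)].sum(axis=1)
            new.append(members[np.argmin(within)])
        new = np.array(new)
        if np.array_equal(np.sort(new), np.sort(medoids)):
            break
        medoids = new
    return medoids, labels

# Example: cluster 365 daily profiles (24 hourly demand values each)
# into k = 12 representative periods.
X = np.random.rand(365, 24)
medoids, labels = k_medoids(X, k=12)
```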
Due to the increase in accessibility and robustness of sequencing technology, single cell RNA-seq (scRNA-seq) data has become abundant. The technology has made significant contributions to discovering novel phenotypes and heterogeneities of cells. Recently, there has been a push for using single or multiple scRNA-seq snapshots to infer the underlying gene regulatory networks (GRNs) steering the cells' biological functions. To date, this aspiration remains unrealised.
In this paper, we took a bottom-up approach and curated a stochastic two-gene interaction model capturing the dynamics of a complete system of genes, mRNAs, and proteins. In the model, the regulation was placed upstream from the mRNA on the gene level. We then inferred the underlying regulatory interactions from only the observation of the mRNA population through time.
We could detect signatures of the regulation by combining information of the mean, covariance, and the skewness of the mRNA counts through time. We also saw that reordering the observations using pseudo-time did not conserve the covariance and skewness of the true time course. The underlying GRN could be captured consistently when we fitted the moments up to degree three; however, this required a computationally expensive non-linear least squares minimisation solver.
There are still major numerical challenges to overcome for inference of GRNs from scRNA-seq data. These challenges entail finding informative summary statistics of the data which capture the critical regulatory information. Furthermore, the statistics have to evolve linearly or piece-wise linearly through time to achieve computational feasibility and scalability.
A Physarum-Inspired Algorithm for Minimum-Cost Relay Node Placement in Wireless Sensor Networks
(2020)
We present an extension of Taylor's theorem for the piecewise polynomial expansion of non-smooth evaluation procedures involving absolute value operations. Evaluation procedures are computer programs of mathematical functions in closed-form expression that allow a different treatment of smooth operations and of calls to the absolute value function. The well-known classical theorem of Taylor defines polynomial approximations of sufficiently smooth functions and is widely used for the derivation and analysis of numerical integrators for systems of ordinary differential or differential-algebraic equations, for the construction of solvers for continuous non-linear optimization of finite-dimensional objective functions, and for root solving of non-linear systems of equations. The long-term goal is the stabilization and acceleration of already known methods and the derivation of new methods by incorporating piecewise polynomial Taylor expansions. The proof provided herein of the higher-order approximation quality of the new generalized expansions is constructive and allows efficiently designed algorithms for the execution and computation of the piecewise polynomial expansions. As a demonstration towards the ultimate goal we derive a prototype of a k-step method on the basis of polynomial interpolation and the proposed generalized expansions.
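A toy example (ours, not from the paper) of the underlying idea: the classical Taylor expansion fails for f(x) = x + |x| at x_0 = 0, whereas keeping the absolute value unexpanded gives the exact piecewise linear model

```latex
\[
  f(x_0 + \Delta x) \;=\; f(x_0) \;+\; \Delta x
  \;+\; \lvert x_0 + \Delta x \rvert \;-\; \lvert x_0 \rvert ,
\]
% i.e. the smooth part is expanded as usual while each call to the
% absolute value stays explicit; for general evaluation procedures the
% same idea yields piecewise polynomial expansions of higher order.
```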
Friction in liquids arises from conservative forces between molecules and atoms. Although the hydrodynamics at the nanoscale is subject of intense research and despite the enormous interest in the non-Markovian dynamics of single molecules and solutes, the onset of friction from the atomistic scale so far could not be demonstrated. Here, we fill this gap based on frequency-resolved friction data from high-precision simulations of three prototypical liquids, including water. Combining with theory, we show that friction in liquids emerges abruptly at a characteristic frequency, beyond which viscous liquids appear as non-dissipative, elastic solids. Concomitantly, the molecules experience Brownian forces that display persistent correlations. A critical test of the generalised Stokes–Einstein relation, mapping the friction of single molecules to the visco-elastic response of the macroscopic sample, disproves the relation for Newtonian fluids, but substantiates it exemplarily for water and a moderately supercooled liquid. The employed approach is suitable to yield insights into vitrification mechanisms and the intriguing mechanical properties of soft materials.
We investigate the directional locking effects that arise when a monolayer of paramagnetic colloidal particles is driven across a triangular lattice of magnetic bubbles. We use an external rotating magnetic field to generate a two-dimensional traveling wave ratchet forcing the transport of particles along a direction that intersects two crystallographic axes of the lattice. We find that, while single particles show no preferred direction, collective effects induce transversal current and directional locking at high density via a spontaneous symmetry breaking. The colloidal current may be polarized via an additional bias field that makes one transport direction energetically preferred.
An adjoint-based approach for synthesizing complex sound sources by discrete, grid-based monopoles in finite-difference time-domain simulations is presented. Previous work [Stein et al., 2019a, J. Acoust. Soc. Am. 146(3), 1774–1785] demonstrated that the approach makes it possible to consider unsteady and non-uniform ambient conditions such as wind flow and thermal gradients, in contrast to standard methods of numerical sound field simulation. In this work, it is shown that not only ideal monopoles but also realistic sound sources with complex directivity characteristics can be synthesized. In detail, an oscillating circular piston and a real 2-way near-field monitor are modeled. The required number of monopoles is analyzed in terms of the SPL deviation between the directivity of the original and the synthesized source. Since the computational effort is independent of the number of monopoles used for the synthesis, more complex sources can also be reproduced by increasing the number of monopoles. In contrast to classical least-squares problem solvers, this does not increase the computational effort, which makes the method attractive for predicting the effect of sound reinforcement systems with highly directional sources under difficult acoustic boundary conditions.
We present the tamper-resistant broadcast abstraction of the Bitcoin blockchain and show how it can be used to implement tamper-resistant replicated state machines. The tamper-resistant broadcast abstraction provides functionality to broadcast, deliver, and verify messages. The tamper-resistant property ensures 1) probabilistic protection against Byzantine behaviour, and 2) probabilistic verifiability that no tampering has occurred.
In this work, we study various tamper-resistant broadcast protocols for different environment models (public/permissioned, bounded/unbounded, Byzantine fault-tolerant (BFT)/non-BFT, native/non-native) as well as different properties, such as ordering guarantees (FIFO order, causal order, total order) and delivery guarantees (validity, agreement, uniform). This way, we can match the protocol to the required environment model and consistency model of the replicated state machine.
We implemented the tamper-resistant broadcast abstraction as a proof of concept. The results show that the implemented tamper-resistant broadcast protocols can compete on throughput and latency with other state-of-the-art broadcast technologies. Use cases, such as a tamper-resistant file system, supply chain tracking, and a timestamp server highlight the expressiveness of the abstraction.
In conclusion, the tamper-resistant broadcast protocols provide a powerful interface, with clear semantics and tunable settings, enabling the design of tamper-resistant applications.
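A minimal sketch of the abstraction's interface (names and signatures are ours, not the thesis code) might look as follows:

```python
from abc import ABC, abstractmethod

class TamperResistantBroadcast(ABC):
    """Hypothetical interface for the tamper-resistant broadcast
    abstraction: broadcast, deliver, and verify messages."""

    @abstractmethod
    def broadcast(self, message: bytes) -> None:
        """Disseminate a message to all participants."""

    @abstractmethod
    def deliver(self) -> bytes:
        """Return the next message according to the configured ordering
        guarantee (FIFO, causal, or total order)."""

    @abstractmethod
    def verify(self, message: bytes) -> bool:
        """Probabilistically verify that the message was delivered
        untampered, e.g. against its anchoring in the blockchain."""
```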
Fast domain propagation of linear constraints has become a crucial component of today's best algorithms and solvers for mixed-integer programming and pseudo-Boolean optimization to achieve peak solving performance. Irregularities in the form of dynamic algorithmic behaviour, dependency structures, and sparsity patterns in the input data make efficient implementations of domain propagation on GPUs and, more generally, on parallel architectures challenging. This is one of the main reasons why domain propagation in state-of-the-art solvers is single-threaded. In this paper, we present a new algorithm for domain propagation which (a) avoids these problems and allows for an efficient implementation on GPUs, and (b) is capable of running propagation rounds entirely on the GPU, without any need for synchronization or communication with the CPU. We present extensive computational results which demonstrate the effectiveness of our approach and show that ample speedups are possible on practically relevant problems: on state-of-the-art GPUs, our geometric mean speedup for reasonably large instances is around 10x to 20x and can be as high as 195x on favorably large instances.
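The core of such a propagation round is the classical activity-based bound tightening rule for a single linear constraint, sketched below in NumPy (the textbook sequential rule, not the paper's GPU kernel):

```python
import numpy as np

def propagate(a, b, lb, ub):
    """Tighten variable bounds for one constraint  a^T x <= b  over the
    box [lb, ub] (sequential textbook rule; rounding for integer
    variables omitted)."""
    # Minimal activity: each term contributes its smallest possible value.
    act_min = np.where(a > 0, a * lb, a * ub).sum()
    for j in range(len(a)):
        if a[j] == 0:
            continue
        # Residual minimal activity of all terms except variable j.
        res = act_min - (a[j] * lb[j] if a[j] > 0 else a[j] * ub[j])
        if a[j] > 0:
            ub[j] = min(ub[j], (b - res) / a[j])  # tighten upper bound
        else:
            lb[j] = max(lb[j], (b - res) / a[j])  # tighten lower bound
    return lb, ub
```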
Massive Parallelization for Finding Shortest Lattice Vectors Based on Ubiquity Generator Framework
(2020)
Lattice-based cryptography has received attention as a next-generation encryption technique, because it is believed to be secure against attacks by classical and quantum computers. Its essential security depends on the hardness of solving the shortest vector problem (SVP). In cryptography, to determine security levels, it is becoming significantly more important to estimate the hardness of the SVP by high-performance computing. In this study, we develop the world's first distributed and asynchronous parallel SVP solver, the MAssively Parallel solver for SVP (MAP-SVP). It can parallelize algorithms for solving the SVP by applying the Ubiquity Generator framework, a generic framework for branch-and-bound algorithms. The MAP-SVP is suitable for massive-scale parallelization, owing to its small memory footprint, low communication overhead, and rapid checkpoint and restart mechanisms. We demonstrate the performance and scalability of the MAP-SVP by using up to 100,032 cores to solve instances of the Darmstadt SVP Challenge.
Mixed-integer programming (MIP) is arguably among the hardest classes of optimization problems. This paper describes how we solved 21 previously unsolved MIP instances from the MIPLIB benchmark sets. To achieve these results we used an enhanced version of ParaSCIP, setting a new record for the largest-scale MIP computation: up to 80,000 cores in parallel on the Titan supercomputer. In this paper, we describe the basic parallelization mechanism of ParaSCIP, improvements of the dynamic load balancing, and novel techniques to exploit the power of parallelization for MIP solving. We give a detailed overview of computing times and statistics for solving open MIPLIB instances.
UG is a generic framework to parallelize branch-and-bound based solvers (e.g., MIP, MINLP, ExactIP) in a distributed or shared memory computing environment. It exploits the powerful performance of state-of-the-art "base solvers", such as SCIP, CPLEX, etc. without the need for base solver parallelization.
The UG framework, ParaSCIP (ug[SCIP,MPI]), and FiberSCIP (ug[SCIP,Pthreads]) are available as a beta version. For MIP solving, ParaSCIP and FiberSCIP are well debugged and should be stable. For MINLP solving, they are relatively stable, but not as thoroughly debugged. This release can handle branch-and-cut approaches in which subproblems are defined by variable bounds and also by constraints for ug[SCIP,*] (ParaSCIP and FiberSCIP). Therefore, problem classes other than MIP or MINLP can be handled, but they have not been tested yet.
v0.9.1: Update orbitope cip files.
Recently, Kronqvist et al. (J Global Optim 64(2):249–272, 2016) rediscovered the supporting hyperplane algorithm of Veinott (Oper Res 15(1):147–152, 1967) and demonstrated its computational benefits for solving convex mixed integer nonlinear programs. In this paper we derive the algorithm from a geometric point of view. This enables us to show that the supporting hyperplane algorithm is equivalent to Kelley’s cutting plane algorithm (J Soc Ind Appl Math 8(4):703–712, 1960) applied to a particular reformulation of the problem. As a result, we extend the applicability of the supporting hyperplane algorithm to convex problems represented by a class of general, not necessarily convex nor differentiable, functions.
Maximal Quadratic-Free Sets
(2020)
The intersection cut paradigm is a powerful framework that facilitates the generation of valid linear inequalities, or cutting planes, for a potentially complex set S. The key ingredients in this construction are a simplicial conic relaxation of S and an S-free set: a convex zone whose interior does not intersect S. Ideally, such an S-free set would be inclusion-wise maximal, as it would generate a deeper cutting plane. However, maximality can be a challenging goal in general. In this work, we show how to construct maximal S-free sets when S is defined as a general quadratic inequality. Our maximal S-free sets are such that efficient separation of a vertex in LP-based approaches to quadratically constrained problems is guaranteed. To the best of our knowledge, this work is the first to provide maximal quadratic-free sets.
This master's thesis investigates the use and behaviour of a mixed finite element formulation for the simulation of garments.
The garment is modelled as an isotropic shell and is related to its mid-surface by energetic degeneration. On this basis, an energy functional is constructed that contains the deformation and the mid-surface vector as degrees of freedom. It is then shown why this problem does not correspond to a saddle point problem but to a non-convex energy minimization.
The energy minimization is implemented in Kaskade 7.4, the FE framework developed at ZIB. A geometrically linear problem and several geometrically non-linear problems are examined, and for a selected non-linear example a comparison is made with an existing implementation based on Morley elements.
Further evaluations include an analysis of the quantitative and qualitative results, the solution method used, the behaviour of the system energy, and the CPU time required.
Network and Storage
(2020)
Natural gas is considered by many to be the most important energy source for the future. Strategic problems for energy commodities mainly concern natural gas and deal with determining an “optimal” gas pipeline design, which includes a number of related subproblems, such as the location of compression stations and gas storage facilities, as well as compression station design and optimal operation.
While graph covering is a fundamental and well-studied problem, the field lacks a broad and unified literature review. The holistic overview of graph covering given in this article attempts to close this gap. The focus lies on a characterization and classification of the different problems discussed in the literature. In addition, notable results and common approaches are included. Whenever appropriate, our review extends to the corresponding partitioning problems.
A prerequisite for many analysis tasks in modern comparative biology is the segmentation of 3-dimensional (3D) images of the specimens being investigated (e.g. from microCT data). Depending on the specific imaging technique that was used to acquire the images and on the image resolution, different segmentation tools will be required. While some standard tools exist that can often be applied for specific subtasks, building whole processing pipelines solely from standard tools is often difficult. Some tasks may even necessitate the implementation of manual interaction tools to achieve a quality that is sufficient for the subsequent analysis. In this work, we present a pipeline of segmentation tools that can be used for the semi-automatic segmentation and quantitative analysis of voids in tissue (i.e. internal structural porosity). We use this pipeline to analyze lacuno-canalicular networks in stingray tesserae from 3D images acquired with synchrotron microCT.
* The first step of this processing pipeline, the segmentation of the tesserae, was performed using standard marker-based watershed segmentation. The efficient processing of the next two steps, that is, the segmentation of all lacunae spaces belonging to a specific tessera and the separation of these spaces into individual lacunae required modern, recently developed tools.
* For proofreading, we developed a graph-based interactive method that allowed us to quickly split lacunae that were accidentally merged, and to merge lacunae that were wrongly split.
* Finally, the tesserae and their corresponding lacunae were subdivided into anatomical regions of interest (structural wedges) using a semi-manual approach.
The images of D’Arcy Wentworth Thompson’s book “On Growth and Form” attained iconic status and became influential for biometrics and other mathematical approaches to organismic form. In particular, this is true of those in the chapter on the theory of transformation, which has even had an impact on art and the humanities. Based on his approach, Thompson formulated far-reaching conclusions with a partly anti-Darwinian stance. Here, we use the example of Thompson’s transformation of crab carapaces to test to what degree the transformations of grids, landmarks, and shapes result in congruent images. For comparison, we applied the same series of tests to digitized carapaces of real crabs. Both approaches show similar results. Only the simple transformations show a reasonable degree of congruence. In particular, the transformations to majoid spider crabs reveal a complicated transformation of grids with partly crossing lines. By contrast, the carapace of the lithodid species is created relatively easily, although it is not a brachyuran but evolved a spider crab-like shape convergently from a hermit crab ancestor.
Automatic recognition of surgical phases is an important component for developing an intra-operative context-aware system. Prior work in this area focuses on recognizing short-term tool usage patterns within surgical phases. However, the difference between intra- and inter-phase tool usage patterns has not been investigated for automatic phase recognition. We developed a Recurrent Neural Network (RNN), in particular a state-preserving Long Short-Term Memory (LSTM) architecture, to utilize the long-term evolution of tool usage within complete surgical procedures. For fully automatic tool presence detection from surgical video frames, a Convolutional Neural Network (CNN) based architecture, ZIBNet, is employed. Our proposed approach outperformed EndoNet by 8.1% in overall precision for phase detection tasks and 12.5% in meanAP for tool recognition tasks.
Motivation: The ever-rising volume of patients, the high maintenance cost of operating rooms and the time-consuming analysis of surgical skills are fundamental problems that hamper the practical training of the next generation of surgeons. Hospitals prefer to keep senior surgeons busy in real operations rather than training young surgeons, for obvious economic reasons. One fundamental need in surgical training is to reduce the time needed by the senior surgeon to review the endoscopic procedures performed by the young surgeon while minimizing the subjective bias in evaluation. The unprecedented performance of deep learning ushers in a new age of data-driven automatic analysis of surgical skills.
Method: Deep learning is capable of efficiently analyzing thousands of hours of laparoscopic video footage to provide an objective assessment of surgical skills. However, the traditional end-to-end setting of deep learning (video in, skill assessment out) is not explainable. Our strategy is to utilize the surgical process modeling framework to divide the surgical process into understandable components. This provides the opportunity to employ deep learning for superior yet automatic detection and evaluation of several aspects of laparoscopic cholecystectomy such as surgical tool and phase detection.
We employ ZIBNet for the detection of surgical tool presence. ZIBNet employs pre-processing based on tool usage imbalance, a transfer-learned 50-layer residual network (ResNet-50), and temporal smoothing. To encode the temporal evolution of tool usage (over the entire video sequence) that relates to the surgical phases, Long Short-Term Memory (LSTM) units are employed to capture long-term dependencies.
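A condensed sketch of this two-stage pipeline in PyTorch (ours, for illustration; layer sizes and heads are assumptions; Cholec80 has 7 tool and 7 phase classes):

```python
import torch
import torchvision

# Transfer-learned ResNet-50 scores per-frame tool presence.
backbone = torchvision.models.resnet50(weights="IMAGENET1K_V1")
backbone.fc = torch.nn.Linear(backbone.fc.in_features, 7)  # 7 tool classes

# An LSTM consumes the per-frame tool scores over the whole video.
lstm = torch.nn.LSTM(input_size=7, hidden_size=128, batch_first=True)
phase_head = torch.nn.Linear(128, 7)  # 7 surgical phases in Cholec80

def predict_phases(frames):                 # frames: (T, 3, H, W) video tensor
    tool_logits = backbone(frames)          # (T, 7) per-frame tool scores
    h, _ = lstm(tool_logits.unsqueeze(0))   # (1, T, 128) temporal features
    return phase_head(h.squeeze(0))         # (T, 7) per-frame phase logits
```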
Dataset: We used the Cholec80 dataset, which consists of 80 videos of laparoscopic cholecystectomy performed by 13 surgeons, divided equally into training and testing sets. In these videos, up to three different tools (among 7 types of tools) can be present in a frame.
Results: The mean average precision of the detection of all tools is 93.5, ranging between 86.8 and 99.3, a significant improvement (p < 0.01) over the previous state of the art. We observed that less frequent tools like Scissors, Irrigator, Specimen Bag, etc. are more related to phase transitions. The overall precision (recall) of the detection of all surgical phases is 79.6 (81.3).
Conclusion: While this is not the end goal for surgical skill analysis, the development of such a technological platform is essential toward a data-driven objective understanding of surgical skills. In future, we plan to investigate surgeon-in-the-loop analysis and feedback for surgical skill analysis.
Surgical tool segmentation in endoscopic videos is an important component of computer-assisted intervention systems. The recent success of image-based solutions using fully-supervised deep learning approaches can be attributed to the collection of large labeled datasets. However, the annotation of a large dataset of real videos can be prohibitively expensive and time-consuming. Computer simulations could alleviate the manual labeling problem; however, models trained on simulated data do not generalize to real data. This work proposes a consistency-based framework for joint learning of simulated and real (unlabeled) endoscopic data to bridge this performance generalization issue. Empirical results on two datasets (15 videos of the Cholec80 and the EndoVis'15 dataset) highlight the effectiveness of the proposed Endo-Sim2Real method for instrument segmentation. We compare the segmentation of the proposed approach with state-of-the-art solutions and show that our method improves segmentation both in terms of quality and quantity.
Urban transportation systems are subject to a high level of variation and fluctuation in demand over the day. When this variation and fluctuation are observed in both time and space, it is crucial to develop line plans that are responsive to demand. A multi-period line planning approach that considers a changing demand during the planning horizon is proposed. If such systems are also subject to limitations of resources, a dynamic transfer of resources from one line to another throughout the planning horizon should also be considered. A mathematical modelling framework is developed to solve the line planning problem with a cost-oriented approach considering transfer of resources during a finite length planning horizon of multiple periods. We use real-life public transportation network data for our computational results. We analyze whether or not multi-period solutions outperform single period solutions in terms of feasibility and relevant costs. The importance of demand variation on multi-period solutions is investigated. We evaluate the impact of resource transfer constraints on the effectiveness of solutions. We also study the effect of period lengths along with the problem parameters that are significant for and sensitive to the optimality of solutions.
We consider the problem of verifying linear properties of neural networks. Despite their success in many classification and prediction tasks, neural networks may return unexpected results for certain inputs. This is highly problematic with respect to the application of neural networks for safety-critical tasks, e.g. in autonomous driving. We provide an overview of algorithmic approaches that aim to provide formal guarantees on the behavior of neural networks. Moreover, we present new theoretical results with respect to the approximation of ReLU neural networks. In addition, we implement a solver for the verification of ReLU neural networks which combines mixed-integer programming (MIP) with specialized branching and approximation techniques. To evaluate its performance, we conduct an extensive computational study. For that we use test instances based on the ACAS Xu system and the MNIST handwritten digit data set. Our solver is publicly available and able to solve the verification problem for instances which do not have independent bounds for each input neuron.
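For reference, the standard big-M MIP encoding of a single ReLU unit y = max(0, x) with known bounds L ≤ x ≤ U (textbook construction; the solver's exact formulation may differ) reads:

```latex
\[
  y \;\ge\; x, \qquad y \;\ge\; 0, \qquad
  y \;\le\; x - L\,(1 - z), \qquad y \;\le\; U z, \qquad z \in \{0,1\}.
\]
% z = 1 selects the active phase (y = x), z = 0 the inactive one (y = 0);
% the tighter the bounds L and U, the tighter the LP relaxation, which
% is why bound computation is central to verification performance.
```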
Markov state models (MSMs) are the gold standard for modelling biomolecular dynamics, as they enable the identification and analysis of metastable states. Robust Perron cluster cluster analysis (PCCA+) is a widely used spectral clustering algorithm for clustering high-dimensional MSMs. Since PCCA+ is restricted to reversible processes, it is generalized to Generalized PCCA (G-PCCA), which is suited to resolving non-reversible processes. Here, Bernhard Reuter uses G-PCCA to investigate the non-thermal effects of microwaves on protein dynamics, performing and modelling non-equilibrium molecular dynamics simulations of the amyloid-β (1–40) peptide.
Molecular simulations of ligand–receptor interactions are a computational challenge, especially when their association- (‘on’-rate) and dissociation- (‘off’-rate) mechanisms are working on vastly differing timescales. One way of tackling this multiscale problem is to compute the free-energy landscapes, where molecular dynamics (MD) trajectories are used to only produce certain statistical ensembles. The approach allows for deriving the transition rates between energy states as a function of the height of the activation-energy barriers. In this article, we derive the association rates of the opioids fentanyl and N-(3-fluoro-1-phenethylpiperidin-4-yl)-N-phenyl propionamide (NFEPP) in a μ-opioid receptor by combining the free-energy landscape approach with the square-root-approximation method (SQRA), which is a particularly robust version of Markov modelling. The novelty of this work is that we derive the association rates as a function of the pH level using only an ensemble of MD simulations. We also verify our MD-derived insights by reproducing the in vitro study performed by the Stein Lab.
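In the square-root approximation referred to here, transition rates follow directly from the free-energy landscape; in its standard form (prefactors simplified, notation ours):

```latex
\[
  Q_{ij} \;=\; \Phi\,\sqrt{\frac{\pi_j}{\pi_i}}
         \;=\; \Phi\, e^{-\beta\,(F_j - F_i)/2},
  \qquad \pi_i \propto e^{-\beta F_i},
\]
% for adjacent cells i, j of the discretized landscape with free
% energies F_i, where \Phi collects geometric and diffusive prefactors.
% This is how MD-derived free-energy landscapes translate into the
% association rates discussed above.
```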
Molecular simulations of ligand-receptor interactions are a computational challenge, especially when their association (‘on’-rate) and dissociation (‘off’-rate) mechanisms work on vastly differing timescales. In addition, the timescale of the simulations themselves is, in practice, orders of magnitude smaller than that of the mechanisms, which further adds to the complexity of observing these mechanisms and of drawing meaningful and significant biological insights from the simulation.
One way of tackling this multiscale problem is to compute the free-energy landscapes, where molecular dynamics (MD) trajectories are used to only produce certain statistical ensembles. The approach allows for deriving the transition rates between energy states as a function of the height of the activation-energy barriers. In this article, we derive the association rates of the opioids fentanyl and N-(3-fluoro-1-phenethylpiperidin-4-yl)-N-phenyl propionamide (NFEPP) in a μ-opioid receptor by combining the free-energy landscape approach with the square-root-approximation method (SQRA), which is a particularly robust version of Markov modelling. The novelty of this work is that we derive the association rates as a function of the pH level using only an ensemble of MD simulations. We also verify our MD-derived insights by reproducing the in vitro study performed by the Stein Lab, who investigated the influence of pH on the inhibitory constant of fentanyl and NFEPP (Spahn et al. 2017).
MD simulations are far more accessible and cost-effective than in vitro and in vivo studies. Especially in the context of the current opioid crisis, MD simulations can aid in unravelling molecular functionality and assist in clinical decision-making; the approaches presented in this paper are a pertinent step forward in this direction.
The problem of determining the rate of rare events in dynamical systems is well known but still difficult to solve. Recent attempts to overcome this problem exploit the fact that dynamical systems can be represented by a linear operator, such as the Koopman operator. Mathematically, the rare event problem comes down to the difficulty of finding invariant subspaces of these Koopman operators K. In this article, we describe a method to learn basis functions of invariant subspaces using an artificial neural network.
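A minimal EDMD-style sketch (ours, not the paper's code) of what learning such basis functions can look like: a network ψ maps states to d basis functions, and a matrix K is fitted so that ψ evolves approximately linearly along trajectories:

```python
import torch

d = 10  # dimension of the learned subspace (an assumption)
psi = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Tanh(), torch.nn.Linear(64, d)
)
K = torch.nn.Parameter(torch.eye(d))  # finite-dimensional Koopman matrix

def invariance_loss(x_t, x_next):
    # Penalize deviation from linear evolution psi(x_{t+1}) = K psi(x_t):
    # zero loss means span(psi) is an invariant subspace of the operator.
    # (Real implementations add reconstruction or orthogonality terms to
    # avoid the trivial solution psi = 0.)
    return torch.mean((psi(x_next) - psi(x_t) @ K.T) ** 2)
```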
In this paper, we introduce the Maximum Diversity Assortment Selection Problem (MADASS), a generalization of the 2-dimensional Cutting Stock Problem (2CSP). Given a set of rectangles and a rectangular container, the goal of 2CSP is to determine a subset of rectangles that can be placed in the container without overlapping, i.e., a feasible assortment, such that a maximum area is covered. In MADASS, we need to determine a set of feasible assortments, each of them covering a certain minimum threshold of the container, such that the diversity among them is maximized. Here, diversity is defined as the minimum or average normalized Hamming distance of all assortment pairs. The MADASS problem was used in the 11th AIMMS-MOPTA Competition in 2019; the methods and computational results described in this article won the contest.
In the following, we give a definition of the problem, introduce a mathematical model and solution approaches, determine upper bounds on the diversity, and conclude with computational experiments conducted on test instances derived from the 2CSP literature.
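Written out (our notation), with each of the m selected assortments p encoded as a binary vector a^p over the n rectangles, the two diversity measures from the abstract are:

```latex
\[
  \mathrm{div}_{\min} \;=\; \min_{p < q} \; \frac{1}{n}
  \sum_{i=1}^{n} \bigl|\, a^{p}_{i} - a^{q}_{i} \,\bigr|,
  \qquad
  \mathrm{div}_{\mathrm{avg}} \;=\; \binom{m}{2}^{-1} \sum_{p < q}
  \frac{1}{n} \sum_{i=1}^{n} \bigl|\, a^{p}_{i} - a^{q}_{i} \,\bigr|,
\]
% i.e. the minimum and the average normalized Hamming distance over all
% pairs of selected assortments.
```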
This paper studies the empirical efficacy and benefits of using projection-free first-order methods in the form of Conditional Gradients, a.k.a. Frank-Wolfe methods, for training Neural Networks with constrained parameters. We draw comparisons both to current state-of-the-art stochastic Gradient Descent methods as well as across different variants of stochastic Conditional Gradients. In particular, we show the general feasibility of training Neural Networks whose parameters are constrained by a convex feasible region using Frank-Wolfe algorithms and compare different stochastic variants. We then show that, by choosing an appropriate region, one can achieve performance exceeding that of unconstrained stochastic Gradient Descent and matching state-of-the-art results relying on L2-regularization. Lastly, we also demonstrate that, besides impacting performance, the particular choice of constraints can have a drastic impact on the learned representations.
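As an illustration, a single stochastic Frank-Wolfe step for parameters constrained to an L2 ball (one simple choice of feasible region; the paper compares several) can be sketched as:

```python
import torch

def frank_wolfe_step(params, tau, gamma):
    """One projection-free update: move each parameter tensor toward the
    vertex of its L2 ball of radius tau that minimizes the linearized loss."""
    for p in params:
        g = p.grad
        # Linear minimization oracle over {v : ||v|| <= tau}:
        # argmin_v <g, v> = -tau * g / ||g||.
        v = -tau * g / (g.norm() + 1e-12)
        # Convex combination keeps the iterate feasible (no projection).
        p.data.mul_(1 - gamma).add_(gamma * v)
```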
We present an automated method for extrapolating missing regions in label data of the skull in an anatomically plausible manner. The ultimate goal is to design patient-specific cranial implants for correcting large, arbitrarily shaped defects of the skull that can, for example, result from head trauma. Our approach utilizes a 3D statistical shape model (SSM) of the skull and a 2D generative adversarial network (GAN) that is trained in an unsupervised fashion from samples of healthy patients alone. By fitting the SSM to given input labels containing the skull defect, a first approximation of the healthy state of the patient is obtained. The GAN is then applied to further correct and smooth the output of the SSM in an anatomically plausible manner. Finally, the defect region is extracted using morphological operations and subtraction between the extrapolated healthy state of the patient and the defective input labels. The method is trained and evaluated on data from the MICCAI 2020 AutoImplant challenge. It produces state-of-the-art results on regularly shaped cut-outs that were present in the training and testing data of the challenge. Furthermore, due to the unsupervised nature of the approach, the method generalizes well to previously unseen defects of varying shapes that were only present in the hidden test dataset.
Growing demand, distributed generation such as renewable energy sources (RES), and the increasing role of storage systems to mitigate the volatility of RES on the medium-voltage level push existing distribution grids to their limits. Therefore, the necessary network expansion needs to be evaluated to guarantee a safe and reliable electricity supply in the future, taking these challenges into account. This problem is formulated as an optimal power flow (OPF) problem which combines network expansion, volatile generation and storage systems, minimizing network expansion and generation costs. As storage systems introduce a temporal coupling into the system, a multiperiod OPF problem is needed and analysed in this thesis. To reduce complexity, the network expansion problem is represented in a continuous nonlinear programming formulation by using fundamental properties of electrical engineering. This formulation is validated successfully against a common mixed-integer programming approach on a 30- and a 57-bus network with respect to solution and computing time. As the OPF problem is, in general, a nonconvex, nonlinear problem and thus hard to solve, convex relaxations of the power flow equations have gained increasing interest. Sufficient conditions are presented which guarantee exactness of a second-order cone (SOC) relaxation of an operational OPF in radial networks. In this thesis, these conditions are extended to the network expansion planning problem. Additionally, nonconvexities introduced by the choice of network expansion variables are relaxed using McCormick envelopes. These relaxations are then applied to the multiperiod OPF and compared to the original problem on a 30- and a 57-bus network. In particular, the computation time is decreased by up to two orders of magnitude by the SOC relaxation, while it provides either an exact solution or a lower bound on the original problem. Finally, a sensitivity study is performed on the weights of network expansion costs, showing a strong dependency of both the performed expansion and the solution time on the chosen weights.
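The SOC relaxation referred to above is, in its standard branch-flow form (notation ours): for a line (i, j) with squared voltage magnitude v_i, squared current ℓ_ij, and power flow (P_ij, Q_ij), the nonconvex coupling P_ij² + Q_ij² = v_i ℓ_ij is relaxed to

```latex
\[
  P_{ij}^2 + Q_{ij}^2 \;\le\; v_i\, \ell_{ij},
\]
% a second-order cone constraint.  In radial networks this relaxation
% is exact under suitable conditions -- the conditions the thesis
% extends from operational OPF to network expansion planning.
```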
This is the documentation on current results of a research project jointly conducted by Stiftung Deutsche Kinemathek (SDK) and Zuse Institute Berlin (ZIB). In this project, we are working on a practical yet sustainable archiving solution for audiovisual material.
In the course of the project, two major obstacles were identified: 1) Metadata is collected according to standards established in the community, but these standards lack a prescribed serialisation format. 2) The storage size of audiovisual material and the time scales of production processes often make it impractical to defer submission for archival storage until all components have arrived and can be processed in one go.
It is well understood that Bayesian decision theory and average case analysis are essentially identical. However, if one is interested in performing uncertainty quantification for a numerical task, it can be argued that the decision-theoretic framework is neither appropriate nor sufficient. To this end, we consider an alternative optimality criterion from Bayesian experimental design and study its implied optimal information in the numerical context. This information is demonstrated to differ, in general, from the information that would be used in an average-case-optimal numerical method. The explicit connection to Bayesian experimental design suggests several distinct regimes in which optimal probabilistic numerical methods can be developed.
We analytically determine Jacobi fields and parallel transports and compute geodesic regression in Kendall’s shape space. Using the derived expressions, we can fully leverage the geometry via Riemannian optimization and thereby reduce the computational expense by several orders of magnitude over common, nonlinear constrained approaches. The methodology is demonstrated by performing a longitudinal statistical analysis of epidemiological shape data. As an example application we have chosen 3D shapes of knee bones, reconstructed from image data of the Osteoarthritis Initiative (OAI). Comparing subject groups with incident and developing osteoarthritis versus normal controls, we find clear differences in the temporal development of femur shapes. This paves the way for early prediction of incident knee osteoarthritis, using geometry data alone.
One of the most fundamental ingredients in mixed-integer nonlinear programming solvers is the well-known McCormick relaxation for a product of two variables x and y over a box-constrained domain. The starting point of this paper is the fact that the convex hull of the graph of xy can be much tighter when computed over a strict, non-rectangular subset of the box. In order to exploit this in practice, we propose to compute valid linear inequalities for the projection of the feasible region onto the x-y-space by solving a sequence of linear programs akin to optimization-based bound tightening. These valid inequalities allow us to employ results from the literature to strengthen the classical McCormick relaxation. As a consequence, we obtain a stronger convexification procedure that exploits problem structure and can benefit from supplementary information obtained during the branch-and-bound algorithm, such as an objective cutoff. We complement this with a new bound tightening procedure that efficiently computes the best possible bounds for x, y, and xy over the available projections. Our computational evaluation using the academic solver SCIP shows that the proposed methods are applicable to a large portion of the public test library MINLPLib and help to improve performance significantly.
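For reference, the classical McCormick inequalities for w = xy over the box [x_l, x_u] × [y_l, y_u] are:

```latex
\[
\begin{aligned}
  w &\ge x_l\, y + x\, y_l - x_l\, y_l, &\qquad
  w &\ge x_u\, y + x\, y_u - x_u\, y_u,\\
  w &\le x_u\, y + x\, y_l - x_u\, y_l, &\qquad
  w &\le x_l\, y + x\, y_u - x_l\, y_u.
\end{aligned}
\]
% Over the full box these four planes describe the convex hull of the
% graph of xy; the paper tightens them by working over non-rectangular
% projections of the feasible region.
```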
The most important ingredient for solving mixed-integer nonlinear programs (MINLPs) to global epsilon-optimality with spatial branch-and-bound is a tight, computationally tractable relaxation. Due to both theoretical and practical considerations, relaxations of MINLPs are usually required to be convex. Nonetheless, current optimization solvers can often successfully handle a moderate presence of nonconvexities, which opens the door to the use of potentially tighter nonconvex relaxations. In this work, we exploit this fact and make use of a nonconvex relaxation obtained via aggregation of constraints: a surrogate relaxation. These relaxations were actively studied for linear integer programs in the 70s and 80s, but they have been scarcely considered since. We revisit these relaxations in an MINLP setting and show the computational benefits and challenges they can have. Additionally, we study a generalization of such relaxation that allows for multiple aggregations simultaneously and present the first algorithm that is capable of computing the best set of aggregations. We propose a multitude of computational enhancements for improving its practical performance and evaluate the algorithm's ability to generate strong dual bounds through extensive computational experiments.
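In its basic form (the standard definition, not specific to this paper), the surrogate relaxation aggregates the constraints with multipliers u ≥ 0:

```latex
% For  min f(x)  s.t.  g_i(x) <= 0 (i = 1,...,m),  x in X,  and any
% u >= 0, the surrogate relaxation is
\[
  \min_{x \in X} \; f(x)
  \quad \text{s.t.} \quad \sum_{i=1}^{m} u_i\, g_i(x) \;\le\; 0 .
\]
% Its optimal value is a valid lower bound for every u >= 0 and is
% known to dominate the Lagrangian bound for the same multipliers;
% note that the aggregated constraint is generally nonconvex.
```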
Stand-alone quantum dot-based single-photon source operating at telecommunication wavelengths
(2020)
Project plan4res (www.plan4res.eu) involves the development of a modular framework for the modeling and analysis of energy system strategies at the European level. It will include models describing the investment and operation decisions for a wide variety of technologies related to electricity and non-electricity energy sectors across generation, consumption, transmission and distribution. The modularity of the framework allows for detailed modelling of major areas of energy systems that can help stakeholders from different backgrounds to focus on specific topics related to the energy landscape in Europe and to receive relevant outputs and insights tailored to their needs. The current paper presents a qualitative description of key concepts and methods of the novel modular optimization framework and provides insights into the corresponding energy landscape.
More and more diseases have been found to be strongly correlated with disturbances in the microbiome constitution, e.g., obesity, diabetes, or some cancer types. Thanks to modern high-throughput omics technologies, it has become possible to directly analyze the human microbiome and its influence on the health status. Microbial communities are monitored over long periods of time and the associations between their members are explored. These relationships can be described by a time-evolving graph. In order to understand the responses of microbial community members to a distinct range of perturbations, such as antibiotics exposure or diseases, as well as general dynamical properties, the time-evolving graph of the human microbial communities has to be analyzed. This becomes especially challenging due to the dozens of complex interactions among microbes and the metastable dynamics. The key to solving this problem is the representation of the time-evolving graphs as fixed-length feature vectors preserving the original dynamics. We propose a method for learning such an embedding of the time-evolving graph based on the spectral analysis of transfer operators and graph kernels. Our experiments on both synthetic and real-world data demonstrate that the method can capture temporary changes in the time-evolving graph. Furthermore, we show that our method can be applied to human microbiome data to study dynamic processes.
One of the fundamental steps in the optimization of public transport is line planning. It involves determining lines and assigning frequencies of service such that costs are minimized while passenger comfort is maximized and travel demands are satisfied. We formulate the problem as a mixed-integer linear program that considers all circuit-like lines in a graph and allows free passenger routing. Traveler and operator costs are included in a linear scalarization in the objective. We apply this program to the Parametric City, a graph model introduced by Fielbaum, Jara-Díaz and Gschwender that flexibly represents different cities. In his dissertation, Fielbaum solved the line planning problem for various parameter choices in the Parametric City. In a first step, we therefore review his results and make comparative computations. Unlike Fielbaum, we arrive at the conclusion that the optimal line plan for this model indeed depends on the demand. Consequently, we analyze the line planning problem in depth: We find equivalent but easier-to-compute formulations and provide a lower bound by LP relaxation, which we show to be equivalent to a multi-commodity flow problem. Further, we examine what impact symmetry has on the solutions. Supported both by computational results and by theoretical analysis, we reach the conclusion that symmetric line plans are optimal or near-optimal in the Parametric City. Restricting the model to symmetric line plans allows for a κ-factor approximation algorithm for the line planning problem in the Parametric City.