The temporally and spatially resolved tracking of lithium intercalation and electrode degradation processes is crucial for detecting and understanding performance losses during the operation of lithium batteries. Here, high-throughput X-ray computed tomography has enabled the identification of mechanical degradation processes in a commercial Li/MnO2 primary battery and the indirect tracking of lithium diffusion; furthermore, complementary neutron computed tomography has identified the direct lithium diffusion process and the electrode wetting by the electrolyte. Virtual electrode unrolling techniques provide a deeper view inside the electrode layers and are used to detect minor fluctuations which are difficult to observe using conventional three-dimensional rendering tools. Moreover, the ‘unrolling’ provides a platform for correlating multi-modal image data and is expected to find wider application in battery science and engineering to study diverse effects, e.g. electrode degradation or lithium diffusion blocking during battery cycling.
Conflicting hypotheses about the relationships among the major lineages of aculeate Hymenoptera clearly show the necessity of detailed comparative morphological studies. Using micro-computed tomography and 3D reconstructions, the skeletal musculature of the meso- and metathorax and of the first and second abdominal segments in Apoidea is described. Females of Sceliphron destillatorium, Sphex (Fernaldina) lucae (both Sphecidae), and Ampulex compressa (Ampulicidae) were examined. The morphological terminology provided by the Hymenoptera Anatomy Ontology is used. Up to 42 muscles were found. The three species differ in certain numerical and structural aspects. Ampulicidae differs significantly from Sphecidae in the metathorax and the anterior abdomen. The metapleural apodeme and paracoxal ridge are weakly developed in Ampulicidae, which affects some muscular structures. Furthermore, the muscles that insert on the coxae and trochanters are broader and longer in Ampulicidae. A conspicuous characteristic of Sphecidae is the absence of the metaphragma. Overall, we identified four hitherto unrecognized muscles. Our work suggests additional investigations of the structures discussed in this paper.
We introduce a concurrent solver for the periodic event scheduling problem (PESP). It combines mixed integer programming techniques, the modulo network simplex method, satisfiability approaches, and a new heuristic based on maximum cuts. Running these components in parallel speeds up the overall solution process. This enables us to significantly improve the current upper and lower bounds for all benchmark instances of the library PESPlib.
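For readers unfamiliar with PESP, a generic statement of its constraints (standard in the literature, not a detail specific to this abstract) is
\[
\ell_{ij} \;\le\; \pi_j - \pi_i + T\, p_{ij} \;\le\; u_{ij}, \qquad \pi_i \in [0, T), \quad p_{ij} \in \mathbb{Z},
\]
where $T$ is the period, the $\pi_i$ are periodic event times, and $[\ell_{ij}, u_{ij}]$ are the admissible durations of the activities; the MIP, modulo network simplex, SAT and maximum-cut components mentioned above all operate on (re)formulations of these constraints.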
A Physarum-Inspired Algorithm for Minimum-Cost Relay Node Placement in Wireless Sensor Networks
(2020)
A Polyhedral Study of Event-Based Models for the Resource-Constrained Project Scheduling Problem
(2020)
We consider event-based Mixed-Integer Programming (MIP) models for the Resource-Constrained Project Scheduling Problem (RCPSP) that represent an alternative to the common time-indexed model (DDT) of Pritsker et al. (1969) for the case where the underlying time horizon is large or job processing times are subject to huge variations. In contrast to the time-indexed model, the size of event-based models does not depend on the time horizon. For two event-based formulations OOE and SEE of Koné et al. (2011) we present new valid inequalities that dominate the original formulation. Additionally, we introduce a new event-based model: the Interval Event-Based Model (IEE). We deduce linear transformations between all three models that yield the strict domination order IEE > SEE > OOE for their linear programming (LP) relaxations, meaning that IEE has the strongest linear relaxation among the event-based models. We further show that the popular DDT formulation can be retrieved from IEE by certain polyhedral operations, thus giving a unifying view on a complete branch of MIP formulations for the RCPSP. In addition, we analyze the computational performance of all presented models on test instances of the PSPLIB (Kolisch and Sprecher 1997).
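For comparison, the time-indexed model (DDT) referenced above uses binaries $x_{jt}$ indicating that job $j$ starts at time $t$; a generic sketch (horizon $H$, durations $p_j$, resource demands $r_{jk}$, capacities $R_k$; not the exact notation of the paper) reads
\[
\sum_{t=0}^{H} x_{jt} = 1 \;\;\forall j, \qquad \sum_{j} \sum_{\tau = t - p_j + 1}^{t} r_{jk}\, x_{j\tau} \le R_k \;\;\forall k, t,
\]
plus precedence constraints on the start times. Its size grows with the horizon $H$, which is exactly the dependence that the event-based formulations OOE, SEE and IEE avoid.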
Structure-based virtual screening approaches have the ability to dramatically reduce the time and costs associated with the discovery of new drug candidates. Studies have shown that the true hit rate of virtual screenings improves with the scale of the screened ligand libraries. Therefore, we have recently developed an open source drug discovery platform (VirtualFlow), which is able to routinely carry out ultra-large virtual screenings. One of the primary challenges of molecular docking is the circumstance when the protein is highly dynamic or when the structure of the protein cannot be captured by a static pose. To accommodate protein dynamics, we report the extension of VirtualFlow to allow the docking of ligands with a grey wolf optimization algorithm via the docking program GWOVina, which substantially improves the quality and efficiency of flexible receptor docking compared to AutoDock Vina. We demonstrate the linear scaling behavior of VirtualFlow utilizing GWOVina up to 128 000 CPUs. The newly supported docking method will be valuable for drug discovery projects in which protein dynamics and flexibility play a significant role.
Motivated by the desire to numerically calculate rigorous upper and lower bounds on deviation probabilities over large classes of probability distributions, we present an adaptive algorithm for the reconstruction of increasing real-valued functions. While this problem is similar to the classical statistical problem of isotonic regression, the optimisation setting alters several characteristics of the problem and opens natural algorithmic possibilities. We present our algorithm, establish sufficient conditions for convergence of the reconstruction to the ground truth, and apply the method to synthetic test cases and a real-world example of uncertainty quantification for aerodynamic design.
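For orientation only, the classical baseline that the abstract contrasts with, isotonic regression, can be computed by the pool-adjacent-violators algorithm; the following minimal Python sketch shows that baseline (it is not the adaptive algorithm of the paper):

    def isotonic_regression(y, w=None):
        """Least-squares fit of a nondecreasing sequence (pool adjacent violators)."""
        w = [1.0] * len(y) if w is None else list(w)
        means, weights, counts = [], [], []
        for yi, wi in zip(y, w):
            means.append(float(yi)); weights.append(float(wi)); counts.append(1)
            # merge adjacent blocks while the monotonicity constraint is violated
            while len(means) > 1 and means[-2] > means[-1]:
                m2, w2, c2 = means.pop(), weights.pop(), counts.pop()
                m1, w1, c1 = means.pop(), weights.pop(), counts.pop()
                wt = w1 + w2
                means.append((w1 * m1 + w2 * m2) / wt)
                weights.append(wt)
                counts.append(c1 + c2)
        fit = []
        for m, c in zip(means, counts):   # expand block means back to full length
            fit.extend([m] * c)
        return fit

    # example: the increasing reconstruction of [3, 1, 2, 5] is [2, 2, 2, 5]
    print(isotonic_regression([3.0, 1.0, 2.0, 5.0]))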
We consider the problem of verifying linear properties of neural networks. Despite their success in many classification and prediction tasks, neural networks may return unexpected results for certain inputs. This is highly problematic with respect to the application of neural networks for safety-critical tasks, e.g. in autonomous driving. We provide an overview of algorithmic approaches that aim to provide formal guarantees on the behavior of neural networks. Moreover, we present new theoretical results with respect to the approximation of ReLU neural networks. Furthermore, we implement a solver for verification of ReLU neural networks which combines mixed integer programming (MIP) with specialized branching and approximation techniques. To evaluate its performance, we conduct an extensive computational study. For that we use test instances based on the ACAS Xu System and the MNIST handwritten digit data set. Our solver is publicly available and able to solve the verification problem for instances which do not have independent bounds for each input neuron.
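As background on how such MIP-based verifiers typically encode a single ReLU unit $y = \max(0, x)$: given finite bounds $l \le x \le u$ with $l < 0 < u$, the standard big-M formulation (a generic sketch, not necessarily the exact encoding used by the solver above) is
\[
y \ge 0, \quad y \ge x, \quad y \le u\, z, \quad y \le x - l\,(1 - z), \quad z \in \{0, 1\},
\]
where $z = 1$ forces $y = x \ge 0$ and $z = 0$ forces $y = 0$ and $x \le 0$. The tightness of the bounds $l$ and $u$ directly determines the strength of the relaxation, which is why bound computation plays such a prominent role in verification solvers.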
Research software has become a central asset in academic research. It optimizes existing and enables new research methods, implements and embeds research knowledge, and constitutes an essential research product in itself. Research software must be sustainable in order to understand, replicate, reproduce, and build upon existing research or conduct new research effectively. In other words, software must be available, discoverable, usable, and adaptable to new needs, both now and in the future. Research software therefore requires an environment that supports sustainability. Hence, a change is needed in the way research software development and maintenance are currently motivated, incentivized, funded, structurally and infrastructurally supported, and legally treated. Failing to do so will threaten the quality and validity of research. In this paper, we identify challenges for research software sustainability in Germany and beyond, in terms of motivation, selection, research software engineering personnel, funding, infrastructure, and legal aspects. Besides researchers, we specifically address political and academic decision-makers to increase awareness of the importance and needs of sustainable research software practices. In particular, we recommend strategies and measures to create an environment for sustainable research software, with the ultimate goal to ensure that software-driven research is valid, reproducible and sustainable, and that software is recognized as a first class citizen in research. This paper is the outcome of two workshops run in Germany in 2019, at deRSE19 - the first International Conference of Research Software Engineers in Germany - and a dedicated DFG-supported follow-up workshop in Berlin.
Dual degeneracy, i.e., the presence of multiple optimal bases to a linear programming (LP) problem, heavily affects the solution process of mixed integer programming (MIP) solvers. Different optimal bases lead to different cuts being generated, different branching decisions being taken and different solutions being found by primal heuristics. Nevertheless, only a few methods have been published that either avoid or exploit dual degeneracy. The aim of the present paper is to conduct a thorough computational study on the presence of dual degeneracy for the instances of well-known public MIP instance collections. How many instances are affected by dual degeneracy? How degenerate are the affected models? How does branching affect degeneracy: Does it increase or decrease by fixing variables? Can we identify different types of degenerate MIPs? As a tool to answer these questions, we introduce a new measure for dual degeneracy: the variable–constraint ratio of the optimal face. It provides an estimate for the likelihood that a basic variable can be pivoted out of the basis. Furthermore, we study how the so-called cloud intervals—the projections of the optimal face of the LP relaxations onto the individual variables—evolve during tree search and the implications for reducing the set of branching candidates.
On average, an approved drug today costs $2–3 billion and takes over ten years to develop [1]. In part, this is due to expensive and time-consuming wet-lab experiments, poor initial hit compounds, and the high attrition rates in the (pre-)clinical phases. Structure-based virtual screening (SBVS) has the potential to mitigate these problems. With SBVS, the quality of the hits improves with the number of compounds screened [2]. However, despite the fact that large compound databases exist, the ability to carry out large-scale SBVSs on computer clusters in an accessible, efficient, and flexible manner has remained elusive. Here we designed VirtualFlow, a highly automated and versatile open-source platform with perfect scaling behaviour that is able to prepare and efficiently screen ultra-large ligand libraries of compounds. VirtualFlow is able to use a variety of the most powerful docking programs. Using VirtualFlow, we have prepared the largest freely available ready-to-dock ligand library to date, with over 1.4 billion commercially available molecules. To demonstrate the power of VirtualFlow, we screened over 1 billion compounds and discovered a small molecule inhibitor (iKeap1) that engages KEAP1 with nanomolar affinity (Kd = 114 nM) and disrupts the interaction between KEAP1 and the transcription factor NRF2. We also identified a set of structurally diverse molecules that bind to KEAP1 with submicromolar affinity. This illustrates the potential of VirtualFlow to access vast regions of the chemical space and identify binders with high affinity for target proteins.
We present visual methods for the analysis and comparison of the results of curved fibre reconstruction algorithms, i.e., of algorithms extracting characteristics of curved fibres from X-ray computed tomography scans. In this work, we extend previous methods for the analysis and comparison of results of different fibre reconstruction algorithms or parametrisations to the analysis of curved fibres. We propose fibre dissimilarity measures for such curved fibres and apply these to compare multiple results to a specified reference. We further propose visualisation methods to analyse differences between multiple results quantitatively and qualitatively. In two case studies, we show that the presented methods provide valuable insights for advancing and parametrising fibre reconstruction algorithms and support the improvement of their results in characterising curved fibres.
Model-based optimal designs of experiments (M-bODE) for nonlinear models are typically hard to compute. The literature on the computation of M-bODE for nonlinear models when the covariates are categorical variables, i.e. factorial experiments, is scarce. We propose second order cone programming (SOCP) and Mixed Integer Second Order Cone Programming (MISOCP) formulations to find, respectively, approximate and exact A- and D-optimal designs for 2^k factorial experiments for Generalized Linear Models (GLMs). First, locally optimal (approximate and exact) designs for GLMs are addressed using the formulation of Sagnol (J Stat Plan Inference 141(5):1684–1708, 2011). Next, we consider the scenario where the parameters are uncertain, and new formulations are proposed to find Bayesian optimal designs using the A- and log det D-optimality criteria. A quasi Monte-Carlo sampling procedure based on the Hammersley sequence is used for computing the expectation in the parametric region of interest. We demonstrate the application of the algorithm with the logistic, probit and complementary log–log models and consider full and fractional factorial designs.
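For context, both criteria are functions of the Fisher information matrix of an approximate design $\xi = \{(x_i, w_i)\}$; in the standard GLM form (generic notation assumed here for illustration)
\[
M(\xi, \theta) = \sum_i w_i\, \omega(x_i, \theta)\, f(x_i) f(x_i)^{\top}, \qquad
\text{A-optimality: } \min_\xi \operatorname{tr}\big(M(\xi,\theta)^{-1}\big), \qquad
\text{D-optimality: } \max_\xi \log\det M(\xi,\theta),
\]
where the weight function $\omega$ depends on the unknown parameters $\theta$, which is precisely why locally optimal and Bayesian variants are needed.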
We present an automated method for extrapolating missing regions in label data of the skull in an anatomically plausible manner. The ultimate goal is to design patient-specific cranial implants for correcting large, arbitrarily shaped defects of the skull that can, for example, result from trauma of the head. Our approach utilizes a 3D statistical shape model (SSM) of the skull and a 2D generative adversarial network (GAN) that is trained in an unsupervised fashion from samples of healthy patients alone. By fitting the SSM to given input labels containing the skull defect, a first approximation of the healthy state of the patient is obtained. The GAN is then applied to further correct and smooth the output of the SSM in an anatomically plausible manner. Finally, the defect region is extracted using morphological operations and subtraction between the extrapolated healthy state of the patient and the defective input labels. The method is trained and evaluated based on data from the MICCAI 2020 AutoImplant challenge. It produces state-of-the-art results on regularly shaped cut-outs that were present in the training and testing data of the challenge. Furthermore, due to the unsupervised nature of the approach, the method generalizes well to previously unseen defects of varying shapes that were only present in the hidden test dataset.
This article is mainly motivated by the urge to answer two kinds of questions regarding the Bundesliga, which is Germany’s primary football (soccer) division having the highest average stadium attendance worldwide: “At any point in the season, what is the lowest final rank a certain team can achieve?” and “At any point in the season, what is the highest final rank a certain team can achieve?”. Although we focus on the Bundesliga in particular, the integer programming formulations we introduce to answer these questions can easily be adapted to a variety of other league systems and tournaments.
This study’s objective was the generation of a standardized geometry of the healthy nasal cavity.
An average geometry of the healthy nasal cavity was generated using a statistical shape model based on 25 symptom-free subjects. Airflow within the average geometry and within the 25 individual geometries was calculated using fluid simulations. Integral measures of the nasal resistance, wall shear stresses (WSS) and velocities were calculated as well as cross-sectional areas (CSA). Furthermore, individual WSS and static pressure distributions were mapped onto the average geometry.
The average geometry featured an overall more regular shape that resulted in less resistance, reduced wall shear stresses and velocities compared to the median of the 25 geometries. Spatial distributions of WSS and pressure of the average geometry agreed well with the average distributions of all individual geometries. The minimal CSA of the average geometry was larger than the median of all individual geometries (83.4 vs. 74.7 mm²).
The airflow observed within the average geometry of the healthy nasal cavity did not equal the average airflow of the individual geometries. While differences observed for integral measures were notable, the calculated values for the average geometry lay within the distributions of the individual parameters. Spatially resolved parameters differed less prominently.
In most vertebrates the embryonic cartilaginous skeleton is replaced by bone during development. During this process, cartilage cells (chondrocytes) mineralize the extracellular matrix and undergo apoptosis, giving way to bone cells (osteocytes). In contrast, sharks and rays (elasmobranchs) have cartilaginous skeletons throughout life, where only the surface mineralizes, forming a layer of tiles (tesserae). Elasmobranch chondrocytes, unlike those of other vertebrates, survive cartilage mineralization and are maintained alive in spaces (lacunae) within tesserae. However, the function(s) of the chondrocytes in the mineralized tissue remain unknown. Applying a custom analysis workflow to high-resolution synchrotron microCT scans of tesserae, we characterize the morphologies and arrangements of stingray chondrocyte lacunae, using lacunar morphology as a proxy for chondrocyte morphology. We show that the cell density is comparable in unmineralized and mineralized tissue from our study species and that cells maintain a similar volume even when they have been incorporated into tesserae. This discovery supports previous hypotheses that elasmobranch chondrocytes, unlike those of other taxa, do not proliferate, hypertrophy or undergo apoptosis during mineralization. Tessera lacunae show zonal variation in their shapes—being flatter further from and more spherical closer to the unmineralized cartilage matrix and larger in the center of tesserae—and show pronounced organization into parallel layers and strong orientation toward neighboring tesserae. Tesserae also exhibit local variation in lacunar density, with the density considerably higher near pores passing through the tesseral layer, suggesting that pores and cells interact (e.g. that pores contain a nutrient source). We hypothesize that the different lacunar types reflect the stages of the tesserae formation process, while also representing local variation in tissue architecture and cell function. Lacunae are linked by small passages (canaliculi) in the matrix to form elongate series at the tesseral periphery and tight clusters in the center of tesserae, creating a rich connectivity among cells. The network arrangement and the shape variation of chondrocytes in tesserae indicate that cells may interact within and between tesserae and manage mineralization differently from chondrocytes in other vertebrates, perhaps performing roles analogous to those of osteocytes in bone.
We investigate the directional locking effects that arise when a monolayer of paramagnetic colloidal particles is driven across a triangular lattice of magnetic bubbles. We use an external rotating magnetic field to generate a two-dimensional traveling wave ratchet forcing the transport of particles along a direction that intersects two crystallographic axes of the lattice. We find that, while single particles show no preferred direction, collective effects induce transversal current and directional locking at high density via a spontaneous symmetry breaking. The colloidal current may be polarized via an additional bias field that makes one transport direction energetically preferred.
Phage display biopanning with Illumina next-generation sequencing (NGS) is applied to reveal insights into peptide-based adhesion domains for polypropylene (PP). One biopanning round followed by NGS selects robust PP-binding peptides that are not evident by Sanger sequencing. NGS provides a significant statistical base that enables motif analysis, statistics on positional residue depletion/enrichment, and data analysis to suppress false-positive sequences from amplification bias. The selected sequences are employed as water-based primers for PP–metal adhesion to condition PP surfaces and increase adhesive strength by 100% relative to nonprimed PP.
Two essential ingredients of modern mixed-integer programming (MIP) solvers are diving heuristics, which simulate a partial depth-first search in a branch-and-bound search tree, and conflict analysis of infeasible subproblems to learn valid constraints. So far, these techniques have mostly been studied independently: primal heuristics under the aspect of finding high-quality feasible solutions early during the solving process, and conflict analysis for fathoming nodes of the search tree and improving the dual bound. Here, we combine both concepts in two different ways. First, we develop a diving heuristic that targets the generation of valid conflict constraints from the Farkas dual. We show that in the primal this is equivalent to the optimistic strategy of diving towards the best bound with respect to the objective function. Second, we use information derived from conflict analysis to enhance the search of a diving heuristic akin to classical coefficient diving. The computational performance of both methods is evaluated using an implementation in the open-source MIP solver SCIP. Experiments are carried out on publicly available test sets including MIPLIB 2010 and Cor@l.
As the natural gas market is moving towards short-term planning, accurate and robust short-term forecasts of the demand and supply of natural gas are of fundamental importance for a stable energy supply, a natural gas control schedule, and transport operation on a daily basis. We propose a hybrid forecast model, the Functional AutoRegressive and Convolutional Neural Network model, based on state-of-the-art statistical modeling and artificial neural networks. We conduct short-term forecasting of the hourly natural gas flows of 92 distribution nodes in the German high-pressure gas pipeline network, showing that the proposed model provides accurate and stable forecasts for different types of nodes. It outperforms all the alternative models, with an improvement in relative accuracy of up to twofold for plant nodes and up to fourfold for municipal nodes. For the border nodes with rather flat gas flows, it has an accuracy that is comparable to the best performing alternative model.
This paper studies the empirical efficacy and benefits of using projection-free first-order methods in the form of Conditional Gradients, a.k.a. Frank-Wolfe methods, for training Neural Networks with constrained parameters. We draw comparisons both to current state-of-the-art stochastic Gradient Descent methods as well as across different variants of stochastic Conditional Gradients. In particular, we show the general feasibility of training Neural Networks whose parameters are constrained by a convex feasible region using Frank-Wolfe algorithms and compare different stochastic variants. We then show that, by choosing an appropriate region, one can achieve performance exceeding that of unconstrained stochastic Gradient Descent and matching state-of-the-art results relying on L2-regularization. Lastly, we also demonstrate that, besides impacting performance, the particular choice of constraints can have a drastic impact on the learned representations.
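As a minimal illustration of such a constrained training step, the sketch below performs one Frank-Wolfe update for parameters restricted to an L-infinity ball of radius r (the constraint set, the radius and the step-size rule are assumptions chosen for illustration, not the authors' setup):

    import numpy as np

    def frank_wolfe_step(w, grad, radius, step):
        """One Frank-Wolfe update for parameters constrained to an L-infinity ball.

        The linear minimization oracle over {v : max|v_i| <= radius} returns the
        vertex v = -radius * sign(grad); moving by a convex combination towards it
        keeps the iterate feasible without any projection.
        """
        v = -radius * np.sign(grad)           # LMO solution (vertex of the ball)
        return (1.0 - step) * w + step * v    # convex step stays inside the ball

    # toy usage: minimize ||w - 3||^2 subject to ||w||_inf <= 1
    w = np.zeros(2)
    for t in range(200):
        grad = 2.0 * (w - 3.0)
        w = frank_wolfe_step(w, grad, radius=1.0, step=2.0 / (t + 2))
    # w approaches the constrained optimum (1, 1) on the boundary of the ball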
An adjoint-based approach for synthesizing complex sound sources by discrete, grid-based monopoles in finite-difference time-domain simulations is presented. A previous study [Stein et al., 2019a, J. Acoust. Soc. Am. 146(3), 1774–1785] demonstrated that, in contrast to standard methods of numerical sound field simulation, the approach makes it possible to consider unsteady and non-uniform ambient conditions such as wind flow and thermal gradients. In this work, it is demonstrated that not only ideal monopoles but also realistic sound sources with complex directivity characteristics can be synthesized. In detail, an oscillating circular piston and a real 2-way near-field monitor are modeled. The required number of monopoles in terms of the SPL deviation between the directivity of the original and the synthesized source is analyzed. Since the computational effort is independent of the number of monopoles used for the synthesis, more complex sources can also be reproduced by increasing the number of monopoles utilized. In contrast to classical least-squares problem solvers, this does not increase the computational effort, which makes the method attractive for predicting the effect of sound reinforcement systems with highly directional sources under difficult acoustic boundary conditions.
Conformational dynamics is essential to biomolecular processes. Markov State Models (MSMs) are widely used to elucidate dynamic properties of molecular systems from unbiased Molecular Dynamics (MD). However, the implementation of reweighting schemes for MSMs to analyze biased simulations is still at an early stage of development. Several dynamical reweighting approaches have been proposed, which can be classified as approaches based on (i) Kramers rate theory, (ii) rescaling of the probability density flux, (iii) reweighting by formulating a likelihood function, and (iv) path reweighting. We present the state of the art and discuss the methodological differences of these methods, their limitations and recent applications.
The images of D’Arcy Wentworth Thompson’s book “On Growth and Form” have attained iconic status and became influential for biometrics and other mathematical approaches to organismic form. In particular, this is true for those of the chapter on the theory of transformation, which has even had an impact on the arts and humanities. Based on his approach, Thompson formulated far-reaching conclusions with a partly anti-Darwinian stance. Here, we use the example of Thompson’s transformation of crab carapaces to test to what degree the transformations of grids, landmarks, and shapes result in congruent images. For comparison, we applied the same series of tests to digitized carapaces of real crabs. Both approaches show similar results. Only the simple transformations show a reasonable form of congruence. In particular, the transformations to majoid spider crabs reveal a complicated transformation of grids with partly crossing lines. By contrast, the carapace of the lithodid species is relatively easily created despite the fact that it is not a brachyuran, but evolved a spider crab-like shape convergently from a hermit crab ancestor.
The choice of solvents influences the crystalline solid formed during the crystallization of active pharmaceutical ingredients (API). The underlying effects are not always well understood because of the complexity of the systems. Theoretical models are often insufficient to describe this phenomenon. In this study, the crystallization behavior of the model drug paracetamol in different solvents was studied based on experimental and molecular dynamics data. The crystallization process was followed in situ using time-resolved Raman spectroscopy. Molecular dynamics with a simulated annealing algorithm was used for an atomistic understanding of the underlying processes. The experimental and theoretical data indicate that paracetamol molecules adopt a particular geometry in a given solvent, predefining the crystallization of certain polymorphs.
Two-dimensional electronic spectra (2DES) provide unique ways to track the energy transfer dynamics in light-harvesting complexes. The interpretation of the peaks and structures found in experimentally recorded 2DES is often not straightforward, since several processes are imaged simultaneously. The choice of specific pulse polarization sequences helps to disentangle the sometimes convoluted spectra, but brings along other disturbances. We show by detailed theoretical calculations how 2DES of the Fenna-Matthews-Olson complex are affected by rotational and conformational disorder of the chromophores.
Finite-size corrections for the static structure factor of a liquid slab with open boundaries
(2020)
The presence of a confining boundary can modify the local structure of a liquid markedly. In addition, small samples of finite size are known to exhibit systematic deviations of thermodynamic quantities relative to their bulk values. Here, we consider the static structure factor of a liquid sample in slab geometry with open boundaries at the surfaces, which can be thought of as virtually cutting out the sample from a macroscopically large, homogeneous fluid. This situation is a relevant limit for the interpretation of grazing-incidence diffraction experiments at liquid interfaces and films. We derive an exact, closed expression for the slab structure factor, with the bulk structure factor as the only input. This shows that such free boundary conditions cause significant differences between the two structure factors, in particular, at small wavenumbers. An asymptotic analysis of this result yields the scaling exponent and an accurate, useful approximation of these finite-size corrections. Furthermore, the open boundaries permit the interpretation of the slab as an open system, supporting particle exchange with a reservoir. We relate the slab structure factor to the particle number fluctuations and discuss conditions under which the subvolume of the slab represents a grand canonical ensemble with chemical potential μ and temperature T. Thus, the open slab serves as a test-bed for the small-system thermodynamics in a μT reservoir. We provide a microscopically justified and exact result for the size dependence of the isothermal compressibility. Our findings are corroborated by simulation data for Lennard-Jones liquids at two representative temperatures.
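For reference, the bulk static structure factor that enters as the only input is, in standard notation,
\[
S(q) = \frac{1}{N} \left\langle \left| \sum_{j=1}^{N} e^{-i \mathbf{q} \cdot \mathbf{r}_j} \right|^2 \right\rangle,
\]
and the result above expresses the structure factor of the open slab in terms of this bulk quantity; in particular, the small-wavenumber behaviour is tied to the isothermal compressibility, since in the grand canonical ensemble $S(q \to 0) = \rho\, k_B T\, \kappa_T$.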
We analytically determine Jacobi fields and parallel transports and compute geodesic regression in Kendall’s shape space. Using the derived expressions, we can fully leverage the geometry via Riemannian optimization and thereby reduce the computational expense by several orders of magnitude over common, nonlinear constrained approaches. The methodology is demonstrated by performing a longitudinal statistical analysis of epidemiological shape data. As an example application we have chosen 3D shapes of knee bones, reconstructed from image data of the Osteoarthritis Initiative (OAI). Comparing subject groups with incident and developing osteoarthritis versus normal controls, we find clear differences in the temporal development of femur shapes. This paves the way for early prediction of incident knee osteoarthritis, using geometry data alone.
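Schematically, geodesic regression fits a geodesic $t \mapsto \operatorname{Exp}_p(t v)$ to shapes $y_i$ observed at times $t_i$ by minimizing the sum of squared geodesic distances (a generic statement of the model, in notation assumed here for illustration):
\[
\min_{p,\, v} \; \sum_i d^2\big(\operatorname{Exp}_p(t_i v),\, y_i\big),
\]
where closed-form Jacobi fields and parallel transport are the typical ingredients for evaluating the Riemannian gradient of this objective efficiently.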
More and more diseases have been found to be strongly correlated with disturbances in the microbiome constitution, e.g., obesity, diabetes, or some cancer types. Thanks to modern high-throughput omics technologies, it has become possible to directly analyze the human microbiome and its influence on the health status. Microbial communities are monitored over long periods of time and the associations between their members are explored. These relationships can be described by a time-evolving graph. In order to understand the responses of the microbial community members to a distinct range of perturbations, such as antibiotics exposure or diseases, as well as general dynamical properties, the time-evolving graph of the human microbial communities has to be analyzed. This becomes especially challenging due to dozens of complex interactions among microbes and metastable dynamics. The key to solving this problem is the representation of the time-evolving graphs as fixed-length feature vectors preserving the original dynamics. We propose a method for learning the embedding of the time-evolving graph that is based on the spectral analysis of transfer operators and graph kernels. We demonstrate the efficacy of the method by showing that it can capture temporary changes in the time-evolving graph on both synthetic and real-world data. Furthermore, we show that our method can be applied to human microbiome data to study dynamic processes.
Molecular simulations of ligand–receptor interactions are a computational challenge, especially when their association- (‘on’-rate) and dissociation- (‘off’-rate) mechanisms are working on vastly differing timescales. One way of tackling this multiscale problem is to compute the free-energy landscapes, where molecular dynamics (MD) trajectories are used to only produce certain statistical ensembles. The approach allows for deriving the transition rates between energy states as a function of the height of the activation-energy barriers. In this article, we derive the association rates of the opioids fentanyl and N-(3-fluoro-1-phenethylpiperidin-4-yl)-N-phenyl propionamide (NFEPP) in a μ-opioid receptor by combining the free-energy landscape approach with the square-root-approximation method (SQRA), which is a particularly robust version of Markov modelling. The novelty of this work is that we derive the association rates as a function of the pH level using only an ensemble of MD simulations. We also verify our MD-derived insights by reproducing the in vitro study performed by the Stein Lab.
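For context, the square-root approximation assigns rates between neighbouring states i and j of a discretized free-energy landscape from the stationary weights π alone; in its commonly stated form (a generic sketch, not a formula quoted from this abstract)
\[
Q_{ij} \;=\; \Phi\, \sqrt{\frac{\pi_j}{\pi_i}} \;=\; \Phi\, \exp\!\Big(-\frac{F_j - F_i}{2 k_B T}\Big),
\]
with a flux constant $\Phi$ and free energies $F$ obtained from the MD-sampled landscape, so that a pH-dependent free-energy landscape translates directly into pH-dependent association rates.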
Hypersurfaces with defect
(2020)
A projective hypersurface X⊆P^n has defect if h^i(X) ≠ h^i(P^n) for some i∈{n,…,2n−2} in a suitable cohomology theory. This occurs for example when X⊆P^4 is not Q-factorial. We show that hypersurfaces with defect tend to be very singular: In characteristic 0, we present a lower bound on the Tjurina number, where X is allowed to have arbitrary isolated singularities. For X with mild singularities, we prove a similar result in positive characteristic. As an application, we obtain an estimate on the asymptotic density of hypersurfaces without defect over a finite field.
A prerequisite for many analysis tasks in modern comparative biology is the segmentation of 3-dimensional (3D) images of the specimens being investigated (e.g. from microCT data). Depending on the specific imaging technique that was used to acquire the images and on the image resolution, different segmentation tools will be required. While some standard tools exist that can often be applied for specific subtasks, building whole processing pipelines solely from standard tools is often difficult. Some tasks may even necessitate the implementation of manual interaction tools to achieve a quality that is sufficient for the subsequent analysis. In this work, we present a pipeline of segmentation tools that can be used for the semi-automatic segmentation and quantitative analysis of voids in tissue (i.e. internal structural porosity). We use this pipeline to analyze lacuno-canalicular networks in stingray tesserae from 3D images acquired with synchrotron microCT.
* The first step of this processing pipeline, the segmentation of the tesserae, was performed using standard marker-based watershed segmentation. The efficient processing of the next two steps, that is, the segmentation of all lacunar spaces belonging to a specific tessera and the separation of these spaces into individual lacunae, required modern, recently developed tools.
* For proofreading, we developed a graph-based interactive method that allowed us to quickly split lacunae that were accidentally merged, and to merge lacunae that were wrongly split.
* Finally, the tesserae and their corresponding lacunae were subdivided into anatomical regions of interest (structural wedges) using a semi-manual approach.
Context. Cometary outgassing is induced by the sublimation of ices and the ejection of dust originating from the nucleus. Therefore measuring the composition and dynamics of the cometary gas provides information concerning the interior composition of the body. Nevertheless, the bulk composition differs from the coma composition, and numerical models are required to simulate the main physical processes induced by the illumination of the icy body.
Aims. The objectives of this study are to bring new constraints on the interior composition of the nucleus of comet 67P/Churyumov-Gerasimenko (hereafter 67P) by comparing the results of a thermophysical model applied to the nucleus of 67P with the coma measurements made by the Reflectron-type Time-Of-Flight (RTOF) mass spectrometer. The latter is one of the three instruments of the Rosetta Orbiter Spectrometer for Ion and Neutral Analysis (ROSINA), used during the Rosetta mission.
Methods. Using a thermophysical model of the comet nucleus, we studied the evolution of the stratigraphy (position of the sublimation and crystallisation fronts), the temperature of the surface and subsurface, and the dynamics and spatial distribution of the volatiles (H2O, CO2 and CO). We compared them with the in situ measurements from ROSINA/RTOF and an inverse coma model.
Results. We observed the evolution of the surface and near-surface temperature, and the deepening of the sublimation fronts. The thickness of the dust layer covering the surface strongly influences the H2O outgassing but not that of the more volatile species. The CO outgassing is highly sensitive to the initial CO/H2O ratio, as well as to the presence of trapped CO in the amorphous ice.
Conclusions. The study of the influence of the initial parameters on the computed volatile fluxes and the comparison with ROSINA/RTOF measurements provide a range of values for an initial dust mantle thickness and a range of values for the volatile ratio. These imply the presence of trapped CO. Nevertheless, further studies are required to reproduce the strong change of behaviour observed in RTOF measurements between September 2014 and February 2015.
The problem of determining the rate of rare events in dynamical systems is quite well-known but still difficult to solve. Recent attempts to overcome this problem exploit the fact that dynamical systems can be represented by a linear operator, such as the Koopman operator. Mathematically, the rare event problem comes down to the difficulty of finding invariant subspaces of these Koopman operators K. In this article, we describe a method to learn basis functions of invariant subspaces using an artificial neural network.
Since the elimination algorithm of Fourier and Motzkin, many different methods have been developed for solving linear programs. When analyzing the time complexity of LP algorithms, it is typically either assumed that calculations are performed exactly and bounds are derived on the number of elementary arithmetic operations necessary, or the cost of all arithmetic operations is considered through a bit-complexity analysis. Yet in practice, implementations typically use limited-precision arithmetic. In this paper we introduce the idea of a limited-precision LP oracle and study how such an oracle could be used within a larger framework to compute exact solutions to LPs. Under mild assumptions, it is shown that a polynomial number of calls to such an oracle, together with a polynomial number of bit operations, is sufficient to compute an exact solution to an LP. This work provides a foundation for understanding and analyzing the behavior of the methods that are currently most effective in practice for solving LPs exactly.
Understanding the pathophysiological processes of cartilage degradation requires adequate model systems to develop therapeutic strategies against osteoarthritis (OA). Although different in vitro or in vivo models have been described, further comprehensive approaches are needed to study specific disease aspects. This study aimed to combine in vitro and in silico modeling based on a tissue-engineering approach using mesenchymal condensation to mimic cytokine-induced cellular and matrix-related changes during cartilage degradation. Thus, scaffold-free cartilage-like constructs (SFCCs) were produced based on self-organization of mesenchymal stromal cells (mesenchymal condensation) and either (i) characterized regarding their cellular and matrix composition or (ii) treated with interleukin-1β (IL-1β) and tumor necrosis factor α (TNFα) for 3 weeks to simulate OA-related matrix degradation. In addition, an existing mathematical model based on partial differential equations was optimized and transferred to the underlying settings to simulate the distribution of IL-1β, type II collagen degradation and cell number reduction. By combining in vitro and in silico methods, we aim to develop a valid, efficient alternative approach to examine and predict disease progression and the effects of new therapeutics.
An advantageous property of mesh-based geometric morphometrics (GM) compared with landmark-based approaches is the possibility of precisely examining highly irregular shapes and highly topographic surfaces. In the case of spherical-harmonics-based GM, the main requirement is a completely closed mesh surface, which often is not given, especially when dealing with natural objects. Here we present a methodological workflow to prepare 3D segmentations containing large cavity openings for the conduction of spherical-harmonics-based GM. This is exemplified with a case study on claws of hermit crabs (Paguroidea, Decapoda, Crustacea), in which joint openings – between manus and “movable finger” – typify the large-cavity-opening problem. We found a methodology, including an ambient-occlusion-based segmentation algorithm, that leads to results precise enough and suitable for studying the inter- and intraspecific differences in shape of hermit crab claws. Statistical analyses showed a significant separation between all examined diogenid and pagurid claws, whereas the separation between all left and right claws was not significant. Additionally, the procedure offers other benefits: it is easy to reproduce and introduces little variance in the data, closures integrate smoothly into the total structures, and the algorithm saves a significant amount of time.
Markov chain (MC) algorithms are ubiquitous in machine learning and statistics and many other disciplines. Typically, these algorithms can be formulated as acceptance rejection methods. In this work we present a novel estimator applicable to these methods, dubbed Markov chain importance sampling (MCIS), which efficiently makes use of rejected proposals. For the unadjusted Langevin algorithm, it provides a novel way of correcting the discretization error. Our estimator satisfies a central limit theorem and improves on error per CPU cycle, often to a large extent. As a by-product it enables estimating the normalizing constant, an important quantity in Bayesian machine learning and statistics.
Minimising levelised cost of electricity of bifacial solar panel arrays using Bayesian optimisation
(2020)
Mixed Integer Programming (MIP) has become widely known as a useful OR technique for solving real-world problems, not least because MIP solvers, the software packages that solve MIPs, are now capable of handling large-scale practical instances. However, the benchmark data sets and performance-measurement methodologies that are indispensable for developing MIP solvers are far less widely known. If a benchmark data set is not assembled with care, it is subject to many biases; removing these biases as far as possible and obtaining truly meaningful benchmark results requires considerable effort by several people. This article describes MIPLIB and Hans Mittelmann’s benchmarks, which have played an important role in the background of such MIP solver development. Throughout this article, Hans Mittelmann’s benchmarks refers to the benchmarks listed on the page BENCHMARKS FOR OPTIMIZATION SOFTWARE (http://plato.asu.edu/bench.html).
Urban transportation systems are subject to a high level of variation and fluctuation in demand over the day. When this variation and fluctuation are observed in both time and space, it is crucial to develop line plans that are responsive to demand. A multi-period line planning approach that considers a changing demand during the planning horizon is proposed. If such systems are also subject to limitations of resources, a dynamic transfer of resources from one line to another throughout the planning horizon should also be considered. A mathematical modelling framework is developed to solve the line planning problem with a cost-oriented approach considering transfer of resources during a finite length planning horizon of multiple periods. We use real-life public transportation network data for our computational results. We analyze whether or not multi-period solutions outperform single period solutions in terms of feasibility and relevant costs. The importance of demand variation on multi-period solutions is investigated. We evaluate the impact of resource transfer constraints on the effectiveness of solutions. We also study the effect of period lengths along with the problem parameters that are significant for and sensitive to the optimality of solutions.
Tom Streubel has observed that for functions in abs-normal form, generalized Taylor expansions of arbitrary order $\bar d-1$ can be generated by algorithmic piecewise differentiation. Abs-normal form means that the real- or vector-valued function is defined by an evaluation procedure that involves the absolute value function $|...|$ apart from arithmetic operations and $\bar d$ times continuously differentiable univariate intrinsic functions. The additive terms in Streubel's expansion are abs-polynomial, i.e. they involve neither divisions nor intrinsics. When and where no absolute values occur, Moore's recurrences can be used to propagate univariate Taylor polynomials through the evaluation procedure with a computational effort of $\mathcal O({\bar d}^2)$, provided all univariate intrinsics are defined as solutions of linear ODEs. This regularity assumption holds for all standard intrinsics, but for irregular elementaries one has to resort to Faa di Bruno's formula, which has exponential complexity in $\bar d$. As already conjectured, we show that the Moore recurrences can be adapted for regular intrinsics to the abs-normal case. Finally, we observe that, where the intrinsics are real analytic, the expansions can be extended to infinite series that converge absolutely on spherical domains.
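To make the cost statement concrete, Moore's recurrences propagate truncated Taylor polynomials coefficient by coefficient through each elementary operation; for instance, for a product $v = u \cdot w$ the coefficients follow the convolution rule (standard Taylor arithmetic, shown only for illustration)
\[
v_k = \sum_{j=0}^{k} u_j\, w_{k-j}, \qquad k = 0, \dots, \bar d - 1,
\]
so each such operation costs $\mathcal O({\bar d}^2)$ in total, consistent with the effort quoted above; intrinsics defined by linear ODEs admit analogous recurrences.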
Recently, Kronqvist et al. (J Global Optim 64(2):249–272, 2016) rediscovered the supporting hyperplane algorithm of Veinott (Oper Res 15(1):147–152, 1967) and demonstrated its computational benefits for solving convex mixed integer nonlinear programs. In this paper we derive the algorithm from a geometric point of view. This enables us to show that the supporting hyperplane algorithm is equivalent to Kelley’s cutting plane algorithm (J Soc Ind Appl Math 8(4):703–712, 1960) applied to a particular reformulation of the problem. As a result, we extend the applicability of the supporting hyperplane algorithm to convex problems represented by a class of general, not necessarily convex nor differentiable, functions.
We present an extension of Taylor's theorem for the piecewise polynomial expansion of non-smooth evaluation procedures involving absolute value operations. Evaluation procedures are computer programs representing mathematical functions in closed-form expression and allow a separate treatment of smooth operations and calls to the absolute value function. The well-known classical theorem of Taylor defines polynomial approximations of sufficiently smooth functions and is widely used for the derivation and analysis of numerical integrators for systems of ordinary differential or differential-algebraic equations, for the construction of solvers for continuous non-linear optimization of finite-dimensional objective functions, and for root solving of non-linear systems of equations. The long-term goal is the stabilization and acceleration of already known methods and the derivation of new methods by incorporating piecewise polynomial Taylor expansions. The proof of the higher-order approximation quality of the new generalized expansions provided herein is constructive and allows for efficiently designed algorithms for the execution and computation of the piecewise polynomial expansions. As a demonstration towards the ultimate goal, we derive a prototype of a $k$-step method on the basis of polynomial interpolation and the proposed generalized expansions.
Price-and-verify: a new algorithm for recursive circle packing using Dantzig–Wolfe decomposition
(2020)
Packing rings into a minimum number of rectangles is an optimization problem which appears naturally in the logistics operations of the tube industry. It encompasses two major difficulties, namely the positioning of rings in rectangles and the recursive packing of rings into other rings. This problem is known as the Recursive Circle Packing Problem (RCPP). We present the first dedicated method for solving RCPP that provides strong dual bounds based on an exact Dantzig–Wolfe reformulation of a nonconvex mixed-integer nonlinear programming formulation. The key idea of this reformulation is to break symmetry on each recursion level by enumerating one-level packings, i.e., packings of circles into other circles, and by dynamically generating packings of circles into rectangles. We use column generation techniques to design a “price-and-verify” algorithm that solves this reformulation to global optimality. Extensive computational experiments on a large test set show that our method not only computes tight dual bounds, but often produces primal solutions better than those computed by heuristics from the literature.
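Schematically, the column-generation master problem of such a Dantzig–Wolfe reformulation selects among the enumerated or dynamically generated packings; in a generic covering form (a sketch of the decomposition principle, not the paper's exact reformulation)
\[
\min \sum_{P} \lambda_P \quad \text{s.t.} \quad \sum_{P} a_{rP}\, \lambda_P \ge d_r \;\;\forall r, \qquad \lambda_P \in \mathbb{Z}_{\ge 0},
\]
where $a_{rP}$ counts how often ring type r is (recursively) contained in packing P and $d_r$ is its demand; pricing generates new candidate packings with negative reduced cost, whose geometric feasibility must then be verified, hence the name price-and-verify.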
The complexity in large-scale optimization can lie in both handling the objective function and handling the constraint set. In this respect, stochastic Frank-Wolfe algorithms occupy a unique position as they alleviate both computational burdens, by querying only approximate first-order information from the objective and by maintaining feasibility of the iterates without using projections. In this paper, we improve the quality of their first-order information by blending in adaptive gradients. We derive convergence rates and demonstrate the computational advantage of our method over the state-of-the-art stochastic Frank-Wolfe algorithms on both convex and nonconvex objectives. The experiments further show that our method can improve the performance of adaptive gradient algorithms for constrained optimization.
Though gait asymmetry is used as a metric of functional recovery in clinical rehabilitation, there is no consensus on an ideal method for its evaluation. Various methods have been proposed but are limited in scope, as they can often use only positive signals or discrete values extracted from time-scale data as input. By defining five symmetry axioms, a framework for benchmarking existing methods was established and a new method was described here for the first time: the weighted universal symmetry index (wUSI), which overcomes limitations of other methods. Both existing methods and the wUSI were mathematically compared to each other and in respect to their ability to fulfill the proposed symmetry axioms. Eligible methods that fulfilled these axioms were then applied using both discrete and continuous approaches to ground reaction force (GRF) data collected from healthy gait, both with and without artificially induced asymmetry using a single instrumented elbow crutch. The wUSI with a continuous approach was the only symmetry method capable of determining GRF asymmetries in different walking conditions in all three planes of motion. When used with a continuous approach, the wUSI method was able to detect asymmetries while avoiding artificial inflation, a common problem reported in other methods. In conclusion, the wUSI is proposed as a universal method to quantify three-dimensional GRF asymmetries, which may also be expanded to other biomechanical signals.
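As a reference point, one widely used discrete measure that such axioms can be tested against is the classical symmetry index computed from a single value X per side and gait cycle (stated generically here; the exact set of benchmarked methods is given in the paper):
\[
\mathrm{SI} = \frac{X_L - X_R}{\tfrac{1}{2}\,(X_L + X_R)} \times 100\%,
\]
which uses only discrete values and can inflate artificially when the denominator is small, the kind of limitation the continuous wUSI is designed to avoid.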
Quantitative PA tomography of high resolution 3-D images: experimental validation in tissue phantoms
(2020)
Quantitative photoacoustic tomography aims to recover the spatial distribution of absolute chromophore concentrations and their ratios from deep-tissue, high-resolution images. In this study, a model-based inversion scheme based on a Monte-Carlo light transport model is experimentally validated on 3-D multispectral images of a tissue phantom acquired using an all-optical scanner with a planar detection geometry. A calibrated absorber allowed scaling of the measured data during the inversion, while an acoustic correction method was employed to compensate for the effects of limited-view detection. Chromophore- and fluence-dependent step sizes and Adam optimization were implemented to achieve rapid convergence. High-resolution 3-D maps of absolute concentrations and their ratios were recovered with high accuracy. Potential applications of this method include quantitative functional and molecular photoacoustic tomography of deep tissue in preclinical and clinical studies.
Friction in liquids arises from conservative forces between molecules and atoms. Although the hydrodynamics at the nanoscale is the subject of intense research and despite the enormous interest in the non-Markovian dynamics of single molecules and solutes, the onset of friction from the atomistic scale has so far not been demonstrated. Here, we fill this gap based on frequency-resolved friction data from high-precision simulations of three prototypical liquids, including water. Combining these with theory, we show that friction in liquids emerges abruptly at a characteristic frequency, beyond which viscous liquids appear as non-dissipative, elastic solids. Concomitantly, the molecules experience Brownian forces that display persistent correlations. A critical test of the generalised Stokes–Einstein relation, mapping the friction of single molecules to the visco-elastic response of the macroscopic sample, disproves the relation for Newtonian fluids, but substantiates it exemplarily for water and a moderately supercooled liquid. The employed approach is suitable to yield insights into vitrification mechanisms and the intriguing mechanical properties of soft materials.
Constrained second-order convex optimization algorithms are the method of choice when a high-accuracy solution to a problem is needed, due to their local quadratic convergence. These algorithms require the solution of a constrained quadratic subproblem at every iteration. We present the Second-Order Conditional Gradient Sliding (SOCGS) algorithm, which uses a projection-free algorithm to solve the constrained quadratic subproblems inexactly. When the feasible region is a polytope, the algorithm converges quadratically in primal gap after a finite number of linearly convergent iterations. Once in the quadratic regime, the SOCGS algorithm requires $\mathcal{O}(\log\log(1/\varepsilon))$ first-order and Hessian oracle calls and $\mathcal{O}(\log(1/\varepsilon)\log\log(1/\varepsilon))$ linear minimization oracle calls to achieve an $\varepsilon$-optimal solution. This algorithm is useful when the feasible region can only be accessed efficiently through a linear optimization oracle, and computing first-order information of the function, although possible, is costly.
We present a software-assisted workflow for the alignment and matching of filamentous structures across a 3D stack of serial images. This is achieved by combining automatic methods, visual validation, and interactive correction. After an initial alignment, the user can continuously improve the result by interactively correcting landmarks or matches of filaments. Supported by a visual quality assessment of regions that have been already inspected, this allows a trade-off between quality and manual labor. The software tool was developed to investigate cell division by quantitative 3D analysis of microtubules (MTs) in both mitotic and meiotic spindles. For this, each spindle is cut into a series of semi-thick physical sections, of which electron tomograms are acquired. The serial tomograms are then stitched and non-rigidly aligned to allow tracing and connecting of MTs across tomogram boundaries. In practice, automatic stitching alone provides only an incomplete solution, because large physical distortions and a low signal-to-noise ratio often cause experimental difficulties. To derive 3D models of spindles despite the problems related to sample preparation and subsequent data collection, semi-automatic validation and correction is required to remove stitching mistakes. However, due to the large number of MTs in spindles (up to 30k) and their resulting dense spatial arrangement, a naive inspection of each MT is too time consuming. Furthermore, an interactive visualization of the full image stack is hampered by the size of the data (up to 100 GB). Here, we present a specialized, interactive, semi-automatic solution that considers all requirements for large-scale stitching of filamentous structures in serial-section image stacks. The key to our solution is a careful design of the visualization and interaction tools for each processing step to guarantee real-time response, and an optimized workflow that efficiently guides the user through datasets.
Cycle inequalities play an important role in the polyhedral study of the periodic timetabling problem in public transport. We give the first pseudo-polynomial time separation algorithm for cycle inequalities, and we contribute a rigorous proof for the pseudo-polynomial time separability of the change-cycle inequalities. Moreover, we provide several NP-completeness results, indicating that pseudo-polynomial time is best possible. The efficiency of these cutting planes is demonstrated on real-world instances of the periodic timetabling problem.