In linear optimization, matrix structure can often be exploited algorithmically. However, beneficial presolving reductions sometimes destroy the special structure of a given problem. In this article, we discuss structure-aware implementations of presolving as part of a parallel interior-point method to solve linear programs with block-diagonal structure, including both linking variables and linking constraints. While presolving reductions are often mathematically simple, their implementation in a high-performance computing environment is a complex endeavor. We report results on impact, performance, and scalability of the resulting presolving routines on real-world energy system models with up to 700 million nonzero entries in the constraint matrix.
Cycle inequalities play an important role in the polyhedral study of the periodic timetabling problem in public transport. We give the first pseudo-polynomial time separation algorithm for cycle inequalities, and we contribute a rigorous proof for the pseudo-polynomial time separability of the change-cycle inequalities. Moreover, we provide several NP-completeness results, indicating that pseudo-polynomial time is best possible. The efficiency of these cutting planes is demonstrated on real-world instances of the periodic timetabling problem.
We introduce a concurrent solver for the periodic event scheduling problem (PESP). It combines mixed integer programming techniques, the modulo network simplex method, satisfiability approaches, and a new heuristic based on maximum cuts. Running these components in parallel speeds up the overall solution process. This enables us to significantly improve the current upper and lower bounds for all benchmark instances of the library PESPlib.
Understanding the pathophysiological processes of cartilage degradation requires adequate model systems to develop therapeutic strategies towards osteoarthritis (OA). Although different in vitro or in vivo models have been described, further comprehensive approaches are needed to study specific disease aspects. This study aimed to combine in vitro and in silico modeling based on a tissue-engineering approach using mesenchymal condensation to mimic cytokine-induced cellular and matrix-related changes during cartilage degradation. Thus, scaffold-free cartilage-like constructs (SFCCs) were produced based on self-organization of mesenchymal stromal cells (mesenchymal condensation) and (i) characterized regarding their cellular and matrix composition, or (ii) treated with interleukin-1β (IL-1β) and tumor necrosis factor α (TNFα) for 3 weeks to simulate OA-related matrix degradation. In addition, an existing mathematical model based on partial differential equations was optimized and transferred to the underlying settings to simulate the distribution of IL-1β, type II collagen degradation, and cell number reduction. By combining in vitro and in silico methods, we aim to develop a valid, efficient alternative approach to examine and predict disease progression and the effects of new therapeutics.
Recently, Kronqvist et al. (J Global Optim 64(2):249–272, 2016) rediscovered the supporting hyperplane algorithm of Veinott (Oper Res 15(1):147–152, 1967) and demonstrated its computational benefits for solving convex mixed integer nonlinear programs. In this paper we derive the algorithm from a geometric point of view. This enables us to show that the supporting hyperplane algorithm is equivalent to Kelley’s cutting plane algorithm (J Soc Ind Appl Math 8(4):703–712, 1960) applied to a particular reformulation of the problem. As a result, we extend the applicability of the supporting hyperplane algorithm to convex problems represented by a class of general, not necessarily convex nor differentiable, functions.
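To make the connection concrete, here is a minimal sketch of Kelley's cutting plane algorithm on a one-dimensional convex problem, using scipy's LP solver for the master problem; the objective, interval, and tolerance are illustrative choices, not taken from the paper.

```python
import math
from scipy.optimize import linprog

f  = lambda x: math.exp(x) + x**2           # convex objective (illustrative)
df = lambda x: math.exp(x) + 2 * x

cuts = [-2.0, 2.0]                          # initial linearization points
for _ in range(50):
    # master LP: minimize t  s.t.  t >= f(xk) + f'(xk)(x - xk) for all cuts
    A = [[df(xk), -1.0] for xk in cuts]
    b = [df(xk) * xk - f(xk) for xk in cuts]
    res = linprog([0, 1], A_ub=A, b_ub=b, bounds=[(-2, 2), (None, None)])
    x_new, lower = res.x[0], res.fun        # LP value is a lower bound
    if f(x_new) - lower < 1e-8:             # gap between upper and lower bound
        break
    cuts.append(x_new)                      # add a new cutting plane
print(x_new, f(x_new))
```

The supporting hyperplane variant differs mainly in where the linearization points are placed (on the boundary of the feasible region); the paper's geometric reformulation is what makes the two views coincide.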
Recently, Intel released the oneAPI programming environment. With Data Parallel C++ (DPC++), oneAPI enables codes to target multiple hardware architectures like multi-core CPUs, GPUs, and even FPGAs or other hardware using a single source. For legacy codes that were written for Nvidia GPUs, a compatibility tool is provided which facilitates the transition to the SYCL-based DPC++ programming language. This paper presents early experiences when using both the compatibility tool and oneAPI, as well as the employed extension to the SYCL programming standard, for the tsunami simulation code easyWave. A performance study compares the original code running on Xeon processors using OpenMP as well as CUDA with the performance of the DPC++ counterpart on multi-core CPUs as well as integrated GPUs.
The images of D’Arcy Wentworth Thompson’s book “On Growth and Form” attained iconic status and became influential for biometrics and other mathematical approaches to organismic form. In particular, this is true for those of the chapter on the theory of transformation, which has even had an impact on the arts and humanities. Based on his approach, Thompson formulated far-reaching conclusions with a partly anti-Darwinian stance. Here, we use the example of Thompson’s transformation of crab carapaces to test to what degree the transformations of grids, landmarks, and shapes result in congruent images. For comparison, we applied the same series of tests to digitized carapaces of real crabs. Both approaches show similar results. Only the simple transformations show a reasonable form of congruence. In particular, the transformations to majoid spider crabs reveal a complicated transformation of grids with partly crossing lines. By contrast, the carapace of the lithodid species is created relatively easily, despite the fact that it is not a brachyuran, but evolved a spider crab-like shape convergently from a hermit crab ancestor.
This paper investigates the estimation of the size of Branch-and-Bound (B&B) trees for solving mixed-integer programs. We first prove that the size of the B&B tree cannot be approximated within a factor of 2 for general binary programs, unless P equals NP. Second, we review measures of the progress of the B&B search, such as the gap, and propose a new measure, which we call leaf frequency.
We study two simple ways to transform these progress measures into B&B tree size estimates, either as a direct projection, or via double-exponential smoothing, a standard time-series forecasting technique. We then combine different progress measures and their trends into nontrivial estimates using Machine Learning techniques, which yields more precise estimates than any individual measure. The best method we have identified uses all individual measures as features of a random forest model.
In a large computational study, we train and validate all methods on the publicly available MIPLIB and Coral general purpose benchmark sets. On average, the best method estimates B&B tree sizes within a factor of 3 on the set of unseen test instances even during the early stage of the search, and improves in accuracy as the search progresses. It also achieves a factor 2 over the entire search on each out of six additional sets of homogeneous instances we have tested. All techniques are available in version 7 of the branch-and-cut framework SCIP.
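As a rough illustration of the forecasting component, the following sketch applies double-exponential (Holt) smoothing to a generic progress measure and projects a tree-size estimate from it; the numbers and the simple nodes/progress projection are hypothetical stand-ins, not SCIP's actual estimator.

```python
def holt_forecast(series, alpha=0.5, beta=0.5, horizon=1):
    """Double-exponential smoothing: track a level and a trend, then extrapolate."""
    level, trend = series[0], series[1] - series[0]
    for y in series[2:]:
        prev_level = level
        level = alpha * y + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return level + horizon * trend

# hypothetical progress measure (fraction of the search estimated to be done),
# sampled at regular node intervals during branch and bound
progress = [0.02, 0.05, 0.09, 0.14, 0.20]
nodes_so_far = 5000
direct_estimate   = nodes_so_far / progress[-1]              # direct projection
smoothed_estimate = nodes_so_far / holt_forecast(progress)   # trend-aware projection
print(direct_estimate, smoothed_estimate)
```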
A prerequisite for many analysis tasks in modern comparative biology is the segmentation of 3-dimensional (3D) images of the specimens being investigated (e.g. from microCT data). Depending on the specific imaging technique that was used to acquire the images and on the image resolution, different segmentation tools will be required. While some standard tools exist that can often be applied for specific subtasks, building whole processing pipelines solely from standard tools is often difficult. Some tasks may even necessitate the implementation of manual interaction tools to achieve a quality that is sufficient for the subsequent analysis. In this work, we present a pipeline of segmentation tools that can be used for the semi-automatic segmentation and quantitative analysis of voids in tissue (i.e. internal structural porosity). We use this pipeline to analyze lacuno-canalicular networks in stingray tesserae from 3D images acquired with synchrotron microCT.
* The first step of this processing pipeline, the segmentation of the tesserae, was performed using standard marker-based watershed segmentation. The efficient processing of the next two steps, that is, the segmentation of all lacunae spaces belonging to a specific tessera and the separation of these spaces into individual lacunae, required modern, recently developed tools.
* For proofreading, we developed a graph-based interactive method that allowed us to quickly split lacunae that were accidentally merged, and to merge lacunae that were wrongly split.
* Finally, the tesserae and their corresponding lacunae were subdivided into anatomical regions of interest (structural wedges) using a semi-manual approach.
Project plan4res (www.plan4res.eu) involves the development of a modular framework for the modeling and analysis of energy system strategies at the European level. It will include models describing the investment and operation decisions for a wide variety of technologies related to electricity and non-electricity energy sectors across generation, consumption, transmission and distribution. The modularity of the framework allows for detailed modelling of major areas of energy systems that can help stakeholders from different backgrounds to focus on specific topics related to the energy landscape in Europe and to receive relevant outputs and insights tailored to their needs. The current paper presents a qualitative description of key concepts and methods of the novel modular optimization framework and provides insights into the corresponding energy landscape.
Surgical tool segmentation in endoscopic videos is an important component of computer-assisted intervention systems. The recent success of image-based solutions using fully-supervised deep learning approaches can be attributed to the collection of big labeled datasets. However, the annotation of a big dataset of real videos can be prohibitively expensive and time-consuming. Computer simulations could alleviate the manual labeling problem; however, models trained on simulated data do not generalize to real data. This work proposes a consistency-based framework for joint learning of simulated and real (unlabeled) endoscopic data to bridge this performance generalization issue. Empirical results on two data sets (15 videos of the Cholec80 and EndoVis'15 datasets) highlight the effectiveness of the proposed Endo-Sim2Real method for instrument segmentation. We compare the segmentation of the proposed approach with state-of-the-art solutions and show that our method improves segmentation both in terms of quality and quantity.
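The following is a generic consistency-regularization sketch of such a joint objective in PyTorch: a supervised segmentation loss on labeled simulated frames plus a consistency term that makes predictions on two augmentations of unlabeled real frames agree. The tiny stand-in network, augmentation, and weighting are placeholders, not the exact Endo-Sim2Real loss.

```python
import torch
import torch.nn.functional as F

def joint_loss(model, sim_imgs, sim_masks, real_imgs, augment, lam=1.0):
    # supervised segmentation loss on labeled simulated data
    sup = F.cross_entropy(model(sim_imgs), sim_masks)
    # consistency: predictions on two augmentations of the same unlabeled
    # real frames should agree
    p1 = torch.softmax(model(augment(real_imgs)), dim=1)
    p2 = torch.softmax(model(augment(real_imgs)), dim=1)
    return sup + lam * F.mse_loss(p1, p2)

model   = torch.nn.Conv2d(3, 2, kernel_size=1)       # stand-in for a segmentation net
augment = lambda x: x + 0.05 * torch.randn_like(x)   # stand-in augmentation
sim   = torch.randn(2, 3, 32, 32)                    # simulated frames
masks = torch.randint(0, 2, (2, 32, 32))             # binary tool masks
real  = torch.randn(2, 3, 32, 32)                    # unlabeled real frames
print(joint_loss(model, sim, masks, real, augment))
```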
Automatic recognition of surgical phases is an important component for developing an intra-operative context-aware system. Prior work in this area focuses on recognizing short-term tool usage patterns within surgical phases. However, the difference between intra- and inter-phase tool usage patterns has not been investigated for automatic phase recognition. We developed a Recurrent Neural Network (RNN), in particular a state-preserving Long Short Term Memory (LSTM) architecture, to utilize the long-term evolution of tool usage within complete surgical procedures. For fully automatic tool presence detection from surgical video frames, a Convolutional Neural Network (CNN)-based architecture, namely ZIBNet, is employed. Our proposed approach outperformed EndoNet by 8.1% on overall precision for phase detection tasks and by 12.5% on meanAP for tool recognition tasks.
Motivation: The ever-rising volume of patients, the high maintenance cost of operating rooms, and the time-consuming analysis of surgical skills are fundamental problems that hamper the practical training of the next generation of surgeons. Hospitals prefer to keep surgeons busy in real operations rather than training young surgeons, for obvious economic reasons. One fundamental need in surgical training is to reduce the time needed by the senior surgeon to review the endoscopic procedures performed by the young surgeon while minimizing the subjective bias in evaluation. The unprecedented performance of deep learning ushers in a new age of data-driven automatic analysis of surgical skills.
Method: Deep learning is capable of efficiently analyzing thousands of hours of laparoscopic video footage to provide an objective assessment of surgical skills. However, the traditional end-to-end setting of deep learning (video in, skill assessment out) is not explainable. Our strategy is to utilize the surgical process modeling framework to divide the surgical process into understandable components. This provides the opportunity to employ deep learning for superior yet automatic detection and evaluation of several aspects of laparoscopic cholecystectomy such as surgical tool and phase detection.
We employ ZIBNet for the detection of surgical tool presence. ZIBNet employs pre-processing based on tool usage imbalance, a transfer learned 50-layer residual network (ResNet-50) and temporal smoothing. To encode the temporal evolution of tool usage (over the entire video sequence) that relates to the surgical phases, Long Short Term Memory (LSTM) units are employed with long-term dependency.
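A minimal PyTorch sketch of this idea: an LSTM consumes per-frame tool-presence vectors (7 tools in Cholec80) and emits phase logits, with the hidden state carried across chunks of the video to preserve long-term context. The layer sizes are illustrative, not the ones used here.

```python
import torch
import torch.nn as nn

class PhaseLSTM(nn.Module):
    def __init__(self, n_tools=7, hidden=64, n_phases=7):
        super().__init__()
        self.lstm = nn.LSTM(n_tools, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_phases)

    def forward(self, tool_seq, state=None):
        # state can be fed back in for the next chunk of the same video,
        # preserving temporal context across the whole procedure
        out, state = self.lstm(tool_seq, state)
        return self.head(out), state

seq = torch.rand(1, 100, 7)        # 100 frames of tool-presence scores
logits, state = PhaseLSTM()(seq)   # per-frame phase logits
print(logits.shape)
```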
Dataset: We used the Cholec80 dataset, which consists of 80 videos of laparoscopic cholecystectomy performed by 13 surgeons, divided equally for training and testing. In these videos, up to three different tools (among 7 types of tools) can be present in a frame.
Results: The mean average precision of the detection of all tools is 93.5, ranging between 86.8 and 99.3, a significant improvement (p < 0.01) over the previous state-of-the-art. We observed that less frequent tools like Scissors, Irrigator, Specimen Bag, etc. are more related to phase transitions. The overall precision (recall) of the detection of all surgical phases is 79.6 (81.3).
Conclusion: While this is not the end goal of surgical skill analysis, the development of such a technological platform is essential toward a data-driven objective understanding of surgical skills. In the future, we plan to investigate surgeon-in-the-loop analysis and feedback for surgical skill analysis.
We propose generalizations of the T²-statistic of Hotelling and the Bhattacharyya distance for data taking values in Lie groups. A key feature of the derived measures is that they are compatible with the group structure even for manifolds that do not admit any bi-invariant metric. This property, e.g., assures analysis that does not depend on the reference shape, thus preventing bias due to arbitrary choices thereof. Furthermore, the generalizations agree with the common definitions for the special case of flat vector spaces, guaranteeing consistency. Employing a permutation test setup, we further obtain nonparametric, two-sample testing procedures that themselves are bi-invariant and consistent. We validate our method in group tests, revealing significant differences in hippocampal shape between individuals with mild cognitive impairment and normal controls.
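The permutation test machinery itself is standard and easy to sketch; here with a Euclidean mean-difference statistic as a stand-in for the bi-invariant generalizations of Hotelling's T² and the Bhattacharyya distance used above.

```python
import numpy as np

def perm_test(X, Y, stat, n_perm=10000, seed=0):
    """Two-sample permutation test: p-value for stat(X, Y) under label exchange."""
    rng = np.random.default_rng(seed)
    observed = stat(X, Y)
    Z, n, hits = np.concatenate([X, Y]), len(X), 0
    for _ in range(n_perm):
        rng.shuffle(Z)                        # permute group labels
        hits += stat(Z[:n], Z[n:]) >= observed
    return (hits + 1) / (n_perm + 1)

mean_diff = lambda X, Y: np.linalg.norm(X.mean(axis=0) - Y.mean(axis=0))
X = np.random.default_rng(1).normal(0.0, 1.0, (20, 3))   # group 1 (synthetic)
Y = np.random.default_rng(2).normal(0.5, 1.0, (20, 3))   # group 2 (shifted)
print(perm_test(X, Y, mean_diff))
```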
The German high-pressure natural gas transport network consists of thousands of interconnected elements spread over more than 120,000 km of pipelines built during the last 100 years. During the last decade, we have spent many person-years to extract consistent data out of the available sources, both public and private. Based on two case studies, we present some of the challenges we encountered.
Preparing consistent, high-quality data is surprisingly hard, and the effort necessary can hardly be overestimated. Thus, it is particularly important to decide which strategy to adopt regarding data curation. What level of precision of the data is necessary? When is it more efficient to work with data that is just sufficiently correct on average?
In the case studies we describe our experiences and the strategies we adopted to deal with the obstacles and to minimize future effort.
Finally, we would like to emphasize that well-compiled data sets, publicly available for research purposes, provide the grounds for building innovative algorithmic solutions to the challenges of the future.
Conflicting hypotheses about the relationships among the major lineages of aculeate Hymenoptera clearly show the necessity of detailed comparative morphological studies. Using micro-computed tomography and 3D reconstructions, the skeletal musculature of the meso- and metathorax and the first and second abdominal segment in Apoidea are described. Females of Sceliphron destillatorium, Sphex (Fernaldina) lucae (both Sphecidae), and Ampulex compressa (Ampulicidae) were examined. The morphological terminology provided by the Hymenoptera Anatomy Ontology is used. Up to 42 muscles were found. The three species differ in certain numerical and structural aspects. Ampulicidae differs significantly from Sphecidae in the metathorax and the anterior abdomen. The metapleural apodeme and paracoxal ridge are weakly developed in Ampulicidae, which affect some muscular structures. Furthermore, the muscles that insert on the coxae and trochanters are broader and longer in Ampulicidae. A conspicuous characteristic of Sphecidae is the absence of the metaphragma. Overall, we identified four hitherto unrecognized muscles. Our work suggests additional investigations on structures discussed in this paper.
Context. Cometary outgassing is induced by the sublimation of ices and the ejection of dust originating from the nucleus. Therefore measuring the composition and dynamics of the cometary gas provides information concerning the interior composition of the body. Nevertheless, the bulk composition differs from the coma composition, and numerical models are required to simulate the main physical processes induced by the illumination of the icy body.
Aims. The objectives of this study are to bring new constraints on the interior composition of the nucleus of comet 67P/Churyumov-Gerasimenko (hereafter 67P) by comparing the results of a thermophysical model applied to the nucleus of 67P with the coma measurements made by the Reflectron-type Time-Of-Flight (RTOF) mass spectrometer. The latter is one of the three instruments of the Rosetta Orbiter Spectrometer for Ion and Neutral Analysis (ROSINA) used during the Rosetta mission.
Methods. Using a thermophysical model of the comet nucleus, we studied the evolution of the stratigraphy (position of the sublimation and crystallisation fronts), the temperature of the surface and subsurface, and the dynamics and spatial distribution of the volatiles (H2O, CO2 and CO). We compared them with the in situ measurements from ROSINA/RTOF and an inverse coma model.
Results. We observed the evolution of the surface and near-surface temperature, and the deepening of sublimation fronts. The thickness of the dust layer covering the surface strongly influences the H2O outgassing but not the more volatile species. The CO outgassing is highly sensitive to the initial CO/H2O ratio, as well as to the presence of trapped CO in the amorphous ice.
Conclusions. The study of the influence of the initial parameters on the computed volatile fluxes and the comparison with ROSINA/RTOF measurements provide a range of values for an initial dust mantle thickness and a range of values for the volatile ratio. These imply the presence of trapped CO. Nevertheless, further studies are required to reproduce the strong change of behaviour observed in RTOF measurements between September 2014 and February 2015.
Intrinsic and parametric regression models are of high interest for the statistical analysis of manifold-valued data such as images and shapes. The standard linear ansatz has been generalized to geodesic regression on manifolds, making it possible to analyze dependencies of random variables that spread along generalized straight lines. Nevertheless, in some scenarios, the evolution of the data cannot be modeled adequately by a geodesic. We present a framework for nonlinear regression on manifolds by considering Riemannian splines, whose segments are Bézier curves, as trajectories. Unlike variational formulations that require time-discretization, we take a constructive approach that provides efficient and exact evaluation by virtue of the generalized de Casteljau algorithm. We validate our method in experiments on the reconstruction of the periodic motion of the mitral valve as well as the analysis of femoral shape changes during the course of osteoarthritis, endorsing Bézier spline regression as an effective and flexible tool for manifold-valued regression.
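The constructive evaluation rests on the de Casteljau algorithm, sketched here in its Euclidean form; on a manifold, each linear interpolation is replaced by the corresponding point on a geodesic, which is exactly the generalization referred to above.

```python
import numpy as np

def de_casteljau(control, t):
    """Evaluate a Bezier curve at parameter t by repeated interpolation.
    Euclidean case: lerp(a, b, t) = (1 - t) * a + t * b. On a Riemannian
    manifold, lerp is replaced by the point at parameter t on the geodesic
    from a to b, yielding the generalized algorithm."""
    pts = np.asarray(control, dtype=float)
    while len(pts) > 1:
        pts = (1 - t) * pts[:-1] + t * pts[1:]   # one round of interpolation
    return pts[0]

control = [[0, 0], [1, 2], [3, 3], [4, 0]]       # illustrative control points
print(de_casteljau(control, 0.5))
```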
Research software has become a central asset in academic research. It optimizes existing and enables new research methods, implements and embeds research knowledge, and constitutes an essential research product in itself. Research software must be sustainable in order to understand, replicate, reproduce, and build upon existing research or conduct new research effectively. In other words, software must be available, discoverable, usable, and adaptable to new needs, both now and in the future. Research software therefore requires an environment that supports sustainability. Hence, a change is needed in the way research software development and maintenance are currently motivated, incentivized, funded, structurally and infrastructurally supported, and legally treated. Failing to do so will threaten the quality and validity of research. In this paper, we identify challenges for research software sustainability in Germany and beyond, in terms of motivation, selection, research software engineering personnel, funding, infrastructure, and legal aspects. Besides researchers, we specifically address political and academic decision-makers to increase awareness of the importance and needs of sustainable research software practices. In particular, we recommend strategies and measures to create an environment for sustainable research software, with the ultimate goal to ensure that software-driven research is valid, reproducible and sustainable, and that software is recognized as a first class citizen in research. This paper is the outcome of two workshops run in Germany in 2019, at deRSE19 - the first International Conference of Research Software Engineers in Germany - and a dedicated DFG-supported follow-up workshop in Berlin.
Friction in liquids arises from conservative forces between molecules and atoms. Although the hydrodynamics at the nanoscale is subject of intense research and despite the enormous interest in the non-Markovian dynamics of single molecules and solutes, the onset of friction from the atomistic scale so far could not be demonstrated. Here, we fill this gap based on frequency-resolved friction data from high-precision simulations of three prototypical liquids, including water. Combining with theory, we show that friction in liquids emerges abruptly at a characteristic frequency, beyond which viscous liquids appear as non-dissipative, elastic solids. Concomitantly, the molecules experience Brownian forces that display persistent correlations. A critical test of the generalised Stokes–Einstein relation, mapping the friction of single molecules to the visco-elastic response of the macroscopic sample, disproves the relation for Newtonian fluids, but substantiates it exemplarily for water and a moderately supercooled liquid. The employed approach is suitable to yield insights into vitrification mechanisms and the intriguing mechanical properties of soft materials.
An advantageous property of mesh-based geometric morphometrics (GM) compared to landmark-based approaches is the possibility of precisely examining highly irregular shapes and highly topographic surfaces. In the case of spherical-harmonics-based GM, the main requirement is a completely closed mesh surface, which often is not given, especially when dealing with natural objects. Here we present a methodological workflow to prepare 3D segmentations containing large cavity openings for conducting spherical-harmonics-based GM. This is exemplified with a case study on claws of hermit crabs (Paguroidea, Decapoda, Crustacea), whereby joint openings – between manus and “movable finger” – typify the large-cavity-opening problem. We found a methodology, including an ambient-occlusion-based segmentation algorithm, that leads to results precise and suitable for studying the inter- and intraspecific differences in shape of hermit crab claws. Statistical analyses showed a significant separation between all examined diogenid and pagurid claws, whereas the separation between all left and right claws did not show significance. Additionally, the procedure offers other benefits: it is easy to reproduce and creates little variance in the data, closures integrate smoothly into the total structures, and the algorithm saves a significant amount of time.
This master thesis investigates the use and behaviour of a mixed finite element formulation for the simulation of garments.
The garment is modelled as an isotropic shell and is related to its mid-surface by energetic degeneration. Based on this, an energy functional is constructed which contains the deformation and the mid-surface vector as degrees of freedom. It is then shown why this problem does not correspond to a saddle point problem, but to a non-convex energy minimization.
The energy minimization is implemented in the ZIB-internal FE framework Kaskade 7.4; a geometrically linear problem and several geometrically non-linear problems are examined, and for a selected non-linear example a comparison is made with an existing implementation based on Morley elements.
The further evaluations include the analysis of the quantitative and qualitative results, the solution method used, the behaviour of the system energy, and the CPU time used.
This thesis presents a method for interpolating data using a neural network. The data is sparse and perturbed and is used as training data for a small neural network. For severely perturbed data, the network does not manage to find a smooth interpolation. But since the data resembles the solution to the one-dimensional, time-independent heat equation, the weak form of this PDE and subsequently its functional can be written down. If the functional is minimized, a solution to the weak form of the heat equation is found. The functional is then added to the traditional loss function of a neural network, the mean squared error between the network prediction and the given data, in order to smooth out fluctuations and interpolate between distant grid points. This way, the network minimizes both the mean squared error and the functional, resulting in a smoother curve that can be used to predict u(x) for any grid point x.
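A minimal PyTorch sketch of such a combined objective, under the assumption that the energy functional of the 1-D steady heat equation, J(u) = ∫ (1/2 u'² − f·u) dx, is approximated on a grid and weighted into the loss; the network size, weight, and data are illustrative, not those of the thesis.

```python
import torch

net = torch.nn.Sequential(torch.nn.Linear(1, 16), torch.nn.Tanh(), torch.nn.Linear(16, 1))
x_data = torch.tensor([[0.0], [0.3], [0.7], [1.0]])      # sparse, perturbed samples
u_data = torch.tensor([[0.05], [0.28], [0.74], [0.98]])  # illustrative values
x_grid = torch.linspace(0, 1, 50).reshape(-1, 1).requires_grad_(True)
f_src = torch.zeros_like(x_grid)                          # heat source term (zero here)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    mse = ((net(x_data) - u_data) ** 2).mean()            # data-fit term
    u = net(x_grid)
    du = torch.autograd.grad(u.sum(), x_grid, create_graph=True)[0]
    J = (0.5 * du**2 - f_src * u).mean()                  # energy functional of the PDE
    loss = mse + 0.1 * J                                  # weighted combination
    loss.backward()
    opt.step()
```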
Since the elimination algorithm of Fourier and Motzkin, many different methods have been developed for solving linear programs. When analyzing the time complexity of LP algorithms, it is typically either assumed that calculations are performed exactly and bounds are derived on the number of elementary arithmetic operations necessary, or the cost of all arithmetic operations is considered through a bit-complexity analysis. Yet in practice, implementations typically use limited-precision arithmetic. In this paper we introduce the idea of a limited-precision LP oracle and study how such an oracle could be used within a larger framework to compute exact precision solutions to LPs. Under mild assumptions, it is shown that a polynomial number of calls to such an oracle and a polynomial number of bit operations is sufficient to compute an exact solution to an LP. This work provides a foundation for understanding and analyzing the behavior of the methods that are currently most effective in practice for solving LPs exactly.
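The flavor of such a framework can be sketched with iterative refinement, here on a square linear system rather than a full LP: a low-precision solve acts as the oracle, the residual is computed in exact rational arithmetic, scaled up, and handed back to the oracle, so each round adds roughly a fixed number of correct digits. This illustrates the refinement idea only, not the paper's algorithm.

```python
import numpy as np
from fractions import Fraction

A = [[Fraction(4), Fraction(1)], [Fraction(1), Fraction(3)]]
b = [Fraction(1), Fraction(2)]

def oracle(rhs):
    """Stand-in limited-precision solver: float32 accuracy only."""
    Af = np.array([[float(a) for a in row] for row in A], dtype=np.float32)
    return np.linalg.solve(Af, np.array(rhs, dtype=np.float32))

x = [Fraction(0), Fraction(0)]
for _ in range(5):
    # exact residual of the current candidate (rational arithmetic)
    r = [b[i] - sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]
    rmax = max(abs(ri) for ri in r)
    if rmax == 0:
        break
    s = Fraction(1) / rmax                       # zoom into the residual
    d = oracle([float(ri * s) for ri in r])      # low-precision correction
    x = [x[j] + Fraction(float(d[j])) / s for j in range(2)]
print([float(xi) for xi in x])                   # approaches (1/11, 7/11)
```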
Quantitative PA tomography of high resolution 3-D images: experimental validation in tissue phantoms
(2020)
Quantitative photoacoustic tomography aims to recover the spatial distribution of absolute chromophore concentrations and their ratios from deep-tissue, high-resolution images. In this study, a model-based inversion scheme based on a Monte-Carlo light transport model is experimentally validated on 3-D multispectral images of a tissue phantom acquired using an all-optical scanner with a planar detection geometry. A calibrated absorber allowed scaling of the measured data during the inversion, while an acoustic correction method was employed to compensate for the effects of limited-view detection. Chromophore- and fluence-dependent step sizes and Adam optimization were implemented to achieve rapid convergence. High-resolution 3-D maps of absolute concentrations and their ratios were recovered with high accuracy. Potential applications of this method include quantitative functional and molecular photoacoustic tomography of deep tissue in preclinical and clinical studies.
Two-dimensional electronic spectra (2DES) provide unique ways to track the energy transfer dynamics in light-harvesting complexes. The interpretation of the peaks and structures found in experimentally recorded 2DES is often not straightforward, since several processes are imaged simultaneously. The choice of specific pulse polarization sequences helps to disentangle the sometimes convoluted spectra, but brings along other disturbances. We show by detailed theoretical calculations how 2DES of the Fenna-Matthews-Olson complex are affected by rotational and conformational disorder of the chromophores.
Transient capture of electrons in magnetic fields, or: comets in the restricted three-body problem
(2020)
The motion of celestial bodies in astronomy is closely related to the orbits of electrons encircling an atomic nucleus. Bohr and Sommerfeld presented a quantization scheme of the classical orbits to analyze the eigenstates of the hydrogen atom. Here we discuss another close connection of classical trajectories and quantum mechanical states: the transient dynamics of objects around a nucleus. In this setup a comet (or an electron) is trapped for a while in the vicinity of a parent object (Jupiter or an atomic nucleus), but eventually escapes after many revolutions around the center of attraction.
For providing railway services, a company's rolling stock is one of, if not the, most important ingredients: it determines the number of passenger or cargo trips the company can offer as well as the quality passengers experience during the train ride, and it is often tied to the image of the company itself. Thus, it is highly desirable to have the available rolling stock in the best shape possible. Moreover, in many countries, such as Germany, where our industrial partner DB Fernverkehr AG (DBF) is located, laws enforce regular vehicle inspections to ensure the safety of the passengers. This leads to rolling stock optimization problems with complex rules for vehicle maintenance. This problem is well studied in the literature; see, for example, Maroti and Kroon 2005 or Cordeau et al. 2001 for applications including vehicle maintenance. The contribution of this paper is a new algorithmic approach to solve the Rolling Stock Rotation Problem for the ICE high-speed train fleet of DBF with included vehicle maintenance. It is based on a relaxation of a mixed integer linear programming model with an iterative cut generation to enforce the feasibility of a solution of the relaxation in the solution space of the original problem. The resulting mixed integer linear programming model is based on a hypergraph approach presented in Borndörfer et al. 2015. The new approach is tested on real-world instances modeling different scenarios for the ICE high-speed train network in Germany and compared to the approaches of Reuther 2017 that are in operation at DB Fernverkehr AG. The approach shows a significant reduction of the run time to produce solutions with comparable or even better objective function values.
We analytically determine Jacobi fields and parallel transports and compute geodesic regression in Kendall’s shape space. Using the derived expressions, we can fully leverage the geometry via Riemannian optimization and thereby reduce the computational expense by several orders of magnitude over common, nonlinear constrained approaches. The methodology is demonstrated by performing a longitudinal statistical analysis of epidemiological shape data. As an example application we have chosen 3D shapes of knee bones, reconstructed from image data of the Osteoarthritis Initiative (OAI). Comparing subject groups with incident and developing osteoarthritis versus normal controls, we find clear differences in the temporal development of femur shapes. This paves the way for early prediction of incident knee osteoarthritis, using geometry data alone.
Conflict-driven Pseudo-Boolean (PB) solvers optimize 0-1 integer linear programs by extending the conflict-driven clause learning (CDCL) paradigm from SAT solving. Though PB solvers have the potential to be exponentially more efficient than CDCL solvers in theory, in practice they can sometimes get hopelessly stuck even when the linear program (LP) relaxation is infeasible over the reals. Inspired by mixed integer programming (MIP), we address this problem by interleaving incremental LP solving with cut generation within the conflict-driven PB search. This hybrid approach, which for the first time combines MIP techniques with full-blown conflict analysis over linear inequalities using the cutting planes method, significantly improves performance on a wide range of benchmarks, approaching a "best of two worlds" scenario between SAT-style conflict-driven search and MIP-style branch-and-cut.
The most important ingredient for solving mixed-integer nonlinear programs (MINLPs) to global epsilon-optimality with spatial branch and bound is a tight, computationally tractable relaxation. Due to both theoretical and practical considerations, relaxations of MINLPs are usually required to be convex. Nonetheless, current optimization solvers can often successfully handle a moderate presence of nonconvexities, which opens the door for the use of potentially tighter nonconvex relaxations. In this work, we exploit this fact and make use of a nonconvex relaxation obtained via aggregation of constraints: a surrogate relaxation. These relaxations were actively studied for linear integer programs in the 70s and 80s, but they have been scarcely considered since. We revisit these relaxations in an MINLP setting and show the computational benefits and challenges they can have. Additionally, we study a generalization of such relaxation that allows for multiple aggregations simultaneously and present the first algorithm that is capable of computing the best set of aggregations. We propose a multitude of computational enhancements for improving its practical performance and evaluate the algorithm’s ability to generate strong dual bounds through extensive computational experiments.
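On a toy integer program, the effect of aggregation is easy to see; the sketch below compares the true optimum with surrogate bounds for a few multiplier choices by brute force (all data hypothetical). A surrogate relaxation only enlarges the feasible set, so its value is always a valid dual bound.

```python
import itertools

# toy IP: max 5x + 4y + 3z  s.t.  2x + 3y + z <= 5,  4x + y + 2z <= 11,  x,y,z in {0,1,2}
c, A, b, box = (5, 4, 3), [(2, 3, 1), (4, 1, 2)], (5, 11), range(3)
val  = lambda x: sum(ci * xi for ci, xi in zip(c, x))
feas = lambda x, row, rhs: sum(a * xi for a, xi in zip(row, x)) <= rhs

opt = max(val(x) for x in itertools.product(box, repeat=3)
          if all(feas(x, row, rhs) for row, rhs in zip(A, b)))

def surrogate_bound(lam):
    """Aggregate both constraints with weights lam >= 0 into a single one."""
    row = tuple(lam[0] * A[0][j] + lam[1] * A[1][j] for j in range(3))
    rhs = lam[0] * b[0] + lam[1] * b[1]
    return max(val(x) for x in itertools.product(box, repeat=3) if feas(x, row, rhs))

print(opt, [surrogate_bound(l) for l in [(1, 0), (0, 1), (1, 1), (0.7, 0.3)]])
```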
We present time-space trade-offs for computing the Euclidean minimum spanning tree of a set S of n point-sites in the plane. More precisely, we assume that S resides in a random-access memory that can only be read. The edges of the Euclidean minimum spanning tree EMST(S) have to be reported sequentially, and they cannot be accessed or modified afterwards. There is a parameter s in {1, ..., n} so that the algorithm may use O(s) cells of read-write memory (called the workspace) for its computations. Our goal is to find an algorithm that has the best possible running time for any given s between 1 and n.
We show how to compute EMST(S) in O(((n^3)/(s^2)) log s) time with O(s) cells of workspace, giving a smooth trade-off between the two best-known bounds O(n^3) for s = 1 and O(n log n) for s = n. For this, we run Kruskal's algorithm on the "relative neighborhood graph" (RNG) of S. It is a classic fact that the minimum spanning tree of RNG(S) is exactly EMST(S). To implement Kruskal's algorithm with O(s) cells of workspace, we define s-nets, a compact representation of planar graphs. This allows us to efficiently maintain and update the components of the current minimum spanning forest as the edges are being inserted.
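Setting the workspace restriction and s-nets aside, the classic fact underlying the algorithm is easy to demonstrate: build the RNG (an edge survives if no third point is closer to both endpoints than they are to each other) and run Kruskal's algorithm on it. A minimal, memory-oblivious sketch:

```python
import math, itertools

def emst(points):
    d = lambda p, q: math.dist(p, q)
    # relative neighborhood graph: keep (i, j) if no k is closer to both
    edges = [(d(points[i], points[j]), i, j)
             for i, j in itertools.combinations(range(len(points)), 2)
             if not any(max(d(points[i], points[k]), d(points[j], points[k]))
                        < d(points[i], points[j])
                        for k in range(len(points)) if k not in (i, j))]
    parent = list(range(len(points)))
    def find(v):                         # union-find with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]; v = parent[v]
        return v
    tree = []
    for w, i, j in sorted(edges):        # Kruskal's algorithm
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            tree.append((i, j))
    return tree

print(emst([(0, 0), (2, 0), (1, 2), (3, 3)]))
```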
We consider the problem of verifying linear properties of neural networks. Despite their success in many classification and prediction tasks, neural networks may return unexpected results for certain inputs. This is highly problematic with respect to the application of neural networks for safety-critical tasks, e.g. in autonomous driving. We provide an overview of algorithmic approaches that aim to provide formal guarantees on the behavior of neural networks. Moreover, we present new theoretical results with respect to the approximation of ReLU neural networks. Furthermore, we implement a solver for the verification of ReLU neural networks which combines mixed integer programming (MIP) with specialized branching and approximation techniques. To evaluate its performance, we conduct an extensive computational study. For that, we use test instances based on the ACAS Xu System and the MNIST handwritten digit data set. Our solver is publicly available and able to solve the verification problem for instances which do not have independent bounds for each input neuron.
We propose in this paper the Dynamic Multiobjective Shortest Path Problem. It features multidimensional states that can depend on several variables and not only on time; this setting is motivated by flight planning and electric vehicle routing applications. We give an exact algorithm for the FIFO case and derive from it an FPTAS, which is computationally efficient. It also features the best known complexity in the static case.
This article is mainly motivated by the urge to answer two kinds of questions regarding the Bundesliga, Germany’s primary football (soccer) division, which has the highest average stadium attendance worldwide: “At any point in the season, what is the lowest final rank a certain team can achieve?” and “At any point in the season, what is the highest final rank a certain team can achieve?”. Although we focus on the Bundesliga in particular, the integer programming formulations we introduce to answer these questions can easily be adapted to a variety of other league systems and tournaments.
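For a toy league, both questions can even be answered by brute force over the remaining match outcomes, which is the logic the integer programs encode scalably; the standings, fixtures, and simplified ranking rule below are hypothetical.

```python
import itertools

points = {"A": 10, "B": 9, "C": 8}                 # current standings (hypothetical)
remaining = [("A", "B"), ("B", "C"), ("A", "C")]   # matches left to play

def rank_range(team):
    best, worst = len(points), 1
    for outcome in itertools.product(("home", "draw", "away"), repeat=len(remaining)):
        final = dict(points)
        for (h, a), res in zip(remaining, outcome):
            if res == "home":   final[h] += 3
            elif res == "away": final[a] += 3
            else:               final[h] += 1; final[a] += 1
        # rank = 1 + number of teams with strictly more points
        # (real tie-breakers such as goal difference are ignored here)
        rank = 1 + sum(p > final[team] for t, p in final.items() if t != team)
        best, worst = min(best, rank), max(worst, rank)
    return best, worst

for t in points:
    print(t, rank_range(t))
```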
Finite-size corrections for the static structure factor of a liquid slab with open boundaries
(2020)
The presence of a confining boundary can modify the local structure of a liquid markedly. In addition, small samples of finite size are known to exhibit systematic deviations of thermodynamic quantities relative to their bulk values. Here, we consider the static structure factor of a liquid sample in slab geometry with open boundaries at the surfaces, which can be thought of as virtually cutting out the sample from a macroscopically large, homogeneous fluid. This situation is a relevant limit for the interpretation of grazing-incidence diffraction experiments at liquid interfaces and films. We derive an exact, closed expression for the slab structure factor, with the bulk structure factor as the only input. This shows that such free boundary conditions cause significant differences between the two structure factors, in particular, at small wavenumbers. An asymptotic analysis of this result yields the scaling exponent and an accurate, useful approximation of these finite-size corrections. Furthermore, the open boundaries permit the interpretation of the slab as an open system, supporting particle exchange with a reservoir. We relate the slab structure factor to the particle number fluctuations and discuss conditions under which the subvolume of the slab represents a grand canonical ensemble with chemical potential μ and temperature T. Thus, the open slab serves as a test-bed for the small-system thermodynamics in a μT reservoir. We provide a microscopically justified and exact result for the size dependence of the isothermal compressibility. Our findings are corroborated by simulation data for Lennard-Jones liquids at two representative temperatures.
We consider the theoretical model of Bergmann and Lebowitz for open systems out of equilibrium and translate its principles in the adaptive resolution simulation molecular dynamics technique. We simulate Lennard-Jones fluids with open boundaries in a thermal gradient and find excellent agreement of the stationary responses with the results obtained from the simulation of a larger locally forced closed system. The encouraging results pave the way for a computational treatment of open systems far from equilibrium framed in a well-established theoretical model that avoids possible numerical artifacts and physical misinterpretations.
Markov chain (MC) algorithms are ubiquitous in machine learning and statistics and many other disciplines. Typically, these algorithms can be formulated as acceptance-rejection methods. In this work we present a novel estimator applicable to these methods, dubbed Markov chain importance sampling (MCIS), which efficiently makes use of rejected proposals. For the unadjusted Langevin algorithm, it provides a novel way of correcting the discretization error. Our estimator satisfies a central limit theorem and improves on error per CPU cycle, often to a large extent. As a by-product, it enables estimating the normalizing constant, an important quantity in Bayesian machine learning and statistics.
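A classic estimator in the same spirit is "waste recycling" for Metropolis-Hastings, which averages over the accepted state and the (possibly rejected) proposal, weighted by the acceptance probability. The sketch below illustrates this idea of reusing rejected proposals; it is not the exact MCIS estimator of the paper.

```python
import math, random

def mh_waste_recycling(logp, x0, prop_std=1.0, n=50000, f=lambda x: x * x):
    x, plain, recycled = x0, 0.0, 0.0
    for _ in range(n):
        y = random.gauss(x, prop_std)                  # random-walk proposal
        a = min(1.0, math.exp(logp(y) - logp(x)))      # acceptance probability
        # waste recycling: average over both possible next states, weighting
        # the (possibly rejected) proposal by its acceptance probability
        recycled += a * f(y) + (1 - a) * f(x)
        if random.random() < a:
            x = y
        plain += f(x)
    return plain / n, recycled / n

logp = lambda x: -0.5 * x * x     # standard normal, up to a constant
print(mh_waste_recycling(logp, 0.0))   # both estimates approach E[x^2] = 1
```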
We investigate polyhedral aspects of the Periodic Event Scheduling Problem (PESP), the mathematical basis for periodic timetabling problems in public transport. Flipping the orientation of arcs, we obtain a new class of valid inequalities, the flip inequalities, comprising both the known cycle and change-cycle inequalities. For a point of the LP relaxation, a violated flip inequality can be found in pseudo-polynomial time, and even in linear time for a spanning tree solution. Our main result is that the integer vertices of the polytope described by the flip inequalities are exactly the vertices of the PESP polytope, i.e., the convex hull of all feasible periodic slacks with corresponding modulo parameters. Moreover, we show that this flip polytope equals the PESP polytope in some special cases. On the computational side, we devise several heuristic approaches concerning the separation of cutting planes from flip inequalities. We finally present better dual bounds for the smallest and largest instance of the benchmarking library PESPlib.
The Periodic Event Scheduling Problem is a well-studied NP-hard problem with applications in public transportation to find good periodic timetables. Among the most powerful heuristics to solve the periodic timetabling problem is the modulo network simplex method. In this paper, we consider the more difficult version with integrated passenger routing and propose a refined integrated variant to solve this problem on real-world-based instances.
Hypersurfaces with defect
(2020)
A projective hypersurface X⊆P^n has defect if h^i(X) ≠ h^i(P^n) for some i∈{n,…,2n−2} in a suitable cohomology theory. This occurs for example when X⊆P^4 is not Q-factorial. We show that hypersurfaces with defect tend to be very singular: In characteristic 0, we present a lower bound on the Tjurina number, where X is allowed to have arbitrary isolated singularities. For X with mild singularities, we prove a similar result in positive characteristic. As an application, we obtain an estimate on the asymptotic density of hypersurfaces without defect over a finite field.
It is a challenging task to fairly compare local solvers and heuristics against each other and against global solvers. How does one weigh a faster termination time against a better quality of the found solution? In this paper, we introduce the confined primal integral, a new performance measure that rewards a balance of speed and solution quality. It emphasizes the early part of the solution process by using an exponential decay. Thereby, it avoids that the order of solvers can be inverted by choosing an arbitrarily large time limit. We provide a closed analytic formula to compute the confined primal integral a posteriori and an incremental update formula to compute it during the run of an algorithm. For the latter, we show that we can drop one of the main assumptions of the primal integral, namely the knowledge of a fixed reference solution to compare against. Furthermore, we prove that the confined primal integral is a transitive measure when comparing local solves with different final solution values. Finally, we present a computational experiment where we compare a local MINLP solver that uses certain classes of cutting planes against a solver that does not. Both versions show very different tendencies w.r.t. average running time and solution quality, and we use the confined primal integral to argue which of the two is the preferred setting.
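A sketch of the a-posteriori computation, under the assumption that the measure integrates the primal gap weighted by an exponential decay exp(-t/T) over the incumbent history; the gap convention before the first incumbent and the normalization are illustrative guesses, not the paper's exact formula.

```python
import math

def confined_primal_integral(incumbents, t_end, opt, T=60.0):
    """incumbents: list of (time, objective) for a minimization run.
    Integrates gap(t) * exp(-t / T) piecewise, with gap = 1 before the
    first incumbent and a relative gap to a reference optimum afterwards."""
    total, prev_t, prev_val = 0.0, 0.0, float("inf")
    for t, val in incumbents + [(t_end, None)]:
        gap = 1.0 if prev_val == float("inf") else (prev_val - opt) / max(abs(opt), 1e-9)
        # closed form of the integral of gap * e^{-t/T} over [prev_t, t]
        total += gap * T * (math.exp(-prev_t / T) - math.exp(-t / T))
        prev_t = t
        if val is not None:
            prev_val = val
    return total

print(confined_primal_integral([(5.0, 120.0), (40.0, 101.0)], t_end=300.0, opt=100.0))
```

The exponential weight makes later improvements contribute less, which is exactly why extending the time limit cannot invert the ranking of two solvers.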
Dual degeneracy, i.e., the presence of multiple optimal bases to a linear programming (LP) problem, heavily affects the solution process of mixed integer programming (MIP) solvers. Different optimal bases lead to different cuts being generated, different branching decisions being taken and different solutions being found by primal heuristics. Nevertheless, only a few methods have been published that either avoid or exploit dual degeneracy. The aim of the present paper is to conduct a thorough computational study on the presence of dual degeneracy for the instances of well-known public MIP instance collections. How many instances are affected by dual degeneracy? How degenerate are the affected models? How does branching affect degeneracy: Does it increase or decrease by fixing variables? Can we identify different types of degenerate MIPs? As a tool to answer these questions, we introduce a new measure for dual degeneracy: the variable–constraint ratio of the optimal face. It provides an estimate for the likelihood that a basic variable can be pivoted out of the basis. Furthermore, we study how the so-called cloud intervals—the projections of the optimal face of the LP relaxations onto the individual variables—evolve during tree search and the implications for reducing the set of branching candidates.
Massive Parallelization for Finding Shortest Lattice Vectors Based on Ubiquity Generator Framework
(2020)
Lattice-based cryptography has received attention as a next-generation encryption technique, because it is believed to be secure against attacks by classical and quantum computers. Its essential security depends on the hardness of solving the shortest vector problem (SVP). In cryptography, to determine security levels, it is becoming significantly more important to estimate the hardness of the SVP by high-performance computing. In this study, we develop the world’s first distributed and asynchronous parallel SVP solver, the MAssively Parallel solver for SVP (MAP-SVP). It can parallelize algorithms for solving the SVP by applying the Ubiquity Generator framework, which is a generic framework for branch-and-bound algorithms. The MAP-SVP is suitable for massive-scale parallelization, owing to its small memory footprint, low communication overhead, and rapid checkpoint and restart mechanisms. We demonstrate the performance and scalability of the MAP-SVP by using up to 100,032 cores to solve instances of the Darmstadt SVP Challenge.
This paper studies the empirical efficacy and benefits of using projection-free first-order methods in the form of Conditional Gradients, a.k.a. Frank-Wolfe methods, for training Neural Networks with constrained parameters. We draw comparisons both to current state-of-the-art stochastic Gradient Descent methods as well as across different variants of stochastic Conditional Gradients. In particular, we show the general feasibility of training Neural Networks whose parameters are constrained by a convex feasible region using Frank-Wolfe algorithms and compare different stochastic variants. We then show that, by choosing an appropriate region, one can achieve performance exceeding that of unconstrained stochastic Gradient Descent and matching state-of-the-art results relying on L2-regularization. Lastly, we also demonstrate that, besides impacting performance, the particular choice of constraints can have a drastic impact on the learned representations.
The complexity in large-scale optimization can lie in both handling the objective function and handling the constraint set. In this respect, stochastic Frank-Wolfe algorithms occupy a unique position as they alleviate both computational burdens, by querying only approximate first-order information from the objective and by maintaining feasibility of the iterates without using projections. In this paper, we improve the quality of their first-order information by blending in adaptive gradients. We derive convergence rates and demonstrate the computational advantage of our method over the state-of-the-art stochastic Frank-Wolfe algorithms on both convex and nonconvex objectives. The experiments further show that our method can improve the performance of adaptive gradient algorithms for constrained optimization.
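A minimal sketch of one such blend: a stochastic Frank-Wolfe step on a least-squares toy problem over an L1-ball, where an AdaGrad-style accumulator rescales the gradient estimate before it is handed to the linear minimization oracle. The step-size rule, radius, and blending are illustrative; this is not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
A_data = rng.normal(size=(200, 10))
y = A_data @ rng.normal(size=10) * 0.1

def lmo_l1(g, radius=1.0):
    # linear minimization oracle over the L1 ball: a signed vertex
    v = np.zeros_like(g)
    i = np.argmax(np.abs(g))
    v[i] = -radius * np.sign(g[i])
    return v

x, acc = np.zeros(10), np.zeros(10)
for t in range(1, 501):
    idx = rng.integers(0, 200, size=32)                    # minibatch
    g = A_data[idx].T @ (A_data[idx] @ x - y[idx]) / 32    # stochastic gradient
    acc += g * g                                           # AdaGrad accumulator
    d = lmo_l1(g / (np.sqrt(acc) + 1e-8)) - x              # FW direction, rescaled gradient
    x += (2 / (t + 2)) * d                                 # classic step size; x stays feasible
print(0.5 * np.mean((A_data @ x - y) ** 2))
```

Because each iterate is a convex combination of L1-ball vertices and the starting point, feasibility is maintained without any projection, which is the computational appeal of the method.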
Maximal Quadratic-Free Sets
(2020)
The intersection cut paradigm is a powerful framework that facilitates the generation of valid linear inequalities, or cutting planes, for a potentially complex set S. The key ingredients in this construction are a simplicial conic relaxation of S and an S-free set: a convex zone whose interior does not intersect S. Ideally, such S-free set would be maximal inclusion-wise, as it would generate a deeper cutting plane. However, maximality can be a challenging goal in general. In this work, we show how to construct maximal S-free sets when S is defined as a general quadratic inequality. Our maximal S-free sets are such that efficient separation of a vertex in LP-based approaches to quadratically constrained problems is guaranteed. To the best of our knowledge, this work is the first to provide maximal quadratic-free sets.
In most vertebrates the embryonic cartilaginous skeleton is replaced by bone during development. During this process, cartilage cells (chondrocytes) mineralize the extracellular matrix and undergo apoptosis, giving way to bone cells (osteocytes). In contrast, sharks and rays (elasmobranchs) have cartilaginous skeletons throughout life, where only the surface mineralizes, forming a layer of tiles (tesserae). Elasmobranch chondrocytes, unlike those of other vertebrates, survive cartilage mineralization and are maintained alive in spaces (lacunae) within tesserae. However, the function(s) of the chondrocytes in the mineralized tissue remain unknown. Applying a custom analysis workflow to high-resolution synchrotron microCT scans of tesserae, we characterize the morphologies and arrangements of stingray chondrocyte lacunae, using lacunar morphology as a proxy for chondrocyte morphology. We show that the cell density is comparable in unmineralized and mineralized tissue from our study species and that cells maintain the similar volume even when they have been incorporated into tesserae. This discovery supports previous hypotheses that elasmobranch chondrocytes, unlike those of other taxa, do not proliferate, hypertrophy or undergo apoptosis during mineralization. Tessera lacunae show zonal variation in their shapes—being flatter further from and more spherical closer to the unmineralized cartilage matrix and larger in the center of tesserae— and show pronounced organization into parallel layers and strong orientation toward neighboring tesserae. Tesserae also exhibit local variation in lacunar density, with the density considerably higher near pores passing through the tesseral layer, suggesting pores and cells interact (e.g. that pores contain a nutrient source). We hypothesize that the different lacunar types reflect the stages of the tesserae formation process, while also representing local variation in tissue architecture and cell function. Lacunae are linked by small passages (canaliculi) in the matrix to form elongate series at the tesseral periphery and tight clusters in the center of tesserae, creating a rich connectivity among cells. The network arrangement and the shape variation of chondrocytes in tesserae indicate that cells may interact within and between tesserae and manage mineralization differently from chondrocytes in other vertebrates, perhaps performing analogous roles to osteocytes in bone.
From a legal point of view, putting historical photos online is problematic in many respects. For one thing, it is only permissible if the corresponding copyright usage rights have been obtained or the photos have since entered the public domain. The law, however, distinguishes between photos as creative works and mere snapshots ("Knipsbilder"), which above all affects how long photos remain protected by copyright. The first part addresses the copyright questions raised by the online publication of photos.
If persons are recognizable in photos, the personality rights of those depicted must also be respected. As a rule, online publication is only permissible with the consent of the persons depicted. Only in certain exceptional cases may such photos be used without explicit consent. The second part of this short guide deals with these questions.
A new ion mobility (IM) spectrometer, enabling mobility measurements in the pressure range between 5 and 500 mbar and in the reduced field strength range E/N of 5–90 Td, was developed and characterized. Reduced mobility (K0) values were studied at low E/N, where K0 is constant, as well as at high E/N, where K0 deviates from its low-field value, for a series of molecular ions in nitrogen. Infrared matrix-assisted laser desorption ionization (IR-MALDI) was used in two configurations: a source working at atmospheric pressure (AP) and, for the first time, an IR-MALDI source working with a liquid (aqueous) matrix at sub-ambient/reduced pressure (RP). The influence of RP on IR-MALDI was examined and new insights into the dispersion process were gained. This enabled the optimization of the IM spectrometer for best analytical performance. While ion desolvation is less efficient at RP, the transport of ions is more efficient, leading to intensity enhancement and an increased number of oligomer ions. When deciding between AP and RP IR-MALDI, a trade-off between intensity and resolving power has to be considered. As a first application, the low-field mobility of peptide ions was measured and compared with reference values from ESI-IM spectrometry (at AP) as well as with collision cross sections obtained from molecular dynamics simulations. The second application was the determination of the reduced mobility of various substituted ammonium ions as a function of E/N in nitrogen. The mobility is constant up to a threshold at high E/N. Beyond this threshold, mobility increases were observed. This behavior can be explained by the loss of hydration water molecules.
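For readers unfamiliar with the conventions: reduced mobility normalizes the measured mobility to standard pressure and temperature, and E/N expresses the field strength relative to gas number density in Townsend. A short sketch of these standard conversions (the drift-tube parameters in the example are illustrative, not instrument values from the paper):

```python
# Standard IM spectrometry conversions: K0 = K * (p/p0) * (T0/T), and E/N in Td.
KB = 1.380649e-23      # Boltzmann constant, J/K
P0 = 101325.0          # standard pressure, Pa
T0 = 273.15            # standard temperature, K

def reduced_mobility(drift_length_m, drift_time_s, voltage_V, p_Pa, T_K):
    """Reduced mobility K0 from drift time in a uniform field."""
    E = voltage_V / drift_length_m            # electric field, V/m
    K = (drift_length_m / drift_time_s) / E   # mobility K = v_d / E, m^2/(V s)
    return K * (p_Pa / P0) * (T0 / T_K)

def reduced_field_Td(voltage_V, drift_length_m, p_Pa, T_K):
    """Reduced field strength E/N in Townsend; 1 Td = 1e-21 V m^2."""
    N = p_Pa / (KB * T_K)                     # gas number density, 1/m^3
    return (voltage_V / drift_length_m) / N / 1e-21

# Example: a 10 cm drift region at 50 mbar and room temperature (values assumed)
print(reduced_field_Td(voltage_V=1000.0, drift_length_m=0.10, p_Pa=5000.0, T_K=300.0))
# -> roughly 8 Td, within the instrument's 5-90 Td range
```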
Large-capacity Storage Class Memory (SCM) opens new possibilities for workloads requiring a large memory footprint. We examine optimization strategies for a legacy Fortran application on systems with a heterogeneous memory configuration comprising SCM and DRAM. We present a performance study for the multigrid solver component of the large-eddy simulation framework PALM for different memory configurations with large-capacity SCM. An important optimization approach is the explicit assignment of storage locations depending on the data access characteristics, to take advantage of the heterogeneous memory configuration. We demonstrate that explicit control over memory locations provides better performance than transparent hardware settings. Since page management by the OS appears to be a critical performance factor on such systems, we also study the impact of different huge page settings.
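One common way to realize such explicit placement, assuming the SCM is exposed in app-direct mode as a DAX filesystem, is to back large, mostly-read data by memory-mapped files on the SCM mount while keeping hot working arrays in DRAM. A minimal sketch (the mount point, array names, and sizes are our assumptions; the paper's Fortran implementation is not reproduced):

```python
# Explicit data placement sketch for a heterogeneous SCM/DRAM system.
import numpy as np

SCM_DIR = "/mnt/pmem0"   # assumed DAX mount point backed by SCM

# Large, mostly-read data goes to SCM via a memory-mapped file ...
coarse_grids = np.memmap(f"{SCM_DIR}/coarse_grids.dat", dtype=np.float64,
                         mode="w+", shape=(512, 512, 512))

# ... while frequently updated working arrays stay in DRAM.
residual = np.zeros((512, 512, 512), dtype=np.float64)
```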
In this article we introduce a Minimum Cycle Partition Problem with Length Requirements (CPLR). This generalization of the Travelling Salesman Problem (TSP) originates from routing Unmanned Aerial Vehicles (UAVs). Apart from nonnegative edge weights, CPLR has an individual critical weight value associated with each vertex. A cycle partition, i.e., a vertex-disjoint cycle cover, is regarded as a feasible solution if the length of each cycle, which is the sum of the weights of its edges, is not greater than the critical weight of each of its vertices. The goal is to find a feasible partition that minimizes the number of cycles. In this article, a heuristic algorithm is presented together with a Mixed Integer Programming (MIP) formulation of CPLR. We furthermore introduce a conflict graph, whose cliques yield valid constraints for the MIP model. Finally, we report on computational experiments conducted on TSPLIB-based test instances.
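Formally, in our notation (the abstract describes the problem only in words): writing \( w_e \ge 0 \) for the edge weights and \( \ell_v \) for the critical weight of vertex \( v \), CPLR asks for a vertex-disjoint cycle cover \( \mathcal{P} \) solving

\[
\min_{\mathcal{P}} \; |\mathcal{P}|
\quad \text{subject to} \quad
\sum_{e \in E(C)} w_e \;\le\; \min_{v \in V(C)} \ell_v
\qquad \text{for all cycles } C \in \mathcal{P},
\]

i.e., every cycle must be short enough for its most critical vertex. The article's MIP formulation and conflict-graph cuts are not reproduced here.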
We present the tamper-resistant broadcast abstraction of the Bitcoin blockchain, and show how it can be used to implement tamper-resistant replicated state machines. The tamper-resistant broadcast abstraction provides functionality to broadcast, deliver, and verify messages. The tamper-resistant property ensures 1) probabilistic protection against Byzantine behaviour, and 2) probabilistic verifiability that no tampering has occurred.
In this work, we study various tamper-resistant broadcast protocols for different environment models (public/permissioned, bounded/unbounded, Byzantine fault tolerant (BFT)/non-BFT, native/non-native) as well as for different properties, such as ordering guarantees (FIFO order, causal order, total order) and delivery guarantees (validity, agreement, uniform). This way, we can match the protocol to the required environment model and consistency model of the replicated state machine.
We implemented the tamper-resistant broadcast abstraction as a proof of concept. The results show that the implemented tamper-resistant broadcast protocols can compete on throughput and latency with other state-of-the-art broadcast technologies. Use cases such as a tamper-resistant file system, supply chain tracking, and a timestamp server highlight the expressiveness of the abstraction.
In conclusion, the tamper-resistant broadcast protocols provide a powerful interface, with clear semantics and tunable settings, enabling the design of tamper-resistant applications.
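The abstraction described above lends itself to a thin interface. The following sketch is our rendering: the three operation names come from the abstract, while the types and the Protocol-based structure are assumptions, not the authors' API.

```python
# Interface sketch of the tamper-resistant broadcast abstraction.
from typing import Protocol, Callable

class TamperResistantBroadcast(Protocol):
    def broadcast(self, message: bytes) -> None:
        """Disseminate a message to all participants."""

    def on_deliver(self, callback: Callable[[bytes], None]) -> None:
        """Register a handler invoked when a message is delivered."""

    def verify(self, message: bytes) -> bool:
        """Probabilistically verify that a delivered message was not tampered with."""
```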
Motivated by the desire to numerically calculate rigorous upper and lower bounds on deviation probabilities over large classes of probability distributions, we present an adaptive algorithm for the reconstruction of increasing real-valued functions. While this problem is similar to the classical statistical problem of isotonic regression, the optimisation setting alters several characteristics of the problem and opens natural algorithmic possibilities. We present our algorithm, establish sufficient conditions for convergence of the reconstruction to the ground truth, and apply the method to synthetic test cases and a real-world example of uncertainty quantification for aerodynamic design.
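For orientation, the classical isotonic-regression baseline that the reconstruction problem resembles can be reproduced in a few lines; the data below are synthetic and the paper's adaptive algorithm itself is not reproduced here.

```python
# Classical isotonic regression: best increasing fit in the least-squares sense.
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
y = np.sqrt(x) + 0.1 * rng.standard_normal(x.size)  # noisy increasing ground truth

fit = IsotonicRegression(increasing=True).fit(x, y)
y_monotone = fit.predict(x)   # monotone reconstruction of the underlying function
```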
During a two-year period between 2014 and 2016, the coma of comet 67P/Churyumov-Gerasimenko (67P/C-G) was probed by the Rosetta spacecraft. Density data for 14 gas species were recorded with the COmet Pressure Sensor (COPS) and the Double Focusing Mass Spectrometer (DFMS), two sensors of the ROSINA instrument. Combining these data with an inverse gas model yields emission rates for each of the 3996 surface elements of a shape model of the cometary nucleus.
We investigate the temporal evolution of gas production and relative abundances, as well as the production peaks occurring weeks after perihelion. Solar irradiation and gas production are related in a complex way, with features that differ between gas species, over mission time, and between the hemispheres of the comet. This characterization of the gas composition allows 67P/C-G to be compared with other solar-system and interstellar comets, their formation conditions, and their nucleus properties; see [Bodewits, D., et al., 2020, Nature Astronomy].
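Schematically, the inversion step can be viewed as a nonnegative linear inverse problem: observed coma densities respond linearly to per-facet surface emission rates through a response matrix determined by coma physics and geometry. A toy sketch follows; the dimensions are reduced, the response matrix is random, and everything here is purely illustrative of the technique, not the actual model.

```python
# Toy sketch of the inversion: densities = response matrix @ emission rates,
# recovered by nonnegative least squares. The real model has 3996 facets and
# a physically derived response matrix.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
n_obs, n_facets = 300, 50                             # reduced from the real problem size
A = np.abs(rng.standard_normal((n_obs, n_facets)))    # stand-in response matrix
q_true = rng.uniform(0.0, 1e-3, n_facets)             # synthetic emission rates
rho = A @ q_true                                      # synthetic "observed" densities

q_est, res = nnls(A, rho)                             # per-facet emissions, q >= 0
```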
The determination of non-gravitational forces based on precise astrometry is one of the main tools to establish the cometary character of interstellar and solar-system objects. The Rosetta mission to comet 67P/C-G provided the unique opportunity to benchmark Earth-bound estimates of non-gravitational forces against in-situ data. We determine the accuracy of the standard Marsden and Sekanina parametrization of non-gravitational forces with respect to the observed dynamics. Additionally, we analyse the changes in the rotation axis (orientation and period) of 67P/C-G. This comparison provides a reference case for future cometary missions and for sublimation models of non-gravitational forces.
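For reference, the standard Marsden-Sekanina parametrization referred to above models the non-gravitational acceleration as

\[
\mathbf{a}_{\mathrm{NG}} = g(r)\,\bigl(A_1 \hat{\mathbf{r}} + A_2 \hat{\mathbf{t}} + A_3 \hat{\mathbf{n}}\bigr),
\qquad
g(r) = \alpha \left(\frac{r}{r_0}\right)^{-m} \left[1 + \left(\frac{r}{r_0}\right)^{n}\right]^{-k},
\]

with fitted parameters \(A_1, A_2, A_3\) along the radial, transverse, and normal unit vectors, and the standard water-ice sublimation constants \(\alpha = 0.1113\), \(r_0 = 2.808\,\mathrm{au}\), \(m = 2.15\), \(n = 5.093\), \(k = 4.6142\), normalized so that \(g(1\,\mathrm{au}) = 1\).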
We present an automated method for extrapolating missing regions in label data of the skull in an anatomically plausible manner. The ultimate goal is to design patient-specific cranial implants for correcting large, arbitrarily shaped defects of the skull that can, for example, result from trauma of the head. Our approach utilizes a 3D statistical shape model (SSM) of the skull and a 2D generative adversarial network (GAN) that is trained in an unsupervised fashion from samples of healthy patients alone. By fitting the SSM to given input labels containing the skull defect, a first approximation of the healthy state of the patient is obtained. The GAN is then applied to further correct and smooth the output of the SSM in an anatomically plausible manner. Finally, the defect region is extracted using morphological operations and subtraction between the extrapolated healthy state of the patient and the defective input labels. The method is trained and evaluated on data from the MICCAI 2020 AutoImplant challenge. It produces state-of-the-art results on regularly shaped cut-outs that were present in the training and testing data of the challenge. Furthermore, due to the unsupervised nature of the approach, the method generalizes well to previously unseen defects of varying shapes that were only present in the hidden test dataset.
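The final extraction step described above (subtraction plus morphological clean-up) can be sketched in a few lines. File names, the number of opening iterations, and the largest-component heuristic are our assumptions; the SSM fitting and GAN refinement are not reproduced here.

```python
# Sketch of defect/implant extraction from a reconstructed healthy skull.
import numpy as np
from scipy import ndimage

healthy = np.load("reconstructed_skull.npy").astype(bool)   # SSM+GAN output (hypothetical file)
defective = np.load("defective_skull.npy").astype(bool)     # defective input labels (hypothetical file)

implant = healthy & ~defective                              # voxels only in the healthy estimate
implant = ndimage.binary_opening(implant, iterations=2)     # remove thin subtraction artifacts
labels, n = ndimage.label(implant)
if n > 1:                                                   # keep the largest connected component
    sizes = ndimage.sum(implant, labels, range(1, n + 1))
    implant = labels == (np.argmax(sizes) + 1)
```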
The well-known network simplex algorithm is a powerful tool to solve flow problems on graphs. Based on a recent dissertation by Isabel Beckenbach, we develop the necessary theory to extend the network simplex to capacitated flow problems on hypergraphs and implement this new variant. We then attempt to solve instances arising from real-life vehicle rotation planning problems.
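The hypergraph variant developed in that work is not publicly packaged; as a point of reference, the classical network simplex on an ordinary directed graph is available in networkx. The small instance below is made up for illustration.

```python
# Classical network simplex: min-cost flow on a small directed graph.
import networkx as nx

G = nx.DiGraph()
G.add_node("s", demand=-4)                  # supply of 4 units
G.add_node("t", demand=4)                   # demand of 4 units
G.add_edge("s", "a", weight=2, capacity=3)
G.add_edge("s", "b", weight=5, capacity=3)
G.add_edge("a", "t", weight=1, capacity=3)
G.add_edge("b", "t", weight=1, capacity=3)

cost, flow = nx.network_simplex(G)          # optimal cost and per-edge flows
print(cost, flow)                           # cost 15: route 3 units via a, 1 via b
```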
This study examines the usability of a real-world, large-scale natural gas transport infrastructure for hydrogen transport. We investigate whether a converted network can transport the amounts of hydrogen necessary to satisfy current energy demands. After introducing an optimization model for the robust transient control of hydrogen networks, we conduct computational experiments based on real-world demand scenarios. Using a representative network, we demonstrate that replacing each turbo compressor unit by four parallel hydrogen compressors, each of them comprising multiple serial compression stages, and imposing stricter rules regarding the balancing of in- and outflow suffices to realize transport in a majority of scenarios. However, due to the reduced linepack, there is an increased need for technical and non-technical measures, leading to a more dynamic network control. Furthermore, the amount of energy needed for compression increases by 364% on average.
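The steep rise in compression energy is plausible from first principles; this back-of-the-envelope argument is ours, not the study's. For an ideal gas, the specific isentropic compression work

\[
w = \frac{\kappa}{\kappa-1}\, R_s T_1 \left[\left(\frac{p_2}{p_1}\right)^{\frac{\kappa-1}{\kappa}} - 1\right]
\]

scales with the specific gas constant \(R_s\), which is about eight times larger for hydrogen (\(\approx 4124\ \mathrm{J\,kg^{-1}\,K^{-1}}\)) than for methane (\(\approx 518\ \mathrm{J\,kg^{-1}\,K^{-1}}\)), the dominant component of natural gas. The study's 364% figure additionally reflects network-wide effects such as the reduced linepack and the changed compressor configuration.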
The coma of comet 67P/Churyumov-Gerasimenko has been probed by the Rosetta spacecraft and shows a variety of different molecules. The ROSINA COmet Pressure Sensor and the Double Focusing Mass Spectrometer provide in-situ densities for many volatile compounds, including the 14 gas species H2O, CO2, CO, H2S, O2, C2H6, CH3OH, H2CO, CH4, NH3, HCN, C2H5OH, OCS, and CS2. We fit the observed densities over the entire comet mission between August 2014 and September 2016 to an inverse coma model. We retrieve surface emissions on a cometary shape with 3996 triangular elements for 50 separate time intervals. For each gas we derive systematic error bounds and report the temporal evolution of the production, the peak production, and the time-integrated total production. We discuss the production for the two lobes of the nucleus and for the northern and southern hemispheres. Moreover, we compare the gas production with the seasonal illumination.