Any sports competition needs a timetable, specifying when and where teams meet each other. The recent International Timetabling Competition (ITC2021) on sports timetabling showed that, although it is possible to develop general algorithms, the performance of each algorithm varies considerably over the problem instances. This paper provides a problem type analysis for sports timetabling, resulting in powerful insights into the strengths and weaknesses of eight state-of-the-art algorithms. Based on machine learning techniques, we propose an algorithm selection system that predicts which algorithm is likely to perform best based on the type of competition and constraints being used (i.e., the problem type) in a given sports timetabling problem instance. Furthermore, we visualize how the problem type relates to algorithm performance, providing insights and possibilities to further enhance several algorithms. Finally, we assess the empirical hardness of the instances. Our results are based on large computational experiments involving about 50 years of CPU time on more than 500 newly generated problem instances.
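As a rough illustration of the per-instance selection idea (not the paper's actual features, labels, or models), the following sketch trains a scikit-learn classifier that maps hypothetical problem-type features of a timetabling instance to the index of the best-performing algorithm; the feature values and labels here are random placeholders.

```python
# Minimal sketch of per-instance algorithm selection: a classifier predicts
# which of several algorithms is likely best for a given instance.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Illustrative features per instance (hypothetical placeholders), e.g.
# number of teams, share of capacity constraints, share of break constraints.
X = rng.random((500, 3))
# Label = index of the best of, say, 8 algorithms on each instance
# (random here; in practice taken from benchmark results).
y = rng.integers(0, 8, size=500)

selector = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(selector, X, y, cv=5).mean())  # selection accuracy
```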
The Jacobi set of a bivariate scalar field is the set of points where the gradients of the two constituent scalar fields align with each other. It captures the regions of topological changes in the bivariate field. The Jacobi set is a bivariate analog of critical points, and may correspond to features of interest. In the specific case of time-varying fields and when one of the scalar fields is time, the Jacobi set corresponds to temporal tracks of critical points, and serves as a feature-tracking graph. The Jacobi set of a bivariate field or a time-varying scalar field is complex, resulting in cluttered visualizations that are difficult to analyze. This paper addresses the problem of Jacobi set simplification. Specifically, we use the time-varying scalar field scenario to introduce a method that computes a reduced Jacobi set. The method is based on a stability measure called robustness that was originally developed for vector fields and helps capture the structural stability of critical points. We also present a mathematical analysis for the method, and describe an implementation for 2D time-varying scalar fields. Applications to both synthetic and real-world datasets demonstrate the effectiveness of the method for tracking features.
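The alignment condition can be made concrete on a grid: in 2D, the gradients of the two fields align exactly where the cross product fx*gy - fy*gx vanishes. A minimal numpy sketch, with illustrative fields rather than the paper's data:

```python
# Jacobi set criterion for a bivariate field (f, g) on a 2D grid:
# gradients align where the 2D cross product of the gradients is zero.
import numpy as np

x = np.linspace(-2, 2, 201)
X, Y = np.meshgrid(x, x)
f = X**2 + Y**2          # scalar field 1 (paraboloid)
g = Y                    # scalar field 2; with g = "time", rows are time slices

fy, fx = np.gradient(f, x, x)   # derivatives along axis 0 (Y) and axis 1 (X)
gy, gx = np.gradient(g, x, x)

cross = fx * gy - fy * gx       # zero where the gradients are parallel
jacobi_mask = np.abs(cross) < 1e-9
print("grid points on the Jacobi set:", jacobi_mask.sum())   # the line X = 0
```

For this choice of g, the Jacobi set is the track of the per-slice minimum of f, consistent with the feature-tracking interpretation above.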
Computing an optimal cycle in a given homology class, also referred to as the homology localization problem, is known to be NP-hard in general. Furthermore, there is currently no known optimality criterion that localizes classes geometrically and admits a stability property under the setting of persistent homology. We present a geometric optimization of the cycles that is computable in polynomial time and is stable in an approximate sense. Tailoring our search criterion to different settings, we obtain various optimization problems like optimal homologous cycle, minimum homology basis, and minimum persistent homology basis. In practice, the (trivial) exact algorithm is computationally expensive despite having a worst-case polynomial runtime. Therefore, we design approximation algorithms for the above problems and study their performance experimentally. These algorithms have reasonable runtimes for moderately sized datasets, and the cycles computed by these algorithms are consistently of high quality, as demonstrated via experiments on multiple datasets.
The Bay of Bengal (BoB) has maintained its salinity distribution over the years despite a continuous flow of fresh water entering it through rivers on the northern coast, which is capable of diluting the salinity. This can be attributed to the cyclic flow of high salinity water (>= 35 psu) coming from the Arabian Sea and entering the BoB from the south, which moves northward and mixes with this fresh water. The movement of this high salinity water has been studied and analyzed in previous work (Singh et al., 2022). This paper extends the computational methods and analysis of salinity movement. Specifically, we introduce an advection-based feature definition that represents the movement of high salinity water, and describe algorithms to track the evolution of these features over time. This method allows us to trace the movement of high salinity water caused by ocean currents. The method is validated via comparison with established observations on the flow of high salinity water in the BoB, including its entry from the Arabian Sea and its movement near Sri Lanka. Further, the visual analysis and tracking framework enables us to compare our results with previous work and to analyze the contribution of advection to salinity transport.
Studying neural mechanisms in complementary model organisms from different ecological niches in the same animal class can advance comparative brain analysis at the cellular level. To this end, we developed a unified brain atlas platform and specialized tools that allowed us to quantitatively compare neural structures in two teleost larvae, medaka (Oryzias latipes) and zebrafish (Danio rerio). Leveraging this quantitative approach, we found that most brain regions are similar but some subpopulations are unique to each species. Specifically, we confirmed the existence of a clear dorsal pallial region in the telencephalon of medaka that is lacking in zebrafish. Further, our approach allows for the extraction of differentially expressed genes in both species, and for the quantitative comparison of neural activity at cellular resolution. The web-based and interactive nature of this atlas platform will facilitate the teleost community's research, and its easy extensibility will encourage contributions to its continuous expansion.
For industries like the cement industry, switching to a carbon-neutral production process is impossible. They must rely on carbon capture, utilization and storage (CCUS) technologies to reduce the inevitable carbon dioxide (CO2) emissions of their production processes. For continuously transporting large amounts of CO2, a pipeline network is the most effective solution; however, building such a network is expensive. Therefore, minimizing the cost of the pipelines to be built is extremely important to make the operation financially feasible. In this context, we investigate the problem of finding optimal pipeline diameters, from a discrete set of diameters, for a tree-shaped network transporting captured CO2 from multiple sources to a single sink. The general problem of optimizing arc capacities in potential-based fluid networks is already a challenging mixed-integer nonlinear optimization problem. The problem becomes even more complex when adding the highly sensitive nonlinear behavior of CO2 with respect to temperature and pressure changes. We propose an iterative algorithm splitting the problem into two parts: a) the pipe-sizing problem under a fixed supply scenario and temperature distribution, and b) the thermophysical modeling, including mixing effects, the Joule-Thomson effect, and heat exchange with the surrounding environment. We demonstrate the effectiveness of our approach by applying our algorithm to a real-world network planning problem for a CO2 network in Western Germany. Further, we show the robustness of the algorithm by solving a large artificially created set of network instances.
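The alternation between the two parts can be pictured as a fixed-point loop. The following skeleton is purely illustrative: the capacity rule and temperature update below are crude placeholders standing in for the pipe-sizing problem and the thermophysical CO2 model.

```python
# Skeleton of an alternating scheme: iterate between (a) discrete pipe sizing
# under fixed temperatures and (b) a thermophysical update of those
# temperatures, until the sizing stabilizes. All physics is a placeholder.

DIAMETERS = [0.2, 0.3, 0.4, 0.5]                  # diameter catalogue [m]
COST_PER_M = {0.2: 1.0, 0.3: 1.6, 0.4: 2.4, 0.5: 3.5}

def size_pipes(flows, temperatures):
    """(a) Pick the cheapest diameter with sufficient capacity per arc."""
    sizing = {}
    for arc, q in flows.items():
        # placeholder capacity model: grows with d^2, shrinks when hot
        feasible = [d for d in DIAMETERS
                    if 50 * d**2 >= q * (1 + 0.01 * temperatures[arc])]
        sizing[arc] = min(feasible, key=lambda d: COST_PER_M[d])
    return sizing

def update_temperatures(sizing, temperatures):
    """(b) Placeholder for mixing, Joule-Thomson and heat-exchange effects."""
    return {arc: 0.5 * t + 5.0 for arc, t in temperatures.items()}

flows = {"source1->hub": 3.0, "hub->sink": 5.0}   # invented flow rates
temps = {arc: 40.0 for arc in flows}
sizing, prev = None, None
for it in range(20):
    sizing = size_pipes(flows, temps)
    temps = update_temperatures(sizing, temps)
    if sizing == prev:                            # fixed point reached
        break
    prev = sizing
print(sizing)
```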
The landscape of applications and subroutines relying on shortest path computations continues to grow steadily. This growth is driven by the undeniable success of shortest path algorithms in theory and practice. It also introduces new challenges as the models become more complicated and assessing the optimality of paths becomes harder. Hence, multiple recent publications in the field adapt existing labeling methods in an ad hoc fashion to their specific problem variant without considering the underlying general structure: they always deal with multi-criteria scenarios, and those criteria define different partial orders on the paths. In this paper, we introduce the partial order shortest path problem (POSP), a generalization of the multi-objective shortest path problem (MOSP) and in turn also of the classical shortest path problem. POSP captures the particular structure of many shortest path applications as special cases. In this generality, we study optimality conditions, or the lack thereof, depending on the properties of the objective functions. Our final contribution is a large lookup table summarizing our findings and providing the reader with an easy way to choose among the most recent multi-criteria shortest path algorithms depending on their problem's weight structure. Examples range from time-dependent shortest path and bottleneck path problems to the electric vehicle shortest path problem with recharging and complex financial weight functions studied in the public transportation community. Our results hold for general digraphs and, therefore, surpass previous generalizations that were limited to acyclic graphs.
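For intuition, the classical multi-objective special case of this setting is solved by label-setting algorithms that prune labels using the component-wise partial order. The sketch below is a generic textbook variant of that scheme, not the POSP algorithms surveyed in the paper:

```python
# Multi-objective label-setting: labels carry cost vectors, and a label at a
# node is kept only if no other label at that node dominates it.
import heapq

def dominates(a, b):
    """Component-wise partial order on cost vectors."""
    return all(x <= y for x, y in zip(a, b)) and a != b

def mo_shortest_paths(graph, source, target):
    """graph: {u: [(v, (c1, c2)), ...]}; returns Pareto-optimal cost vectors."""
    labels = {source: {(0, 0)}}
    heap = [((0, 0), source)]
    while heap:
        cost, u = heapq.heappop(heap)
        if cost not in labels.get(u, set()):
            continue                          # label was pruned meanwhile
        for v, w in graph.get(u, []):
            new = tuple(c + wc for c, wc in zip(cost, w))
            existing = labels.setdefault(v, set())
            if any(old == new or dominates(old, new) for old in existing):
                continue                      # dominated: discard
            existing.difference_update(
                {old for old in existing if dominates(new, old)})
            existing.add(new)
            heapq.heappush(heap, (new, v))
    return sorted(labels.get(target, set()))

# Two criteria, e.g. travel time and energy consumption:
graph = {"s": [("a", (1, 4)), ("b", (3, 1))],
         "a": [("t", (1, 4))],
         "b": [("t", (1, 1))]}
print(mo_shortest_paths(graph, "s", "t"))     # [(2, 8), (4, 2)]
```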
The investigation of energy transition paths toward a sustainable and decarbonized future under uncertainty is a critical aspect of contemporary energy planning and policy development. There are numerous methods for analysing uncertainties and sensitivities, and many studies on sustainable transformation paths, but there is a lack of combined application to relevant use cases.
In this study, we investigate the sensitivity of energy transition paths to uncertainties in operational and investment costs of power plants in the metropolitan area of Berlin and its rural surroundings.
By employing the linear programming energy system model oemof-B3, we focus extensively on the system's energy technologies, such as wind turbines, photovoltaics, hydro and combustion plants, and energy storage. Greenhouse gas reduction and electrification rates per commodity are realized via selected constraints.
Our research aims to discern how investments in energy production capacities are influenced by uncertainties of other energy technologies' investment and operational costs in the system. We apply a quantitative approach to investigate such interdependencies of cost variations and their impact on long-term energy planning. Thus, the analysis sheds light on the robustness of energy transition paths in the face of these uncertainties.
The region Berlin-Brandenburg serves as a case study and thus reflects on the present space conflicts to meet energy demands in urban and suburban areas and their rural surroundings. An electricity-intensive scenario is selected that assumes a 100 % reduction in greenhouse gas emissions by 2050. With the results of the case study, we show how our approach enables rural and metropolitan decision-makers to collaborate in achieving sustainable energy.
Decision-making in long-term energy planning can be made more robust and flexible by acknowledging the identified sensitivities, enabling such regions to better navigate the challenges and uncertainties associated with sustainable energy planning.
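The kind of interdependency analyzed here can be illustrated on a toy model: re-solving a two-technology capacity-expansion LP while sweeping one technology's investment cost shows how the optimal capacity mix reacts. The sketch below uses scipy and invented numbers, not the oemof-B3 model.

```python
# Toy cost-sensitivity sweep: vary the investment cost of technology B and
# observe the resulting optimal capacities (x_a, x_b) covering a fixed demand.
import numpy as np
from scipy.optimize import linprog

demand = 100.0                      # MW to be covered (invented)
cost_a = 60.0                       # investment cost of technology A

for cost_b in np.linspace(30, 90, 5):
    res = linprog(c=[cost_a, cost_b],               # minimize investment cost
                  A_ub=[[-1.0, -1.0]], b_ub=[-demand],  # x_a + x_b >= demand
                  bounds=[(0, 80), (0, 80)])            # per-tech potential
    print(f"cost_b={cost_b:5.1f}  ->  x_a={res.x[0]:5.1f}, x_b={res.x[1]:5.1f}")
```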
We study a complex planning and scheduling problem arising from the build-up process of air cargo pallets and containers, collectively referred to as unit load devices (ULD), in which ULDs must be assigned to workstations for loading. Since air freight usually becomes available gradually along the planning horizon, ULD build-ups must be scheduled neither too early, to avoid underutilizing ULD capacity, nor too late, to avoid resource conflicts with other flights. Whenever possible, ULDs should be built up in batches, thereby giving ground handlers more freedom to rearrange cargo and utilize the ULD's capacity efficiently. The resulting scheduling problem has an intricate cost function and produces large time-expanded models, especially for longer planning horizons. We propose a logic-based Benders decomposition approach that assigns batches to time intervals and workstations in the master problem, while the actual schedule is decided in a subproblem. By choosing appropriate intervals, the subproblem becomes a feasibility problem that decomposes over the workstations. Additionally, the similarity of many batches is exploited by a strengthening procedure for no-good cuts. We benchmark our approach against a time-expanded MIP formulation from the literature on a publicly available data set. It solves 15% more instances to optimality and decreases run times by more than 50% in the geometric mean. This improvement is especially pronounced for longer planning horizons of up to one week, where the Benders approach solves over 50% more instances than the baseline.
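The logic-based Benders loop itself has a simple shape, sketched below on a drastically reduced toy problem: a master assigns batches to workstations, a subproblem checks per-workstation feasibility, and violated assignments are excluded via no-good cuts. Brute force stands in for the master MIP; all data is invented.

```python
# Toy logic-based Benders structure: master assignment + feasibility
# subproblem + no-good cuts (far simpler than the paper's model).
from itertools import product

batches = {"b1": 4, "b2": 3, "b3": 3, "b4": 2}   # workload per batch
stations = {"w1": 6, "w2": 6}                     # capacity per workstation
cost = {"w1": 1.0, "w2": 1.3}                     # cost per workload unit

cuts = []   # no-good cuts: (station, group) that must not appear together

def master():
    """Cheapest assignment that does not violate any accumulated cut."""
    best = None
    for assign in product(stations, repeat=len(batches)):
        plan = dict(zip(batches, assign))
        if any(all(plan[b] == w for b in grp) for w, grp in cuts):
            continue
        c = sum(cost[w] * batches[b] for b, w in plan.items())
        if best is None or c < best[0]:
            best = (c, plan)
    return best

def subproblem(plan):
    """Feasibility check per workstation; return a violated station or None."""
    for w, cap in stations.items():
        grp = [b for b, v in plan.items() if v == w]
        if sum(batches[b] for b in grp) > cap:
            return w, grp
    return None

while True:
    obj, plan = master()
    violation = subproblem(plan)
    if violation is None:
        print("optimal:", obj, plan)
        break
    cuts.append(violation)   # forbid putting this whole group on this station
```

Since every cut only removes capacity-infeasible assignments, the loop terminates with an optimal feasible plan; the paper's strengthening procedure would additionally generalize such cuts across similar batches.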
The imperative to decarbonize energy systems has intensified the need for efficient transformations within the heating sector, with a particular focus on district heating networks. This study addresses this challenge by proposing a comprehensive optimization approach evaluated on the district heating network of the Märkisches Viertel of Berlin. Our objective is to simultaneously optimize heat production with three targets: minimizing costs, minimizing CO2 emissions, and maximizing heat generation from Combined Heat and Power (CHP) plants for enhanced efficiency.
To tackle this optimization problem, we employed a Mixed-Integer Linear Program (MILP) that encompasses the conversion of various fuels into heat and power, integration with relevant markets, and considerations for technical constraints on power plant operation. These constraints include startup and minimum downtime, activation costs, and storage limits. The ultimate goal is to delineate the Pareto front, representing the optimal trade-offs between the three targets. We evaluate variants of the ε-constraint algorithm for their effectiveness in coordinating these objectives, with a simultaneous focus on the quality of the estimated Pareto front and computational efficiency. One algorithm explores solutions on an evenly spaced grid in the objective space, while another dynamically adjusts the grid based on identified solutions. Initial findings highlight the strengths and limitations of each algorithm, providing guidance on algorithm selection depending on desired outcomes and computational constraints.
Our study emphasizes that the optimal choice of algorithm hinges on the density and distribution of solutions in the feasible space. Whether solutions are clustered or evenly distributed significantly influences algorithm performance. These insights contribute to a nuanced understanding of algorithm selection for multi-objective multi-energy system optimization, offering valuable guidance for future research and practical applications for planning sustainable district heating networks.
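For reference, the basic ε-constraint mechanics look as follows on a toy bi-objective LP: one objective is minimized while the other is bounded by a grid of ε values, tracing an approximate Pareto front. The numbers and model are illustrative, not the Märkisches Viertel instance.

```python
# Epsilon-constraint sweep for a bi-objective LP: minimize cost subject to an
# emission budget eps, for eps on an evenly spaced grid.
import numpy as np
from scipy.optimize import linprog

# Two heat sources: x1 cheap but emission-heavy, x2 clean but expensive.
cost = [1.0, 3.0]          # EUR per MWh (invented)
emis = [0.8, 0.1]          # t CO2 per MWh (invented)
demand = 100.0             # MWh of heat to be produced

for eps in np.linspace(10, 80, 5):                # emission budget grid
    res = linprog(c=cost,
                  A_ub=[[-1, -1], emis],          # cover demand, cap CO2
                  b_ub=[-demand, eps],
                  bounds=[(0, None), (0, None)])
    if res.status == 0:
        print(f"eps={eps:5.1f} t  cost={res.fun:6.1f} EUR  x={res.x.round(1)}")
```

The grid-adjusting variant mentioned above would place the next ε values based on the solutions found so far instead of keeping them evenly spaced.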
Modeling-Simulation-Optimization workflows play a fundamental role in applied mathematics. The Mathematical Research Data Initiative, MaRDI, responded to this by developing a FAIR and machine-interpretable template for a comprehensive documentation of such workflows. MaRDMO, a plugin for the Research Data Management Organiser, enables scientists from diverse fields to document and publish their workflows on the MaRDI Portal seamlessly using the MaRDI template. Central to these workflows are mathematical models. MaRDI addresses them with the MathModDB ontology, offering a structured formal model description. Here, we showcase the interaction between MaRDMO and the MathModDB Knowledge Graph through an algebraic modeling workflow from the Digital Humanities. This demonstration underscores the versatility of both services beyond their original numerical domain.
At present, data management plans (DMPs) are still often perceived as mere documents for funding agencies providing clarity on how research data will be handled during a funded project, but are not usually actively involved in the processes. However, they contain a great deal of information that can be shared automatically to facilitate active research data management (RDM) by providing metadata to research infrastructures and supporting communication between all involved stakeholders. This position paper brings together a number of ideas developed and collected during interdisciplinary workshops of the Data Management Planning Working Group (infra-dmp), which is part of the section Common Infrastructures of the National Research Data Infrastructure (NFDI) in Germany. We present our vision of a possible future role of DMPs, templates, and tools in the upcoming NFDI service architecture.
In applied mathematics and related disciplines, the modeling-simulation-optimization workflow is a prominent scheme, with mathematical models and numerical algorithms playing a crucial role. For these types of mathematical research data, the Mathematical Research Data Initiative has developed, merged and implemented ontologies and knowledge graphs. This contributes to making mathematical research data FAIR by introducing semantic technology and documenting the mathematical foundations accordingly. Using the concrete example of microfracture analysis of porous media, it is shown how the knowledge of the underlying mathematical model and the corresponding numerical algorithms for its solution can be represented by the ontologies.
Ontologies and knowledge graphs for mathematical algorithms and models that were developed by the Mathematical Research Data Initiative are presented. These enable FAIR data handling in mathematics and the applied disciplines. Moreover, challenges of harmonization during the ontology development are discussed.
MaRDMO Plugin
(2023)
MaRDMO, a plugin for the Research Data Management Organiser, was developed in the Mathematical Research Data Initiative to document interdisciplinary workflows using a standardised scheme. Interdisciplinary workflows recorded this way are published directly on the MaRDI portal. In addition, central information is integrated into the MaRDI knowledge graph. Next to the documentation, MaRDMO offers the possibility to retrieve existing interdisciplinary workflows from the MaRDI Knowledge Graph to allow the reproduction of the initial work and to provide scientists with new research impulses. Thus, MaRDMO creates a community-driven knowledge loop that could help to overcome the replication crisis.
Research data are crucial in mathematics and all scientific disciplines, as they form the foundation for empirical evidence by enabling the validation and reproducibility of scientific findings. Mathematical research data (MathRD) have become vast and complex, and their interdisciplinary potential and abstract nature make them ubiquitous in various scientific fields. The volume of data and the velocity of its creation are rapidly increasing due to advancements in data science and computing power. This complexity extends to other disciplines, resulting in diverse research data and computational models. Thus, proper handling of research data is crucial both within mathematics and for its manifold connections and exchange with other disciplines. The National Research Data Infrastructure (NFDI), funded by the federal and state governments of Germany, consists of discipline-oriented consortia, including the Mathematical Research Data Initiative (MaRDI). MaRDI has been established to develop services, guidelines and outreach measures for all aspects of MathRD, and thus support the mathematical research community. Research data management (RDM) should be an integral component of every scientific project, and is becoming a mandatory component of grants with funding bodies such as the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation). At the core of RDM are the FAIR (Findable, Accessible, Interoperable, and Reusable) principles. This document aims to guide mathematicians and researchers from related disciplines who create RDM plans. It highlights the benefits and opportunities of RDM in mathematics and interdisciplinary studies, showcases examples of diverse MathRD, and suggests technical solutions that meet the requirements of funding agencies with specific examples. The document is regularly updated to reflect the latest developments within the mathematical community represented by MaRDI.
Voronoi Graph - Improved raycasting and integration schemes for high dimensional Voronoi diagrams
(2024)
The computation of Voronoi diagrams, or their dual Delaunay triangulations, is difficult in high dimensions. In a recent publication, Polianskii and Pokorny propose an iterative randomized algorithm facilitating the approximation of Voronoi tessellations in high dimensions. In this paper, we provide an improved vertex search method that is not only exact but even faster than the bisection method that was previously recommended. Building on this, we also provide a depth-first graph-traversal algorithm which allows us to compute the entire Voronoi diagram. This enables us to compare the outcomes with those of classical algorithms like Qhull, which we either match or marginally beat in terms of computation time. We furthermore show how the raycasting algorithm naturally lends itself to a Monte Carlo approximation of the volume and boundary integrals of the Voronoi cells, both of which are of importance for finite volume methods. We compare the Monte Carlo methods to exact polygonal integration, as well as a hybrid approximation scheme.
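The sampling idea behind the volume estimation can be illustrated with a plain hit-or-miss Monte Carlo scheme, which is cruder than the raycasting-based estimator of the paper but shows why sampling sidesteps exact high-dimensional polytope computations:

```python
# Hit-or-miss Monte Carlo estimate of Voronoi cell volumes inside the unit
# cube: each random sample is assigned to its nearest generator, and the
# fraction of samples per cell estimates the cell's volume share.
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(1)
dim, n_gen, n_samples = 5, 20, 200_000
generators = rng.random((n_gen, dim))               # Voronoi sites in [0,1]^dim

samples = rng.random((n_samples, dim))
owner = cdist(samples, generators).argmin(axis=1)   # cell containing sample

volumes = np.bincount(owner, minlength=n_gen) / n_samples
print(volumes.round(4), "sum:", volumes.sum())      # shares of the unit cube
```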
Introduction and application of a new approach for model-based optical bidirectional measurements
(2024)
The Koopman operator has entered and transformed many research areas in recent years. Although the underlying concept, representing highly nonlinear dynamical systems by infinite-dimensional linear operators, has been known for a long time, the availability of large data sets and efficient machine learning algorithms for estimating the Koopman operator from data makes this framework extremely powerful and popular. Koopman operator theory allows us to gain insights into the characteristic global properties of a system without requiring detailed mathematical models. We will show how these methods can also be used to analyze complex networks and highlight relationships between Koopman operators and graph Laplacians.
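A standard way to estimate the Koopman operator from data is extended dynamic mode decomposition (EDMD): lift snapshot pairs with a dictionary of observables and solve a least-squares problem. A minimal sketch on a toy system follows; it is illustrative only and unrelated to the network setting of the paper.

```python
# EDMD sketch: estimate a finite-dimensional Koopman matrix K from a
# trajectory of a toy nonlinear system using monomial observables.
import numpy as np

rng = np.random.default_rng(0)
x = np.empty(2000)
x[0] = 1.5
for t in range(1999):                    # x_{t+1} = 0.9 x - 0.1 x^3 + noise
    x[t + 1] = 0.9 * x[t] - 0.1 * x[t] ** 3 + 0.01 * rng.standard_normal()

def dictionary(v):                       # observables 1, x, x^2, x^3
    return np.vander(v, 4, increasing=True)

Psi_x, Psi_y = dictionary(x[:-1]), dictionary(x[1:])
K, *_ = np.linalg.lstsq(Psi_x, Psi_y, rcond=None)   # Psi_x @ K ≈ Psi_y
print(np.sort(np.abs(np.linalg.eigvals(K)))[::-1])  # dominant eigenvalues
```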
This paper is concerned with collective variables, or reaction coordinates, that map a discrete-in-time Markov process X_n in R^d to a (much) smaller dimension k ≪ d. We define the effective dynamics under a given collective variable map ξ as the best Markovian representation of X_n under ξ. The novelty of the paper is that it gives strict criteria for selecting optimal collective variables via the properties of the effective dynamics. In particular, we show that the transition density of the effective dynamics of the optimal collective variable solves a relative entropy minimization problem from a certain family of densities to the transition density of X_n. We also show that many transfer operator-based data-driven numerical approaches essentially learn quantities of the effective dynamics. Furthermore, we obtain various error estimates for the effective dynamics in approximating dominant timescales/eigenvalues and transition rates of the original process X_n, and show how optimal collective variables minimize these errors. Our results contribute to the development of theoretical tools for the understanding of complex dynamical systems, e.g. molecular kinetics, on large timescales. These results shed light on the relations among existing data-driven numerical approaches for identifying good collective variables, and they also motivate the development of new methods.
The multi-grid reaction-diffusion master equation (mgRDME) provides a generalization of stochastic compartment-based reaction-diffusion modelling described by the standard reaction-diffusion master equation (RDME). By enabling different resolutions on lattices for biochemical species with different diffusion constants, the mgRDME approach improves both accuracy and efficiency of compartment-based reaction-diffusion simulations. The mgRDME framework is examined through its application to morphogen gradient formation in stochastic reaction-diffusion scenarios, using both an analytically tractable first-order reaction network and a model with a second-order reaction. The results obtained by the mgRDME modelling are compared with the standard RDME model and with the (more detailed) particle-based Brownian dynamics simulations. The dependence of error and numerical cost on the compartment sizes is defined and investigated through a multi-objective optimization problem.
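The standard RDME baseline that mgRDME generalizes can be sketched compactly: molecules jump between lattice compartments at rate D/h² and react within compartments, simulated with the Gillespie algorithm. The sketch below uses a single lattice resolution and invented parameters; the mgRDME would additionally assign different resolutions to species with different diffusion constants.

```python
# Compartment-based (RDME-style) Gillespie simulation on a 1D lattice:
# diffusion as jumps between compartments, plus first-order degradation.
import numpy as np

rng = np.random.default_rng(0)
K, h = 10, 0.1                    # number of compartments, compartment size
D, deg = 1e-3, 0.05               # diffusion constant, degradation rate
jump = D / h**2                   # jump rate to a neighbouring compartment
n = np.zeros(K, dtype=int)
n[0] = 100                        # morphogen pulse at the left boundary

t, t_end = 0.0, 50.0
while t < t_end and n.sum() > 0:
    right = jump * n
    right[-1] = 0.0               # reflecting boundaries
    left = jump * n
    left[0] = 0.0
    rates = np.concatenate([right, left, deg * n])
    total = rates.sum()
    t += rng.exponential(1.0 / total)
    r = rng.choice(3 * K, p=rates / total)
    i = r % K
    if r < K:                     # jump right
        n[i] -= 1
        n[i + 1] += 1
    elif r < 2 * K:               # jump left
        n[i] -= 1
        n[i - 1] += 1
    else:                         # degradation
        n[i] -= 1
print(n)                          # decaying gradient away from the source
```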
Existing planning approaches for onshore wind farm siting and grid integration often fail to achieve minimum-cost solutions or to account for social and environmental considerations. In this paper, we develop an exact approach for the integrated layout and cable routing problem of onshore wind farm planning using the Quota Steiner tree problem. Applying a novel transformation on a known directed cut formulation, reduction techniques, and heuristics, we design an exact solver that makes large problem instances solvable and outperforms generic MIP solvers. In selected regions of Germany, the trade-offs between minimizing costs and landscape impact of onshore wind farm siting are investigated. Although our case studies show large trade-offs between the objective criteria of cost and landscape impact, small burdens on one criterion can significantly improve the other. In addition, we demonstrate that, contrary to many approaches for exclusive turbine siting, grid integration must be optimized simultaneously to avoid excessive costs or landscape impacts in the course of a wind farm project. Our novel problem formulation and the developed solver can assist planners in decision-making and help optimize wind farms in large regions in the future.
AI-guided pipeline for protein–protein interaction drug discovery identifies a SARS-CoV-2 inhibitor
(2024)
Protein–protein interactions (PPIs) offer great opportunities to expand the druggable proteome and therapeutically tackle various diseases, but remain challenging targets for drug discovery. Here, we provide a comprehensive pipeline that combines experimental and computational tools to identify and validate PPI targets and perform early-stage drug discovery. We have developed a machine learning approach that prioritizes interactions by analyzing quantitative data from binary PPI assays or AlphaFold-Multimer predictions. Using the quantitative assay LuTHy together with our machine learning algorithm, we identified high-confidence interactions among SARS-CoV-2 proteins for which we predicted three-dimensional structures using AlphaFold-Multimer. We employed VirtualFlow to target the contact interface of the NSP10-NSP16 SARS-CoV-2 methyltransferase complex by ultra-large virtual drug screening. Thereby, we identified a compound that binds to NSP10 and inhibits its interaction with NSP16, while also disrupting the methyltransferase activity of the complex and SARS-CoV-2 replication. Overall, this pipeline will help to prioritize PPI targets to accelerate the discovery of early-stage drug candidates targeting protein complexes and pathways.
We present a heuristic solution approach for the rolling stock rotation problem with predictive maintenance (RSRP-PdM). The task of this problem is to assign a sequence of trips to each of the vehicles and to schedule their maintenance such that all trips can be operated. Here, the health states of the vehicles are considered to be random variables distributed by a family of probability distribution functions, and the maintenance services should be scheduled based on the failure probability of the vehicles. The proposed algorithm first generates a solution by solving an integer linear program and then heuristically improves this solution by applying a local search procedure. For this purpose, the trips assigned to the vehicles are split up and recombined, whereby additional deadhead trips can be inserted between the partial assignments. Subsequently, the maintenance is scheduled by solving a shortest path problem in a state-expanded version of a space-time graph restricted to the trips of the individual vehicles. The solution approach is tested and evaluated on a set of test instances based on real-world timetables.
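The per-vehicle maintenance-scheduling subproblem can be pictured as a shortest path in a state-expanded graph. The sketch below is a strongly simplified stand-in: deterministic wear plays the role of the probabilistic health state, a linear penalty replaces the failure-probability model, and all numbers are invented.

```python
# Toy state-expanded shortest path for one vehicle: nodes are (trip index,
# wear state); arcs either operate the next trip or maintain first (reset).
import heapq

trips_wear = [2, 3, 1, 4, 2]        # wear added by each trip
MAINT_COST, RISK_PEN = 10.0, 1.0    # maintenance cost, penalty per wear unit

def plan(trips):
    heap = [(0.0, 0, 0, ())]        # (cost, next trip, wear, maintenance plan)
    best = {}
    while heap:
        cost, k, wear, maint = heapq.heappop(heap)
        if k == len(trips):
            return cost, maint      # first settled goal state is optimal
        if best.get((k, wear), float("inf")) <= cost:
            continue
        best[(k, wear)] = cost
        w = wear + trips[k]
        # arc 1: operate the next trip directly, paying a wear-dependent risk
        heapq.heappush(heap, (cost + RISK_PEN * w, k + 1, w, maint))
        # arc 2: maintain first (wear resets to 0), then operate the trip
        heapq.heappush(heap, (cost + MAINT_COST + RISK_PEN * trips[k],
                              k + 1, trips[k], maint + (k,)))

print(plan(trips_wear))             # (total cost, trips preceded by maintenance)
```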
An Iterative Refinement Approach for the Rolling Stock Rotation Problem with Predictive Maintenance
(2024)
The rolling stock rotation problem with predictive maintenance (RSRP-PdM) involves the assignment of trips to a fleet of vehicles with integrated maintenance scheduling based on the predicted failure probability of the vehicles. These probabilities are determined by the health states of the vehicles, which are considered to be random variables distributed by a parameterized family of probability distribution functions. During the operation of the trips, the corresponding parameters get updated. In this article, we present a dual solution approach for RSRP-PdM and generalize a linear programming based lower bound for this problem to families of probability distribution functions with more than one parameter. For this purpose, we define a rounding function that allows for a consistent underestimation of the parameters and model the problem by a state-expanded event-graph in which the possible states are restricted to a discrete set. This induces a flow problem that is solved by an integer linear program. We show that the iterative refinement of the underlying discretization leads to solutions that converge from below to an optimal solution of the original instance. Thus, the linear relaxation of the considered integer linear program results in a lower bound for RSRP-PdM. Finally, we report on the results of computational experiments conducted on a library of test instances.
Metadata quality is a key determinant of the usefulness and value of cultural heritage data. "Good" metadata significantly increase the findability, interoperability, and usability of data. With regard to retrieval and discovery, linking in the context of Linked Open Data, and scholarly data mining, this quality depends substantially on the use of machine-readable controlled vocabularies. The present work investigates this quantitatively. The data basis is the metadata from Berlin museums aggregated in the Deutsche Digitale Bibliothek (approximately 1.2 million metadata objects in LIDO format).
Collaborative comparisons and combinations of epidemic models are used as policy-relevant evidence during epidemic outbreaks. In the process of collecting multiple model projections, such collaborations may gain or lose relevant information. Typically, modellers contribute a probabilistic summary at each time-step. We compared this to directly collecting simulated trajectories. We aimed to explore information on key epidemic quantities, ensemble uncertainty, and performance against data, investigating the potential to continuously gain information from a single cross-sectional collection of model results.
Methods: We compared July 2022 projections from the European COVID-19 Scenario Modelling Hub. Five modelling teams projected incidence in Belgium, the Netherlands, and Spain. We compared projections by incidence, peaks, and cumulative totals. We created a probabilistic ensemble drawn from all trajectories, and compared it to ensembles from a median across each model's quantiles, or a linear opinion pool. We measured the predictive accuracy of individual trajectories against observations, using this in a weighted ensemble. We repeated this sequentially against increasing weeks of observed data. We evaluated these ensembles to reflect performance with varying observed data.
Results: By collecting modelled trajectories, we showed policy-relevant epidemic characteristics. Trajectories contained a right-skewed distribution well represented by an ensemble of trajectories or a linear opinion pool, but not by the models' quantile intervals. Ensembles weighted by performance typically retained the range of plausible incidence over time, and in some cases narrowed this by excluding some epidemic shapes.
Conclusions: We observed several information gains from collecting modelled trajectories rather than quantile distributions, including the potential for continuously updated information from a single model collection. The value of information gains and losses may vary with each collaborative effort's aims, depending on the needs of projection users. Understanding the differing information potential of methods to collect model projections can support the accuracy, sustainability, and communication of collaborative infectious disease modelling efforts.
Data availability: All code and data are available on GitHub: https://github.com/covid19-forecast-hub-europe/aggregation-info-loss
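As a toy numerical illustration of the information difference described above (synthetic data, not the Hub models): pooling raw trajectories preserves a right-skewed predictive distribution, while a median across per-model quantile summaries flattens its upper tail.

```python
# Compare a trajectory-pooled ensemble with a median-of-quantiles ensemble.
import numpy as np

rng = np.random.default_rng(0)
# 5 "models", each contributing 100 simulated incidence values (scalar here
# for simplicity); one model is strongly right-skewed.
models = [rng.lognormal(mean=mu, sigma=s, size=100)
          for mu, s in [(2, 0.3), (2, 0.3), (2, 0.4), (2.2, 0.3), (3, 1.0)]]

probs = [0.05, 0.5, 0.95]
pooled = np.quantile(np.concatenate(models), probs)      # trajectory ensemble
per_model = np.array([np.quantile(m, probs) for m in models])
median_of_quantiles = np.median(per_model, axis=0)       # quantile ensemble

print("pooled trajectories :", pooled.round(1))
print("median of quantiles :", median_of_quantiles.round(1))
```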
The decarbonization of the European energy system demands a rapid and comprehensive transformation while securing energy supplies at all times. Still, natural gas plays a crucial role in this process. Recent unexpected events forced drastic changes in gas routes throughout Europe. Therefore, operational-level analysis of the gas transport networks and technical capacities to cope with these transitions using unconventional scenarios has become essential.
Unfortunately, data limitations often hinder such analyses. To overcome this challenge, we propose a mathematical model-based scenario generator that enables operational analysis of the European gas network using open data. Our approach focuses on the consistent analysis of specific partitions of the gas transport network, whose network topology data is readily available. We generate reproducible and consistent node-based gas in/out-flow scenarios for these defined network partitions to enable feasibility analysis and data quality assessment.
Our proposed method is demonstrated through several applications that address the feasibility analysis and data quality assessment of the German gas transport network. By using open data and a mathematical modeling approach, our method allows for a more comprehensive understanding of the gas transport network's behavior and assists in decision-making during the transition to decarbonization.
For energy systems research, software models are a core element of scenario analysis. The UNSEEN research project aimed to compute an unprecedented number of model-based energy scenarios in order to better assess uncertainties, primarily using linear-optimization energy system models. To this end, extensive parameter variations were applied to energy scenarios, and the main methodological obstacle in this context was addressed: the computational tractability of the resulting mathematical optimization problems. The predecessor project BEAM-ME laid the foundation for using high-performance computing (HPC) to solve these models through the development and application of the open-source solver PIPS-IPM++. In UNSEEN, this solver was the central component of a workflow implemented on the JUWELS supercomputer at Forschungszentrum Jülich for generating, solving, and evaluating energy scenarios against multiple criteria. For the efficient generation and communication of model instances for mathematical optimization methods on HPC, a further workflow component was developed by GAMS Software GmbH: the scenario generator. The further development of solution algorithms for linear-optimization energy system models focused on mixed-integer optimization problems, which must be solved to model concrete infrastructures and measures for implementing the energy transition. The associated algorithm development work was led by Technische Universität Berlin, which was supported in the design and implementation of these methods by the Zuse Institute Berlin.
Optimized Sensing on Gold Nanoparticles Created by Graded-Layer Magnetron Sputtering and Annealing
(2024)
Task-adapted image reconstruction methods using end-to-end trainable neural networks (NNs) have been proposed to optimize reconstruction for subsequent processing tasks, such as segmentation. However, their training typically requires considerable hardware resources and thus, only relatively simple building blocks, e.g. U-Nets, are typically used, which, albeit powerful, do not integrate model-specific knowledge.
In this work, we extend an end-to-end trainable task-adapted image reconstruction method for a clinically realistic reconstruction and segmentation problem of bone and cartilage in 3D knee MRI by incorporating statistical shape models (SSMs). The SSMs model the prior information and help to regularize the segmentation maps as a final post-processing step.
We compare the proposed method to a state-of-the-art (SOTA) simultaneous multitask learning approach for image reconstruction and segmentation (MTL) and to a complex SSMs-informed segmentation pipeline (SIS).
Our experiments show that the combination of joint end-to-end training and SSMs to further regularize the segmentation maps obtained by MTL substantially improves the results, especially in terms of mean and maximal surface errors.
In particular, we achieve the segmentation quality of SIS and, at the same time, a substantial model reduction, with a five-fold reduction in model parameters and a computational speedup of an order of magnitude.
Remarkably, even for undersampling factors of up to R=8, the obtained segmentation maps are of comparable quality to those obtained by SIS from ground-truth images.
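The SSM regularization step can be illustrated in miniature: a PCA shape model is fitted to training shapes, and an implausible input is projected onto the leading modes with clipped coefficients. The sketch below uses invented 2D landmark data, not the paper's 3D knee model.

```python
# Statistical shape model (SSM) regularization: project a corrupted shape
# onto a PCA shape space and clip mode coefficients to a plausible range.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 20, endpoint=False)
# training set: 50 shapes, each 20 landmarks on a randomly scaled ellipse
train = np.stack([np.concatenate([(1.0 + 0.2 * rng.standard_normal()) * np.cos(t),
                                  (0.6 + 0.1 * rng.standard_normal()) * np.sin(t)])
                  for _ in range(50)])

mean = train.mean(axis=0)
U, S, Vt = np.linalg.svd(train - mean, full_matrices=False)
k = 3
sigma = S[:k] / np.sqrt(len(train) - 1)     # std of each shape mode

noisy = train[0] + 0.3 * rng.standard_normal(train.shape[1])
b = Vt[:k] @ (noisy - mean)                 # mode coefficients of the input
b = np.clip(b, -3 * sigma, 3 * sigma)       # constrain to plausible shapes
regularized = mean + Vt[:k].T @ b           # shape-model-consistent output
print(np.linalg.norm(noisy - train[0]),     # error before regularization
      np.linalg.norm(regularized - train[0]))  # error after (smaller)
```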
The SCIP Optimization Suite provides a collection of software packages for mathematical optimization, centered around the constraint integer programming framework SCIP. This report discusses the enhancements and extensions included in the SCIP Optimization Suite 9.0. The updates in SCIP 9.0 include improved symmetry handling, additions and improvements of nonlinear handlers and primal heuristics, a new cut generator and two new cut selection schemes, a new branching rule, a new LP interface, and several bug fixes. The SCIP Optimization Suite 9.0 also features new Rust and C++ interfaces for SCIP and a new Python interface for SoPlex, along with enhancements to existing interfaces. The release further includes new and improved features in the LP solver SoPlex, the presolving library PaPILO, the parallel framework UG, the decomposition framework GCG, and the SCIP extension SCIP-SDP. These additions and enhancements have resulted in an overall performance improvement of SCIP in terms of solving time and number of nodes in the branch-and-bound tree, as well as improved reliability of the solver.
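For orientation, a minimal example of calling SCIP through the established PySCIPOpt Python interface (a small integer program; requires the pyscipopt package and is unrelated to the new SoPlex Python interface mentioned above):

```python
# Tiny integer program solved with SCIP via PySCIPOpt.
from pyscipopt import Model

m = Model("toy")
x = m.addVar("x", vtype="I", lb=0)        # integer variables
y = m.addVar("y", vtype="I", lb=0)
m.addCons(2 * x + 3 * y <= 12)            # a single knapsack-like constraint
m.setObjective(3 * x + 4 * y, sense="maximize")
m.optimize()
print(m.getStatus(), m.getVal(x), m.getVal(y), m.getObjVal())  # x=6, y=0, 18
```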
We study the solution of the rolling stock rotation problem with predictive maintenance (RSRP-PdM) by an iterative refinement approach that is based on a state-expanded event-graph. In this graph, the states are parameters of a failure distribution, and paths correspond to vehicle rotations with associated health state approximations. An optimal set of paths including maintenance can be computed by solving an integer linear program. Afterwards, the graph is refined and the procedure repeated. An associated linear program gives rise to a lower bound that can be used to determine the solution quality. Computational results for six instances derived from real-world timetables of a German railway company are presented. The results show the effectiveness of the approach and the quality of the solutions.
Markov processes serve as foundational models in many scientific disciplines, such as molecular dynamics, and their simulation forms a common basis for analysis. While simulations produce useful trajectories, obtaining macroscopic information directly from microstate data presents significant challenges. This paper addresses this gap by introducing the concept of membership functions being the macrostates themselves. We derive equations for the holding times of these macrostates and demonstrate their consistency with the classical definition. Furthermore, we discuss the application of the ISOKANN method for learning these quantities from simulation data. In addition, we present a novel method for extracting transition paths based on the ISOKANN results and demonstrate its efficacy by applying it to simulations of the μ-opioid receptor. With this approach we provide a new perspective on analyzing the macroscopic behaviour of Markov systems.
Estimating the rate of rare conformational changes in molecular systems is one of the goals of molecular dynamics simulations. In the past few decades, much progress has been made in data-based approaches toward this problem. In contrast, model-based methods, such as the Square Root Approximation (SqRA), derive these quantities directly from the potential energy functions. In this article, we demonstrate how the SqRA formalism naturally blends with the tensor structure obtained by coupling multiple systems, resulting in the tensor-based Square Root Approximation (tSqRA). It enables the efficient treatment of high-dimensional systems using the SqRA and provides an algebraic expression for the impact of coupling energies between molecular subsystems. Based on the tSqRA, we also develop the projected rate estimation, a hybrid data- and model-based algorithm that efficiently estimates the slowest rates for coupled systems. In addition, we investigate the possibility of integrating low-rank approximations within this framework to maximize the potential of the tSqRA.
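The basic SqRA construction (before the tensor extension) is compact enough to sketch: on a regular grid, the rate between adjacent cells is proportional to sqrt(pi_j / pi_i) with Boltzmann weights pi ~ exp(-beta * V). An illustrative 1D version with an invented double-well potential:

```python
# SqRA rate matrix on a regular 1D grid: the spectrum of the resulting
# generator Q approximates the slow kinetics of the potential V.
import numpy as np

n, beta, D = 100, 2.0, 1.0
x = np.linspace(-2, 2, n)
h = x[1] - x[0]
V = (x**2 - 1.0)**2                      # double-well potential (invented)
pi = np.exp(-beta * V)                   # unnormalized Boltzmann weights

Q = np.zeros((n, n))
flux = D / h**2                          # geometric prefactor (regular grid)
for i in range(n - 1):
    Q[i, i + 1] = flux * np.sqrt(pi[i + 1] / pi[i])
    Q[i + 1, i] = flux * np.sqrt(pi[i] / pi[i + 1])
Q[np.diag_indices(n)] = -Q.sum(axis=1)   # generator: rows sum to zero

evals = np.sort(np.linalg.eigvals(Q).real)[::-1]
print(evals[:3])   # 0, then the slow rate of transitions between the wells
```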