The 100 most recently published documents
5q-spinal muscular atrophy (SMA) is a neuromuscular disorder (NMD) that has become one of the first of the roughly 5% of rare diseases that are treatable. The efficacy of new SMA therapies is creating a dynamic SMA patient landscape in which disease progression and scoliosis development play a central role yet remain difficult to anticipate. New approaches to anticipating disease progression and its associated sequelae will be needed to continuously provide these patients with the best standard of care. Here we developed an interpretable machine learning (ML) model that can function as an assistive tool in the anticipation of SMA-associated scoliosis based on disease progression markers. We collected longitudinal data from 86 genetically confirmed SMA patients and selected six features routinely assessed over time to train a random forest classifier. The model achieved a mean accuracy of 0.77 (SD 0.2) and an average ROC AUC of 0.85 (SD 0.17). For class 1, 'scoliosis', the average precision was 0.84 (SD 0.11), recall 0.89 (SD 0.22), and F1-score 0.85 (SD 0.17). Our trained model could predict scoliosis using the selected disease progression markers, consistent with the radiological measurements. In post-validation, the model could predict scoliosis in patients who were unseen during training. We also demonstrate that rare-disease data sets can be wrangled to build predictive ML models. Interpretable ML models can function as assistive tools in a changing disease landscape and have the potential to democratize expertise that is otherwise clustered at specialized centers.
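As an illustration of the setup described above, here is a minimal sketch of training and evaluating a random forest with stratified cross-validation in scikit-learn; the feature matrix, labels, and hyperparameters are placeholders, not the study's actual data or configuration.

```python
# Minimal sketch of the described setup: a random forest classifier
# evaluated with cross-validation. Features and labels are dummy data,
# not the study's actual progression markers.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_validate

X = np.random.rand(86, 6)           # 86 patients x 6 progression markers (dummy)
y = np.random.randint(0, 2, 86)     # 1 = scoliosis, 0 = no scoliosis (dummy)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_validate(clf, X, y, cv=cv,
                        scoring=["accuracy", "roc_auc", "precision", "recall", "f1"])

for metric in ["accuracy", "roc_auc", "precision", "recall", "f1"]:
    vals = scores[f"test_{metric}"]
    print(f"{metric}: mean {vals.mean():.2f} (SD {vals.std():.2f})")
```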
A major restriction to applying deep learning methods in cryo-electron tomography (cryo-ET) is the lack of annotated data: many large learning-based models cannot be applied to these images because adequate experimental ground truth is missing. One appealing alternative to time-consuming and expensive experimental data acquisition and annotation is the generation of simulated cryo-ET images. In this context, we exploit the public cryo-ET simulator PolNet to generate three datasets of two macromolecular structures, namely the ribosomal complex 4v4r and the Thermoplasma acidophilum 20S proteasome 3j9i. We selected these two particles to test whether our models work for macromolecular structures both with and without rotational symmetry. The three datasets contain 50, 150, and 450 tomograms, respectively, with a voxel size of 10 Å. Here, we publish patches of size 40 × 40 × 40 extracted from the medium-sized dataset, with 26,703 samples of 4v4r and 40,671 samples of 3j9i. The original tomograms from which the samples were extracted are of size 500 × 500 × 250. Finally, it should be noted that the currently published test dataset is employed for reporting the results of our paper "DeepOrientation: Deep Orientation Estimation of Macromolecules in Cryo-electron Tomography".
Although Virtual Reality (VR) has undoubtedly improved human interaction with 3D data, users still face difficulties retaining important details of complex digital objects in preparation for physical tasks. To address this issue, we evaluated the potential of visuohaptic integration to improve the memorability of virtual objects in immersive visualizations. In a user study (N=20), participants performed a delayed match-to-sample task where they memorized stimuli of visual, haptic, or visuohaptic encoding conditions. We assessed performance differences between the conditions through error rates and response time. We found that visuohaptic encoding significantly improved memorization accuracy compared to unimodal visual and haptic conditions. Our analysis indicates that integrating haptics into immersive visualizations enhances the memorability of digital objects. We discuss its implications for the optimal encoding design in VR applications that assist professionals who need to memorize and recall virtual objects in their daily work.
Gesture recognition is a tool to enable novel interactions with different techniques and applications, such as Mixed Reality and Virtual Reality environments. Despite all the recent advancements in gesture recognition from skeletal data, it is still unclear how well state-of-the-art techniques perform in a scenario using precise motions with two hands. This paper presents the results of the SHREC 2024 contest, organized to evaluate methods for recognizing highly similar hand motions from the skeletal spatial coordinate data of both hands. The task is the recognition of seven motion classes given their frame-by-frame spatial coordinates. The skeletal data were captured using a Vicon system and pre-processed into a coordinate system using Blender and Vicon Shogun Post. We created a small, novel dataset with a high variety of durations in frames. This paper reports the results of the contest, presenting the techniques developed by the five research groups for this challenging task and comparing them to our baseline method.
Time-varying Extremum Graphs
(2024)
We introduce the time-varying extremum graph (TVEG), a topological structure to support visualization and analysis of a time-varying scalar field. The extremum graph is a substructure of the Morse-Smale complex. It captures the adjacency relationship between cells in the Morse decomposition of a scalar field. We define the TVEG as a time-varying extension of the extremum graph and demonstrate how it captures salient feature tracks within a dynamic scalar field. We formulate the construction of the TVEG as an optimization problem and describe an algorithm for computing the graph. We also demonstrate the capabilities of the TVEG for the identification and exploration of topological events such as deletion, generation, split, and merge within a dynamic scalar field via comprehensive case studies, including viscous fingers and 3D von Kármán vortex street datasets.
The rise of digital social media has strengthened the coevolution of public opinions and social interactions, which shape social structures and collective outcomes in increasingly complex ways. Existing literature often explores this interplay as a one-directional influence, focusing on how opinions determine social ties within adaptive networks. However, this perspective overlooks the intrinsic dynamics driving social interactions, which can significantly influence how opinions form and evolve. In this work, we address this gap by introducing co-evolving opinion and social dynamics using stochastic agent-based models. Agents' mobility in a social space is governed by both their social and opinion similarity with others. Likewise, the dynamics of opinion formation are driven by the opinions of agents in their social vicinity. We analyze the underlying social and opinion interaction networks and explore the mechanisms influencing the appearance of emergent phenomena, such as echo chambers and opinion consensus. To illustrate the model's potential for real-world analysis, we apply it to General Social Survey data on political identity and public opinion regarding governmental issues. Our findings highlight the model's strength in capturing the coevolution of social connections and individual opinions over time.
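A toy sketch of such coupled dynamics (not the authors' exact model): agents move toward socially and opinion-wise similar agents and average opinions over their social vicinity; all parameters and thresholds below are illustrative.

```python
# Toy sketch of coupled opinion/social dynamics: agents move toward
# similar agents in a 2D social space and update opinions from their
# social vicinity. Parameters are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)
n, steps, eps, radius = 100, 500, 0.05, 0.3
pos = rng.uniform(0, 1, (n, 2))      # positions in a 2D social space
op = rng.uniform(-1, 1, n)           # opinions in [-1, 1]

for _ in range(steps):
    d_pos = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
    d_op = np.abs(op[:, None] - op[None, :])
    similar = (d_pos < radius) & (d_op < 0.5)    # social + opinion similarity
    np.fill_diagonal(similar, False)
    for i in range(n):
        nbrs = np.where(similar[i])[0]
        if len(nbrs):
            pos[i] += eps * (pos[nbrs].mean(axis=0) - pos[i])  # social attraction
            op[i] += eps * (op[nbrs].mean() - op[i])           # opinion averaging
    pos += rng.normal(0, 0.01, pos.shape)                      # noisy mobility
```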
Generating the simulated training data needed to construct sufficiently accurate surrogate models for efficient optimization or parameter identification can incur a huge computational effort in the offline phase. We consider a fully adaptive greedy approach to the computational design of experiments problem, using gradient-enhanced Gaussian process regression as surrogates. Designs are incrementally defined by solving an optimization problem for accuracy given a certain computational budget. We address not only the choice of evaluation points but also the required simulation accuracy, for both values and gradients of the forward model.
Numerical results show a significant reduction of the computational effort compared to just position-adaptive and static designs as well as a clear benefit of including gradient information into the surrogate training.
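A simplified sketch of the adaptive greedy loop under strong assumptions: plain (not gradient-enhanced) Gaussian process regression, unit evaluation cost, and a fixed candidate grid; `forward_model` is a stand-in for the simulator.

```python
# Simplified sketch of an adaptive greedy design loop: repeatedly add
# the candidate point with the largest predictive uncertainty until the
# budget is spent. Gradient-enhanced GPs and adaptive simulation
# accuracy (as in the paper) are omitted for brevity.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

forward_model = lambda x: np.sin(3 * x).ravel()   # stand-in for the simulator
candidates = np.linspace(0, 1, 200).reshape(-1, 1)
X, budget = [0.5], 10                             # initial design, eval budget

while budget > 0:
    Xa = np.array(X).reshape(-1, 1)
    gp = GaussianProcessRegressor().fit(Xa, forward_model(Xa))
    _, std = gp.predict(candidates, return_std=True)
    X.append(float(candidates[np.argmax(std), 0]))  # greedy: max uncertainty
    budget -= 1                                     # unit cost per evaluation
```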
Respiratory viral infections (RVIs) are common reasons for healthcare consultations. The inpatient management of RVIs consumes significant resources. From 2009 to 2014, we assessed the costs of RVI management in 4776 hospitalized children aged 0–18 years participating in a quality improvement program, in which all patients with influenza-like illness (ILI) underwent virologic testing at the National Reference Centre, followed by detailed recording of their clinical course. The direct (medical or non-medical) and indirect costs of inpatient management outside the ICU ('non-ICU') versus management requiring ICU care ('ICU') added up to EUR 2767.14 (non-ICU) vs. EUR 29,941.71 (ICU) for influenza, EUR 2713.14 (non-ICU) vs. EUR 16,951.06 (ICU) for RSV infections, and EUR 2767.33 (non-ICU) vs. EUR 14,394.02 (ICU) for human rhinovirus (hRV) infections. Non-ICU inpatient costs were similar for all eight RVIs studied: influenza, RSV, hRV, adenovirus (hAdV), metapneumovirus (hMPV), parainfluenza virus (hPIV), bocavirus (hBoV), and seasonal coronavirus (hCoV) infections. ICU costs for influenza, however, exceeded those of all other RVIs. At the time of the study, influenza was the only RVI with antiviral treatment options available for children, but only 9.8% of influenza patients (non-ICU) and 1.5% of ICU patients with influenza received antivirals; only 2.9% were vaccinated. Future studies should investigate the economic impact of treatment and prevention of influenza, COVID-19, and RSV post vaccine introduction.
Derivative-based iterative methods for nonlinearly constrained non-convex optimization usually share common algorithmic components, such as strategies for computing a descent direction and mechanisms that promote global convergence. Based on this observation, we introduce an abstract framework based on four common ingredients that describes most derivative-based iterative methods and unifies their workflows. We then present Uno, a modular C++ solver that implements our abstract framework and allows the automatic generation of various strategy combinations with no programming effort from the user. Uno is meant to (1) organize mathematical optimization strategies into a coherent hierarchy; (2) offer a wide range of efficient and robust methods that can be compared for a given instance; (3) enable researchers to experiment with novel optimization strategies; and (4) reduce the cost of development and maintenance of multiple optimization solvers. Uno's software design allows users to compose new customized solvers for emerging optimization areas such as robust optimization or optimization problems with complementarity constraints, while building on reliable nonlinear optimization techniques. We demonstrate that Uno is highly competitive against state-of-the-art solvers filterSQP, IPOPT, SNOPT, MINOS, LANCELOT, LOQO, and CONOPT on a subset of 429 small problems from the CUTEst collection. Uno is available as open-source software under the MIT license at https://github.com/cvanaret/Uno .
Modelling luminescent coupling in multi-junction solar cells: perovskite silicon tandem case study
(2024)
Machine-learning driven design of metasurfaces: learn the physics and not the objective function
(2024)
Analytical approximations of the macroscopic behavior of agent-based models (e.g. via mean-field theory) often introduce a significant error, especially in the transient phase. For an example model, the continuous-time noisy voter model, we instead use two data-driven approaches to learn the evolution of collective variables. The first approach utilizes the SINDy method to approximate the macroscopic dynamics without prior knowledge, but proved not particularly robust. The second approach employs an informed learning strategy that includes knowledge about the agent-based model. Both approaches exhibit a considerably smaller error than the conventional analytical approximation.
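The first approach can be sketched with the pysindy library; the trajectory, threshold, and library degree below are illustrative stand-ins, not the paper's actual data or settings.

```python
# Sketch of the SINDy approach: learn an ODE for a collective variable
# m(t) (e.g., the average opinion of the noisy voter model) directly
# from a simulated trajectory. Data and parameters are illustrative.
import numpy as np
import pysindy as ps

dt = 0.01
t = np.arange(0, 50, dt)
m = np.tanh(0.5 * t) + 0.01 * np.random.randn(len(t))  # stand-in trajectory

model = ps.SINDy(optimizer=ps.STLSQ(threshold=0.05),
                 feature_library=ps.PolynomialLibrary(degree=3))
model.fit(m.reshape(-1, 1), t=dt)
model.print()   # prints the identified right-hand side, e.g. m' = a + b*m + ...
```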
How many mutually non-attacking queens can be placed on a d-dimensional chessboard of size n? The n-queens problem in higher dimensions is a generalization of the well-known n-queens problem. We provide a comprehensive overview of theoretical results, bounds, solution methods, and the problem's connections to topics in discrete optimization and combinatorics. We present an integer programming formulation of the n-queens problem in higher dimensions and several strengthenings through additional valid inequalities. Compared to recent benchmarks, we achieve speedups in computational time of between 15x and 70x across all instances of the integer programs. Our computational results yield optimality certificates for several large instances. Breaking additional, previously unsolved instances with the proposed methods is likely possible. On the primal side, we further discuss heuristic approaches to constructing solutions that turn out to be optimal when compared to the IP. We conclude with preliminary results on the number and density of the solutions.
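A hedged sketch of the basic integer program (without the paper's valid-inequality strengthenings), using PuLP: one binary variable per cell and an at-most-one constraint per queen line, where attack directions are the nonzero vectors in {-1, 0, 1}^d.

```python
# Basic IP sketch for the d-dimensional n-queens problem: maximize the
# number of queens with at most one queen per attack line. The paper's
# additional valid inequalities are not included.
import itertools
import pulp

n, d = 5, 3
cells = list(itertools.product(range(n), repeat=d))
dirs = [v for v in itertools.product((-1, 0, 1), repeat=d) if any(v)]
dirs = dirs[:len(dirs) // 2]   # one representative per +/- direction pair

prob = pulp.LpProblem("n_queens_d", pulp.LpMaximize)
x = pulp.LpVariable.dicts("x", cells, cat="Binary")
prob += pulp.lpSum(x[c] for c in cells)

for c in cells:
    for v in dirs:
        prev = tuple(ci - vi for ci, vi in zip(c, v))
        if all(0 <= p < n for p in prev):
            continue                      # only start lines at their first cell
        line, cur = [], c
        while all(0 <= ci < n for ci in cur):
            line.append(cur)
            cur = tuple(ci + vi for ci, vi in zip(cur, v))
        if len(line) > 1:
            prob += pulp.lpSum(x[q] for q in line) <= 1   # no mutual attacks

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(int(pulp.value(prob.objective)), "queens placed")
```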
Any sports competition needs a timetable, specifying when and where teams meet each other. The recent International Timetabling Competition (ITC2021) on sports timetabling showed that, although it is possible to develop general algorithms, the performance of each algorithm varies considerably over the problem instances. This paper provides a problem type analysis for sports timetabling, resulting in powerful insights into the strengths and weaknesses of eight state-of-the-art algorithms. Based on machine learning techniques, we propose an algorithm selection system that predicts which algorithm is likely to perform best based on the type of competition and constraints being used (i.e., the problem type) in a given sports timetabling problem instance. Furthermore, we visualize how the problem type relates to algorithm performance, providing insights and possibilities to further enhance several algorithms. Finally, we assess the empirical hardness of the instances. Our results are based on large computational experiments involving about 50 years of CPU time on more than 500 newly generated problem instances.
The Jacobi set of a bivariate scalar field is the set of points where the gradients of the two constituent scalar fields align with each other. It captures the regions of topological changes in the bivariate field. The Jacobi set is a bivariate analog of critical points, and may correspond to features of interest. In the specific case of time-varying fields and when one of the scalar fields is time, the Jacobi set corresponds to temporal tracks of critical points, and serves as a feature-tracking graph. The Jacobi set of a bivariate field or a time-varying scalar field is complex, resulting in cluttered visualizations that are difficult to analyze. This paper addresses the problem of Jacobi set simplification. Specifically, we use the time-varying scalar field scenario to introduce a method that computes a reduced Jacobi set. The method is based on a stability measure called robustness that was originally developed for vector fields and helps capture the structural stability of critical points. We also present a mathematical analysis for the method, and describe an implementation for 2D time-varying scalar fields. Applications to both synthetic and real-world datasets demonstrate the effectiveness of the method for tracking features.
Computing an optimal cycle in a given homology class, also referred to as the homology localization problem, is known to be an NP-hard problem in general. Furthermore, there is currently no known optimality criterion that localizes classes geometrically and admits a stability property under the setting of persistent homology. We present a geometric optimization of the cycles that is computable in polynomial time and is stable in an approximate sense. Tailoring our search criterion to different settings, we obtain various optimization problems such as optimal homologous cycle, minimum homology basis, and minimum persistent homology basis. In practice, the (trivial) exact algorithm is computationally expensive despite having a worst-case polynomial runtime. Therefore, we design approximation algorithms for the above problems and study their performance experimentally. These algorithms have reasonable runtimes for moderately sized datasets, and the cycles computed by these algorithms are consistently of high quality, as demonstrated via experiments on multiple datasets.
The Bay of Bengal (BoB) has maintained its salinity distribution over the years despite a continuous inflow of fresh water through rivers on the northern coast, which is capable of diluting the salinity. This can be attributed to the cyclic flow of high salinity water (>= 35 psu) coming from the Arabian Sea and entering the BoB from the south, which moves northward and mixes with this fresh water. The movement of this high salinity water has been studied and analyzed in previous work (Singh et al., 2022). This paper extends the computational methods and analysis of salinity movement. Specifically, we introduce an advection-based feature definition that represents the movement of high salinity water, and we describe algorithms to track its evolution over time. This method allows us to trace the movement of high salinity water caused by ocean currents. The method is validated via comparison with established observations on the flow of high salinity water in the BoB, including its entry from the Arabian Sea and its movement near Sri Lanka. Further, the visual analysis and tracking framework enables us to compare our results with previous work and to analyze the contribution of advection to salinity transport.
Studying neural mechanisms in complementary model organisms from different ecological niches within the same animal class can advance comparative brain analysis at the cellular level. To pursue this direction, we developed a unified brain atlas platform and specialized tools that allowed us to quantitatively compare neural structures in two teleost larvae, medaka (Oryzias latipes) and zebrafish (Danio rerio). Leveraging this quantitative approach, we found that most brain regions are similar, but some subpopulations are unique to each species. Specifically, we confirmed the existence of a clear dorsal pallial region in the telencephalon of medaka that is lacking in zebrafish. Further, our approach allows for the extraction of differentially expressed genes in both species and for the quantitative comparison of neural activity at cellular resolution. The web-based and interactive nature of this atlas platform will facilitate the teleost community's research, and its easy extensibility will encourage contributions to its continuous expansion.
For some industries, such as the cement industry, switching to a carbon-neutral production process is impossible. They must rely on carbon capture, utilization and storage (CCUS) technologies to reduce the inevitable carbon dioxide (CO2) emissions of their production processes. For continuously transporting large amounts of CO2, a pipeline network is the most effective solution; however, building such a network is expensive. Therefore, minimizing the cost of the pipelines to be built is extremely important to make the operation financially feasible. In this context, we investigate the problem of finding optimal pipeline diameters, from a discrete set of diameters, for a tree-shaped network transporting captured CO2 from multiple sources to a single sink. The general problem of optimizing arc capacities in potential-based fluid networks is already a challenging mixed-integer nonlinear optimization problem. The problem becomes even more complex when adding the highly sensitive nonlinear behavior of CO2 with respect to temperature and pressure changes. We propose an iterative algorithm splitting the problem into two parts: (a) the pipe-sizing problem under a fixed supply scenario and temperature distribution, and (b) the thermophysical modeling, including mixing effects, the Joule-Thomson effect, and heat exchange with the surrounding environment. We demonstrate the effectiveness of our approach by applying our algorithm to a real-world network planning problem for a CO2 network in Western Germany. Further, we show the robustness of the algorithm by solving a large, artificially created set of network instances.
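A schematic, runnable toy of the two-part iteration: alternate between (a) discrete pipe sizing under fixed temperatures and (b) a thermophysical update, until the diameter assignment reaches a fixed point. Both subproblems are drastically simplified stand-ins for the models described above; all data and formulas are invented.

```python
# Toy of the proposed iteration: (a) pick the smallest feasible pipe
# diameter per edge at fixed temperatures, (b) update temperatures from
# the sizing (standing in for mixing/Joule-Thomson/heat-exchange
# modeling), repeat until the sizing stabilizes.
edges = {"A-sink": 8.0, "B-sink": 5.0}       # edge -> CO2 mass flow (kg/s)
diameters = [0.2, 0.3, 0.4, 0.5]             # available pipe diameters (m)

def capacity(dia, temp):
    # Toy capacity model: larger pipes carry more, warmer CO2 carries less.
    return 60.0 * dia**2 * (1.0 - 0.005 * temp)

def solve_pipe_sizing(temps):
    # Stand-in for the MINLP of part (a): cheapest feasible diameter per edge.
    return {e: min(d for d in diameters if capacity(d, temps[e]) >= flow)
            for e, flow in edges.items()}

def update_thermophysics(sizing):
    # Stand-in for part (b): narrower pipes cool the gas more strongly here.
    return {e: 15.0 - 10.0 * (0.5 - dia) for e, dia in sizing.items()}

temps = {e: 15.0 for e in edges}             # initial temperature guess (deg C)
sizing = None
for _ in range(20):                          # bounded alternation
    new_sizing = solve_pipe_sizing(temps)
    if new_sizing == sizing:                 # fixed point: diameters stable
        break
    sizing, temps = new_sizing, update_thermophysics(new_sizing)
print(sizing)
```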
The landscape of applications and subroutines relying on shortest path computations continues to grow steadily. This growth is driven by the undeniable success of shortest path algorithms in theory and practice. It also introduces new challenges as the models become more complicated and assessing the optimality of paths becomes more involved. Hence, multiple recent publications in the field adapt existing labeling methods in an ad hoc fashion to their specific problem variant without considering the underlying general structure: they always deal with multi-criteria scenarios, and those criteria define different partial orders on the paths. In this paper, we introduce the partial order shortest path problem (POSP), a generalization of the multi-objective shortest path problem (MOSP) and in turn also of the classical shortest path problem. POSP captures the particular structure of many shortest path applications as special cases. In this generality, we study optimality conditions, or the lack thereof, depending on the objective functions' properties. Our final contribution is a comprehensive lookup table summarizing our findings and providing the reader with an easy way to choose among the most recent multi-criteria shortest path algorithms depending on their problem's weight structure. Examples range from time-dependent shortest path and bottleneck path problems to the electric vehicle shortest path problem with recharging and complex financial weight functions studied in the public transportation community. Our results hold for general digraphs and, therefore, surpass previous generalizations that were limited to acyclic graphs.
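To make the labeling idea concrete, here is a minimal sketch of a label-setting scheme parameterized by a dominance relation; Pareto dominance on (time, cost) vectors serves as one special case, and componentwise label extension is an assumption of this toy, not of the POSP framework in general.

```python
# Minimal label-setting sketch where dominance is a pluggable partial
# order on labels; Pareto dominance on (time, cost) is one instance.
import heapq

def posp_labels(graph, source, dominates):
    # graph: node -> list of (neighbor, weight_vector)
    labels = {v: [] for v in graph}              # nondominated labels per node
    pq = [((0, 0), source)]                      # start label at the source
    while pq:
        lab, v = heapq.heappop(pq)
        if any(dominates(other, lab) for other in labels[v]):
            continue                             # lab is dominated: discard
        labels[v] = [o for o in labels[v] if not dominates(lab, o)] + [lab]
        for w, wt in graph[v]:
            heapq.heappush(pq, (tuple(l + c for l, c in zip(lab, wt)), w))
    return labels

pareto = lambda a, b: all(x <= y for x, y in zip(a, b)) and a != b
g = {"s": [("a", (1, 3)), ("b", (2, 1))], "a": [("t", (1, 1))],
     "b": [("t", (2, 1))], "t": []}
print(posp_labels(g, "s", pareto)["t"])          # nondominated (time, cost) at t
```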
The investigation of energy transition paths toward a sustainable and decarbonized future under uncertainty is a critical aspect of contemporary energy planning and policy development. There are numerous methods for analysing uncertainties and sensitivities, and many studies on sustainable transformation paths, but combined applications to relevant use cases are lacking.
In this study, we investigate the sensitivity of energy transition paths to uncertainties in operational and investment costs of power plants in the metropolitan area of Berlin and its rural surroundings.
By employing the linear programming energy system model oemof-B3, we focus extensively on the system's energy technologies, such as wind turbines, photovoltaics, hydro and combustion plants, and energy storage. Greenhouse gas reduction and electrification rates per commodity are enforced through selected constraints.
Our research aims to discern how investments in energy production capacities are influenced by uncertainties of other energy technologies' investment and operational costs in the system. We apply a quantitative approach to investigate such interdependencies of cost variations and their impact on long-term energy planning. Thus, the analysis sheds light on the robustness of energy transition paths in the face of these uncertainties.
The region Berlin-Brandenburg serves as a case study and thus reflects the present space conflicts in meeting energy demands in urban and suburban areas and their rural surroundings. An electricity-intensive scenario is selected that assumes a 100 % reduction in greenhouse gas emissions by 2050. With the results of the case study, we show how our approach enables rural and metropolitan decision-makers to collaborate in achieving a sustainable energy supply.
Decision-making in long-term energy planning can be made more robust and flexible by acknowledging the identified sensitivities, enabling such regions to better navigate the challenges and uncertainties associated with sustainable energy planning.
Compressible flows are prevalent in natural and technological processes, particularly in the energy transition to renewable energy systems. Consequently, extensive research has focused on understanding the stability of tangential-velocity discontinuities in compressible media. Despite recent advancements that address industrial challenges more realistically, many studies have ignored the impact of viscous stress tensors, leading to inaccuracies in predicting interface stability. This omission becomes critical, especially in high Reynolds or low Mach number flows, where viscous forces dissipate kinetic energy across interfaces, affect total energy dissipation, and dampen flow instabilities. Our work is thus motivated to analyze the effect of viscous forces by including the viscous stress tensor terms in the equations of motion. Our results show that, when the effect of viscous forces is considered, the tangential-velocity discontinuity interface is destabilized over the entire range of Mach numbers.
We study a complex planning and scheduling problem arising from the build-up process of air cargo pallets and containers, collectively referred to as unit load devices (ULD), in which ULDs must be assigned to workstations for loading. Since air freight usually becomes available gradually along the planning horizon, ULD build-ups must be scheduled neither too early, to avoid underutilizing ULD capacity, nor too late, to avoid resource conflicts with other flights. Whenever possible, ULDs should be built up in batches, thereby giving ground handlers more freedom to rearrange cargo and utilize the ULD's capacity efficiently. The resulting scheduling problem has an intricate cost function and produces large time-expanded models, especially for longer planning horizons. We propose a logic-based Benders decomposition approach that assigns batches to time intervals and workstations in the master problem, while the actual schedule is decided in a subproblem. By choosing appropriate intervals, the subproblem becomes a feasibility problem that decomposes over the workstations. Additionally, the similarity of many batches is exploited by a strengthening procedure for no-good cuts. We benchmark our approach against a time-expanded MIP formulation from the literature on a publicly available data set. It solves 15% more instances to optimality and decreases run times by more than 50% in the geometric mean. This improvement is especially pronounced for longer planning horizons of up to one week, where the Benders approach solves over 50% more instances than the baseline.
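The decomposition can be caricatured as follows: a master problem assigns batches to (interval, workstation) slots, a feasibility subproblem checks each workstation, and violated patterns are excluded via no-good cuts. Data, the capacity-style feasibility rule, and the PuLP formulation are invented for illustration; the strengthening procedure is omitted.

```python
# Compact toy of a logic-based Benders loop: master assigns batches to
# (interval, workstation) slots; a per-slot capacity check stands in
# for the scheduling subproblem; infeasible patterns become no-good cuts.
import itertools, pulp

batches = ["b1", "b2", "b3"]
slots = list(itertools.product(["t1", "t2"], ["w1", "w2"]))  # (interval, station)
work = {"b1": 3, "b2": 2, "b3": 2}        # processing hours per batch
cap = 4                                   # hours available per slot
cuts = []

while True:
    m = pulp.LpProblem("master", pulp.LpMinimize)
    x = pulp.LpVariable.dicts("x", (batches, slots), cat="Binary")
    m += pulp.lpSum(x[b][s] for b in batches for s in slots)  # dummy objective
    for b in batches:
        m += pulp.lpSum(x[b][s] for s in slots) == 1          # assign every batch
    for assign in cuts:                                       # no-good cuts
        m += pulp.lpSum(x[b][s] for b, s in assign) <= len(assign) - 1
    m.solve(pulp.PULP_CBC_CMD(msg=False))
    sol = [(b, s) for b in batches for s in slots if x[b][s].value() > 0.5]
    bad = [s for s in slots if sum(work[b] for b, t in sol if t == s) > cap]
    if not bad:
        break                             # subproblem feasible: done
    cuts.append([(b, s) for b, s in sol if s in bad])  # cut off this pattern
print(sol)
```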
The imperative to decarbonize energy systems has intensified the need for efficient transformations within the heating sector, with a particular focus on district heating networks. This study addresses this challenge by proposing a comprehensive optimization approach evaluated on the district heating network of the Märkisches Viertel of Berlin. Our objective is to simultaneously optimize heat production with three targets: minimizing costs, minimizing CO2 emissions, and maximizing heat generation from Combined Heat and Power (CHP) plants for enhanced efficiency.
To tackle this optimization problem, we employ a Mixed-Integer Linear Program (MILP) that encompasses the conversion of various fuels into heat and power, integration with relevant markets, and technical constraints on power plant operation. These constraints include startup and minimum downtime, activation costs, and storage limits. The ultimate goal is to delineate the Pareto front, representing the optimal trade-offs between the three targets. We evaluate variants of the ε-constraint algorithm for their effectiveness in coordinating these objectives, focusing simultaneously on the quality of the estimated Pareto front and on computational efficiency. One algorithm explores solutions on an evenly spaced grid in the objective space, while another dynamically adjusts the grid based on identified solutions. Initial findings highlight the strengths and limitations of each algorithm, providing guidance on algorithm selection depending on desired outcomes and computational constraints.
Our study emphasizes that the optimal choice of algorithm hinges on the density and distribution of solutions in the feasible space. Whether solutions are clustered or evenly distributed significantly influences algorithm performance. These insights contribute to a nuanced understanding of algorithm selection for multi-objective multi-energy system optimization, offering valuable guidance for future research and practical applications for planning sustainable district heating networks.
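A bare-bones sketch of the evenly spaced ε-constraint scheme, reduced to two of the three objectives (cost vs. CO2 emissions) and stripped of all unit-commitment detail; the plant data below are invented.

```python
# Epsilon-constraint sweep over an emissions cap: each grid point yields
# one candidate point on the cost/CO2 Pareto front. Data are invented.
import pulp

plants = {"chp": (30, 0.4), "gas": (50, 0.2), "boiler": (20, 0.6)}  # (EUR, tCO2)/MWh
demand = 100.0

def min_cost(eps):
    m = pulp.LpProblem("heat", pulp.LpMinimize)
    q = pulp.LpVariable.dicts("q", plants, lowBound=0)
    m += pulp.lpSum(cost * q[p] for p, (cost, _) in plants.items())
    m += pulp.lpSum(q[p] for p in plants) == demand            # meet heat demand
    m += pulp.lpSum(em * q[p] for p, (_, em) in plants.items()) <= eps
    m.solve(pulp.PULP_CBC_CMD(msg=False))
    return pulp.value(m.objective) if pulp.LpStatus[m.status] == "Optimal" else None

for eps in [20, 30, 40, 50, 60]:          # evenly spaced emission caps
    print(eps, min_cost(eps))             # one Pareto candidate per grid point
```

The dynamically adjusted variant described above would place the next cap based on the solutions already found rather than on this fixed grid.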
Modeling-Simulation-Optimization workflows play a fundamental role in applied mathematics. The Mathematical Research Data Initiative, MaRDI, responded to this by developing a FAIR and machine-interpretable template for the comprehensive documentation of such workflows. MaRDMO, a plugin for the Research Data Management Organiser, enables scientists from diverse fields to document and publish their workflows on the MaRDI Portal seamlessly using the MaRDI template. Central to these workflows are mathematical models. MaRDI addresses them with the MathModDB ontology, offering a structured formal model description. Here, we showcase the interaction between MaRDMO and the MathModDB Knowledge Graph through an algebraic modeling workflow from the Digital Humanities. This demonstration underscores the versatility of both services beyond their original numerical domain.
At present, data management plans (DMPs) are still often perceived as mere documents for funding agencies providing clarity on how research data will be handled during a funded project, but are not usually actively involved in the processes. However, they contain a great deal of information that can be shared automatically to facilitate active research data management (RDM) by providing metadata to research infrastructures and supporting communication between all involved stakeholders. This position paper brings together a number of ideas developed and collected during interdisciplinary workshops of the Data Management Planning Working Group (infra-dmp), which is part of the section Common Infrastructures of the National Research Data Infrastructure (NFDI) in Germany. We present our vision of a possible future role of DMPs, templates, and tools in the upcoming NFDI service architecture.
In applied mathematics and related disciplines, the modeling-simulation-optimization workflow is a prominent scheme, with mathematical models and numerical algorithms playing a crucial role. For these types of mathematical research data, the Mathematical Research Data Initiative has developed, merged and implemented ontologies and knowledge graphs. This contributes to making mathematical research data FAIR by introducing semantic technology and documenting the mathematical foundations accordingly. Using the concrete example of microfracture analysis of porous media, it is shown how the knowledge of the underlying mathematical model and the corresponding numerical algorithms for its solution can be represented by the ontologies.
Ontologies and knowledge graphs for mathematical algorithms and models, developed by the Mathematical Research Data Initiative, are presented. These enable FAIR data handling in mathematics and the applied disciplines. Moreover, challenges of harmonization during ontology development are discussed.
MaRDMO Plugin
(2023)
MaRDMO, a plugin for the Research Data Management Organiser, was developed in the Mathematical Research Data Initiative to document interdisciplinary workflows using a standardised scheme. Interdisciplinary workflows recorded this way are published directly on the MaRDI portal. In addition, central information is integrated into the MaRDI knowledge graph. Next to the documentation, MaRDMO offers the possibility to retrieve existing interdisciplinary workflows from the MaRDI Knowledge Graph to allow the reproduction of the initial work and to provide scientists with new research impulses. Thus, MaRDMO creates a community-driven knowledge loop that could help to overcome the replication crisis.
Research data are crucial in mathematics and all scientific disciplines, as they form the foundation for empirical evidence by enabling the validation and reproducibility of scientific findings. Mathematical research data (MathRD) have become vast and complex, and their interdisciplinary potential and abstract nature make them ubiquitous in various scientific fields. The volume of data and the velocity of its creation are rapidly increasing due to advancements in data science and computing power. This complexity extends to other disciplines, resulting in diverse research data and computational models. Thus, proper handling of research data is crucial both within mathematics and for its manifold connections and exchange with other disciplines. The National Research Data Infrastructure (NFDI), funded by the federal and state governments of Germany, consists of discipline-oriented consortia, including the Mathematical Research Data Initiative (MaRDI). MaRDI has been established to develop services, guidelines and outreach measures for all aspects of MathRD, and thus support the mathematical research community. Research data management (RDM) should be an integral component of every scientific project, and is becoming a mandatory component of grants with funding bodies such as the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation). At the core of RDM are the FAIR (Findable, Accessible, Interoperable, and Reusable) principles. This document aims to guide mathematicians and researchers from related disciplines who create RDM plans. It highlights the benefits and opportunities of RDM in mathematics and interdisciplinary studies, showcases examples of diverse MathRD, and suggests technical solutions that meet the requirements of funding agencies with specific examples. The document is regularly updated to reflect the latest developments within the mathematical community represented by MaRDI.
Voronoi Graph - Improved raycasting and integration schemes for high dimensional Voronoi diagrams
(2024)
The computation of Voronoi diagrams, or their dual Delaunay triangulations, is difficult in high dimensions. In a recent publication, Polianskii and Pokorny propose an iterative randomized algorithm facilitating the approximation of Voronoi tessellations in high dimensions. In this paper, we provide an improved vertex search method that is not only exact but even faster than the bisection method that was previously recommended. Building on this, we also provide a depth-first graph-traversal algorithm which allows us to compute the entire Voronoi diagram. This enables us to compare the outcomes with those of classical algorithms like Qhull, which we either match or marginally beat in terms of computation time. We furthermore show how the raycasting algorithm naturally lends itself to a Monte Carlo approximation for the volume and boundary integrals of the Voronoi cells, both of which are of importance for finite volume methods. We compare the Monte Carlo methods to the exact polygonal integration, as well as a hybrid approximation scheme.
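The naive Monte Carlo variant of the volume estimate can be stated in a few lines (sampling a bounding box and assigning each sample to its nearest site); the paper's estimator instead builds on raycasting from the sites, so this is only a conceptual illustration.

```python
# Naive Monte Carlo estimate of Voronoi cell volumes: sample uniformly
# in a bounding box and count how many samples are closest to each site.
import numpy as np

rng = np.random.default_rng(0)
sites = rng.uniform(0, 1, (20, 5))        # 20 sites in 5 dimensions
samples = rng.uniform(0, 1, (50_000, 5))  # uniform samples in the unit box

d = np.linalg.norm(samples[:, None, :] - sites[None, :, :], axis=-1)
nearest = d.argmin(axis=1)                # Voronoi cell of each sample
vol = np.bincount(nearest, minlength=len(sites)) / len(samples)
print(vol[:5])                            # cell volumes, relative to the box
```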
Introduction and application of a new approach for model-based optical bidirectional measurements
(2024)
The Koopman operator has entered and transformed many research areas in recent years. Although the underlying concept of representing highly nonlinear dynamical systems by infinite-dimensional linear operators has been known for a long time, the availability of large data sets and efficient machine learning algorithms for estimating the Koopman operator from data make this framework extremely powerful and popular. Koopman operator theory allows us to gain insights into the characteristic global properties of a system without requiring detailed mathematical models. We will show how these methods can also be used to analyze complex networks and highlight relationships between Koopman operators and graph Laplacians.
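A standard data-driven route to such Koopman approximations is extended dynamic mode decomposition (EDMD); the following sketch, on a toy nonlinear map with a polynomial dictionary, is illustrative and not tied to the network setting discussed above.

```python
# EDMD sketch: lift snapshot pairs with a dictionary of observables and
# solve a least-squares problem for the finite-dimensional operator.
import numpy as np

rng = np.random.default_rng(0)
x = np.zeros(1000)
for k in range(999):                      # trajectory of a toy nonlinear map
    x[k + 1] = 0.9 * x[k] - 0.1 * x[k]**3 + 0.01 * rng.normal()

psi = lambda s: np.vstack([np.ones_like(s), s, s**2, s**3])  # dictionary
PX, PY = psi(x[:-1]), psi(x[1:])          # lifted snapshot pairs (columns)
K = PY @ np.linalg.pinv(PX)               # EDMD estimate: PY ~= K @ PX
print(np.linalg.eigvals(K))               # approximate Koopman eigenvalues
```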
This paper is concerned with collective variables, or reaction coordinates, that map a discrete-in-time Markov process X_n in R^d to a (much) smaller dimension k ≪ d. We define the effective dynamics under a given collective variable map ξ as the best Markovian representation of X_n under ξ. The novelty of the paper is that it gives strict criteria for selecting optimal collective variables via the properties of the effective dynamics. In particular, we show that the transition density of the effective dynamics of the optimal collective variable solves a relative entropy minimization problem from a certain family of densities to the transition density of X_n. We also show that many transfer operator-based data-driven numerical approaches essentially learn quantities of the effective dynamics. Furthermore, we obtain various error estimates for the effective dynamics in approximating dominant timescales/eigenvalues and transition rates of the original process X_n, and we show how optimal collective variables minimize these errors. Our results contribute to the development of theoretical tools for the understanding of complex dynamical systems, e.g. molecular kinetics, on large timescales. These results shed light on the relations among existing data-driven numerical approaches for identifying good collective variables, and they also motivate the development of new methods.
The multi-grid reaction-diffusion master equation (mgRDME) provides a generalization of stochastic compartment-based reaction-diffusion modelling described by the standard reaction-diffusion master equation (RDME). By enabling different resolutions on lattices for biochemical species with different diffusion constants, the mgRDME approach improves both accuracy and efficiency of compartment-based reaction-diffusion simulations. The mgRDME framework is examined through its application to morphogen gradient formation in stochastic reaction-diffusion scenarios, using both an analytically tractable first-order reaction network and a model with a second-order reaction. The results obtained by the mgRDME modelling are compared with the standard RDME model and with the (more detailed) particle-based Brownian dynamics simulations. The dependence of error and numerical cost on the compartment sizes is defined and investigated through a multi-objective optimization problem.
Existing planning approaches for onshore wind farm siting and grid integration often fail to achieve minimum-cost solutions or to respect social and environmental considerations. In this paper, we develop an exact approach for the integrated layout and cable routing problem of onshore wind farm planning using the Quota Steiner tree problem. Applying a novel transformation on a known directed cut formulation, reduction techniques, and heuristics, we design an exact solver that makes large problem instances solvable and outperforms generic MIP solvers. In selected regions of Germany, the trade-offs between minimizing costs and landscape impact of onshore wind farm siting are investigated. Although our case studies show large trade-offs between the objective criteria of cost and landscape impact, small burdens on one criterion can significantly improve the other. In addition, we demonstrate that, contrary to many approaches for exclusive turbine siting, grid integration must be optimized simultaneously to avoid excessive costs or landscape impacts in the course of a wind farm project. Our novel problem formulation and the developed solver can assist planners in decision-making and help optimize wind farms in large regions in the future.
AI-guided pipeline for protein–protein interaction drug discovery identifies a SARS-CoV-2 inhibitor
(2024)
Protein–protein interactions (PPIs) offer great opportunities to expand the druggable proteome and therapeutically tackle various diseases, but remain challenging targets for drug discovery. Here, we provide a comprehensive pipeline that combines experimental and computational tools to identify and validate PPI targets and perform early-stage drug discovery. We have developed a machine learning approach that prioritizes interactions by analyzing quantitative data from binary PPI assays or AlphaFold-Multimer predictions. Using the quantitative assay LuTHy together with our machine learning algorithm, we identified high-confidence interactions among SARS-CoV-2 proteins for which we predicted three-dimensional structures using AlphaFold-Multimer. We employed VirtualFlow to target the contact interface of the NSP10-NSP16 SARS-CoV-2 methyltransferase complex by ultra-large virtual drug screening. Thereby, we identified a compound that binds to NSP10, inhibits its interaction with NSP16, and disrupts both the methyltransferase activity of the complex and SARS-CoV-2 replication. Overall, this pipeline will help to prioritize PPI targets to accelerate the discovery of early-stage drug candidates targeting protein complexes and pathways.
We present a heuristic solution approach for the rolling stock rotation problem with predictive maintenance (RSRP-PdM). The task of this problem is to assign a sequence of trips to each of the vehicles and to schedule their maintenance such that all trips can be operated. Here, the health states of the vehicles are considered to be random variables distributed by a family of probability distribution functions, and the maintenance services should be scheduled based on the failure probability of the vehicles. The proposed algorithm first generates a solution by solving an integer linear program and then heuristically improves this solution by applying a local search procedure. For this purpose, the trips assigned to the vehicles are split up and recombined, whereby additional deadhead trips can be inserted between the partial assignments. Subsequently, the maintenance is scheduled by solving a shortest path problem in a state-expanded version of a space-time graph restricted to the trips of the individual vehicles. The solution approach is tested and evaluated on a set of test instances based on real-world timetables.
An Iterative Refinement Approach for the Rolling Stock Rotation Problem with Predictive Maintenance
(2024)
The rolling stock rotation problem with predictive maintenance (RSRP-PdM) involves the assignment of trips to a fleet of vehicles with integrated maintenance scheduling based on the predicted failure probability of the vehicles. These probabilities are determined by the health states of the vehicles, which are considered to be random variables distributed by a parameterized family of probability distribution functions. During the operation of the trips, the corresponding parameters get updated. In this article, we present a dual solution approach for RSRP-PdM and generalize a linear programming based lower bound for this problem to families of probability distribution functions with more than one parameter. For this purpose, we define a rounding function that allows for a consistent underestimation of the parameters and model the problem by a state-expanded event-graph in which the possible states are restricted to a discrete set. This induces a flow problem that is solved by an integer linear program. We show that the iterative refinement of the underlying discretization leads to solutions that converge from below to an optimal solution of the original instance. Thus, the linear relaxation of the considered integer linear program results in a lower bound for RSRP-PdM. Finally, we report on the results of computational experiments conducted on a library of test instances.
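The convergence mechanism (round-down evaluation as a consistent underestimate that improves under grid refinement) can be illustrated by an abstract toy; the cost function and states below are invented and do not reflect the RSRP-PdM model itself.

```python
# Abstract toy of the refinement idea: a cost that is nondecreasing in a
# degradation parameter is evaluated at rounded-down grid states, giving
# a lower bound that converges from below as the grid is refined.
states = [0.13, 0.42, 0.78]                       # true continuous health states
cost = lambda p: p ** 2 + 0.5 * p                 # nondecreasing in p

width = 0.25
while True:
    lower = sum(cost((p // width) * width) for p in states)   # round down
    exact = sum(cost(p) for p in states)
    if exact - lower <= 1e-4:
        break
    width /= 2                                    # refine the discretization
print(f"lower bound {lower:.5f} vs exact {exact:.5f}")
```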
Metadata quality fundamentally determines the usefulness and value of cultural heritage data. 'Good' metadata significantly increase the findability, interoperability, and usability of data. With regard to retrieval and discovery, to linking in the context of Linked Open Data, and to scholarly data mining, this quality depends substantially on the use of machine-readable controlled vocabularies. The present work investigates this quantitatively. The data basis is the metadata from Berlin museums aggregated in the Deutsche Digitale Bibliothek (approximately 1.2 million metadata objects in LIDO format).
Collaborative comparisons and combinations of epidemic models are used as policy-relevant evidence during epidemic outbreaks. In the process of collecting multiple model projections, such collaborations may gain or lose relevant information. Typically, modellers contribute a probabilistic summary at each time-step. We compared this to directly collecting simulated trajectories. We aimed to explore information on key epidemic quantities, ensemble uncertainty, and performance against data, investigating the potential to continuously gain information from a single cross-sectional collection of model results.
Methods: We compared July 2022 projections from the European COVID-19 Scenario Modelling Hub. Five modelling teams projected incidence in Belgium, the Netherlands, and Spain. We compared projections by incidence, peaks, and cumulative totals. We created a probabilistic ensemble drawn from all trajectories, and compared it to ensembles from a median across each model's quantiles, or from a linear opinion pool. We measured the predictive accuracy of individual trajectories against observations, using this in a weighted ensemble. We repeated this sequentially against increasing weeks of observed data. We evaluated these ensembles to reflect performance with varying observed data.
Results: By collecting modelled trajectories, we showed policy-relevant epidemic characteristics. Trajectories contained a right-skewed distribution well represented by an ensemble of trajectories or a linear opinion pool, but not by models' quantile intervals. Ensembles weighted by performance typically retained the range of plausible incidence over time, and in some cases narrowed this by excluding some epidemic shapes.
Conclusions: We observed several information gains from collecting modelled trajectories rather than quantile distributions, including the potential for continuously updated information from a single model collection. The value of information gains and losses may vary with each collaborative effort's aims, depending on the needs of projection users. Understanding the differing information potential of methods to collect model projections can support the accuracy, sustainability, and communication of collaborative infectious disease modelling efforts.
Data availability: All code and data are available on GitHub: https://github.com/covid19-forecast-hub-europe/aggregation-info-loss
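The two collection modes can be contrasted in a few lines of NumPy on synthetic data: quantiles pooled over all simulated trajectories versus a median across each model's own quantile bands.

```python
# Contrast of the two collection modes on synthetic data: pooled
# trajectory quantiles vs. a median across per-model quantile bands.
import numpy as np

rng = np.random.default_rng(1)
# 5 models x 100 trajectories x 12 weeks of simulated incidence
trajs = rng.lognormal(mean=rng.normal(3, 0.3, (5, 1, 1)), sigma=0.4,
                      size=(5, 100, 12))

pooled = np.quantile(trajs.reshape(-1, 12), [0.05, 0.5, 0.95], axis=0)
per_model = np.quantile(trajs, [0.05, 0.5, 0.95], axis=1)  # each model's bands
median_of_quantiles = np.median(per_model, axis=1)         # median across models

print(pooled[:, -1])               # pooled trajectory ensemble, final week
print(median_of_quantiles[:, -1])  # quantile-median ensemble, final week
```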
This paper proposes an almost feasible Sequential Linear Programming (afSLP) algorithm. In the first part, the practical limitations of previously proposed Feasible Sequential Linear Programming (FSLP) methods are discussed along with illustrative examples. Then, we present a generalization of FSLP based on a tolerance-tube method that addresses the shortcomings of FSLP. The proposed algorithm afSLP consists of two phases. Phase I starts from random infeasible points and iterates towards a relaxation of the feasible set. Once the tolerance-tube around the feasible set is reached, phase II is started and all future iterates are kept within the tolerance-tube. The novel method includes enhancements to the originally proposed tolerance-tube method that are necessary for global convergence. afSLP is shown to outperform FSLP and the state-of-the-art solver IPOPT on a SCARA robot optimization problem.
The decarbonization of the European energy system demands a rapid and comprehensive transformation while securing energy supplies at all times. Still, natural gas plays a crucial role in this process. Recent unexpected events forced drastic changes in gas routes throughout Europe. Therefore, operational-level analysis of the gas transport networks and technical capacities to cope with these transitions using unconventional scenarios has become essential.
Unfortunately, data limitations often hinder such analyses. To overcome this challenge, we propose a mathematical model-based scenario generator that enables operational analysis of the European gas network using open data. Our approach focuses on the consistent analysis of specific partitions of the gas transport network, whose network topology data is readily available. We generate reproducible and consistent node-based gas in/out-flow scenarios for these defined network partitions to enable feasibility analysis and data quality assessment.
Our proposed method is demonstrated through several applications that address the feasibility analysis and data quality assessment of the German gas transport network. By using open data and a mathematical modeling approach, our method allows for a more comprehensive understanding of the gas transport network's behavior and assists in decision-making during the transition to decarbonization.
For energy systems research, software models are a core element for analyzing scenarios. The research project UNSEEN aimed to compute a previously unattained number of model-based energy scenarios in order to better assess uncertainties, primarily using linear-optimization energy system models. To this end, extensive parameter variations were applied to energy scenarios, and the key methodological obstacle in this context was addressed: the computational tractability of the mathematical optimization problems to be solved. In the predecessor project BEAM-ME, the development and application of the open-source solver PIPS-IPM++ laid the foundation for the use of high-performance computing (HPC) to solve these models. In UNSEEN, this solver was the central component of a workflow for the generation, solution, and multi-criteria evaluation of energy scenarios, implemented on the high-performance computer JUWELS at Forschungszentrum Jülich. For the efficient generation and communication of model instances for mathematical optimization methods on HPC, a further workflow component was developed by GAMS Software GmbH: the scenario generator. The further development of solution algorithms for linear-optimization energy system models focused on mixed-integer optimization problems, which must be solved for modeling concrete infrastructures and measures for implementing the energy transition. The associated algorithm development work was led by Technische Universität Berlin, supported in the design and implementation of these methods by the Zuse Institute Berlin.
Optimized Sensing on Gold Nanoparticles Created by Graded-Layer Magnetron Sputtering and Annealing
(2024)
Task-adapted image reconstruction methods using end-to-end trainable neural networks (NNs) have been proposed to optimize reconstruction for subsequent processing tasks, such as segmentation. However, their training typically requires considerable hardware resources, and thus only relatively simple building blocks, e.g. U-Nets, are typically used, which, albeit powerful, do not integrate model-specific knowledge.
In this work, we extend an end-to-end trainable task-adapted image reconstruction method for a clinically realistic reconstruction and segmentation problem of bone and cartilage in 3D knee MRI by incorporating statistical shape models (SSMs). The SSMs model the prior information and help to regularize the segmentation maps as a final post-processing step.
We compare the proposed method to a state-of-the-art (SOTA) simultaneous multitask learning approach for image reconstruction and segmentation (MTL) and to a complex SSMs-informed segmentation pipeline (SIS).
Our experiments show that the combination of joint end-to-end training and SSMs to further regularize the segmentation maps obtained by MTL highly improves the results, especially in terms of mean and maximal surface errors.
In particular, we achieve the segmentation quality of SIS and, at the same time, a substantial model reduction that yields a five-fold decrease in model parameters and a computational speedup of an order of magnitude.
Remarkably, even for undersampling factors of up to R=8, the obtained segmentation maps are of comparable quality to those obtained by SIS from ground-truth images.
The SCIP Optimization Suite provides a collection of software packages for mathematical optimization, centered around the constraint integer programming framework SCIP. This report discusses the enhancements and extensions included in the SCIP Optimization Suite 9.0. The updates in SCIP 9.0 include improved symmetry handling, additions and improvements of nonlinear handlers and primal heuristics, a new cut generator and two new cut selection schemes, a new branching rule, a new LP interface, and several bug fixes. The SCIP Optimization Suite 9.0 also features new Rust and C++ interfaces for SCIP and a new Python interface for SoPlex, along with enhancements to existing interfaces. It further includes new and improved features in the LP solver SoPlex, the presolving library PaPILO, the parallel framework UG, the decomposition framework GCG, and the SCIP extension SCIP-SDP. These additions and enhancements have resulted in an overall performance improvement of SCIP in terms of solving time, number of nodes in the branch-and-bound tree, as well as the reliability of the solver.
We study the solution of the rolling stock rotation problem with predictive maintenance (RSRP-PdM) by an iterative refinement approach that is based on a state-expanded event-graph. In this graph, the states are parameters of a failure distribution, and paths correspond to vehicle rotations with associated health state approximations. An optimal set of paths including maintenance can be computed by solving an integer linear program. Afterwards, the graph is refined and the procedure repeated. An associated linear program gives rise to a lower bound that can be used to determine the solution quality. Computational results for six instances derived from real-world timetables of a German railway company are presented. The results show the effectiveness of the approach and the quality of the solutions.
Markov processes serve as foundational models in many scientific disciplines, such as molecular dynamics, and their simulation forms a common basis for analysis. While simulations produce useful trajectories, obtaining macroscopic information directly from microstate data presents significant challenges. This paper addresses this gap by introducing the concept of membership functions being the macrostates themselves. We derive equations for the holding times of these macrostates and demonstrate their consistency with the classical definition. Furthermore, we discuss the application of the ISOKANN method for learning these quantities from simulation data. In addition, we present a novel method for extracting transition paths based on the ISOKANN results and demonstrate its efficacy by applying it to simulations of the μ-opioid receptor. With this approach we provide a new perspective on analyzing the macroscopic behaviour of Markov systems.
Estimating the rate of rare conformational changes in molecular systems is one of the goals of molecular dynamics simulations. In the past few decades, much progress has been made on data-based approaches to this problem. In contrast, model-based methods, such as the Square Root Approximation (SqRA), derive these quantities directly from the potential energy functions. In this article, we demonstrate how the SqRA formalism naturally blends with the tensor structure obtained by coupling multiple systems, resulting in the tensor-based Square Root Approximation (tSqRA). It enables efficient treatment of high-dimensional systems using the SqRA and provides an algebraic expression of the impact of coupling energies between molecular subsystems. Based on the tSqRA, we also develop the projected rate estimation, a hybrid data-model-based algorithm that efficiently estimates the slowest rates for coupled systems. In addition, we investigate the possibility of integrating low-rank approximations within this framework to maximize the potential of the tSqRA.
Strong Branching (SB) is a cornerstone of all modern branching rules used in the Branch-and-Bound (BnB) algorithm, which is at the center of Mixed-Integer Programming solvers. In its full form, SB evaluates all variables to branch on and then selects the one producing the best relaxation, leading to small trees, but high runtimes. State-of-the-art branching rules therefore use SB with working limits to achieve both small enough trees and short run times. So far, these working limits have been established empirically. In this paper, we introduce a theoretical approach to guide how much SB to use at each node within the BnB. We first define an abstract stochastic tree model of the BnB algorithm where the geometric mean dual gains of all variables follow a given probability distribution. This model allows us to relate expected dual gains to tree sizes and explicitly compare the cost of sampling an additional SB candidate with the reward in expected tree size reduction. We then leverage the insight from the abstract model to design a new stopping criterion for SB, which fits a distribution to the dual gains and, at each node, dynamically continues or interrupts SB. This algorithm, which we refer to as Probabilistic Lookahead Strong Branching, improves both the tree size and runtime over MIPLIB instances, providing evidence that the method not only changes the amount of SB, but allocates it better.
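A hedged sketch of the dynamic stopping idea: fit a lognormal distribution to the dual gains sampled so far and stop once the estimated probability that the next candidate beats the incumbent falls below a threshold. The gains, threshold, and fitting choice below are placeholders, not the paper's calibrated tree-size model.

```python
# Illustrative stopping rule for strong branching: evaluate candidates
# one by one, fit a lognormal to observed dual gains, and stop when the
# chance of finding a better candidate no longer justifies sampling.
import math, random

random.seed(0)
gains = [random.lognormvariate(0, 0.5) for _ in range(50)]  # SB dual gains

def improvement_prob(sampled, best):
    # Fit a lognormal to the sampled gains and estimate P(next > best).
    logs = [math.log(g) for g in sampled]
    mu = sum(logs) / len(logs)
    sd = (sum((l - mu) ** 2 for l in logs) / len(logs)) ** 0.5 or 1e-9
    z = (math.log(best) - mu) / sd
    return 0.5 * math.erfc(z / math.sqrt(2))    # 1 - Phi(z)

best, threshold = 0.0, 0.05
for k, g in enumerate(gains, start=1):
    best = max(best, g)
    if k >= 3 and improvement_prob(gains[:k], best) < threshold:
        break                                    # stop strong branching here
print(f"stopped after {k} of {len(gains)} candidates, best gain {best:.3f}")
```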
It has been shown that any 9 by 9 Sudoku puzzle must contain at least 17 clues to have a unique solution. This paper investigates the more specific question: given a particular completed Sudoku grid, what is the minimum number of clues in any puzzle whose unique solution is the given grid? We call this problem the Minimum Sudoku Clue Problem (MSCP). We formulate MSCP as a binary bilevel linear program, present a class of globally valid inequalities, and provide a computational study on 50 MSCP instances of 9 by 9 Sudoku grids. Using a general bilevel solver, we solve 95% of instances to optimality, and show that the solution process benefits from the addition of a moderate amount of inequalities. Finally, we extend the proposed model to other combinatorial problems in which uniqueness of the solution is of interest.
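One way the uniqueness requirement can be cast as a binary bilevel program (a schematic sketch consistent with the abstract, not necessarily the authors' exact formulation): with y* the given grid, x_c indicating that cell c is kept as a clue, S the set of completed Sudoku grids, and d(y, y*) the number of cells in which y and y* differ,

    \min_{x \in \{0,1\}^{81}} \sum_{c=1}^{81} x_c
    \quad \text{s.t.} \quad
    \max_{y \in \mathcal{S},\; y_c = y^*_c \ \forall c:\, x_c = 1} d(y, y^*) \,=\, 0.

Since y* itself is always feasible for the follower, the constraint states exactly that no completion other than y* respects the selected clues, i.e., that the puzzle has a unique solution.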
The Fairness-Oriented Crew Rostering Problem (FCRP) considers the joint optimization of attractiveness and fairness in cyclic crew rostering. Like many problems in scheduling and logistics, the combinatorial complexity of cyclic rostering causes exact methods to fail for large-scale practical instances. In the case of the FCRP, this is accentuated by the additionally imposed fairness requirements. Hence, heuristic methods are necessary. We present a three-phase heuristic for the FCRP that combines column generation techniques with variable-depth neighborhood search. The heuristic exploits different mathematical formulations to find feasible solutions and to search for improvements. We apply our methodology to practical instances from Netherlands Railways (NS), the main passenger railway operator in the Netherlands. Our results show that the three-phase heuristic finds good solutions for most instances and outperforms a state-of-the-art commercial solver.
Source code and simulation results: Computing eigenfrequency sensitivities near exceptional points (2024)
Accelerated Riemannian optimization: Handling constraints with a prox to bound geometric penalties (2022)
In multipartite Bell scenarios, we study the nonlocality robustness of the Greenberger-Horne-Zeilinger (GHZ) state. When each party performs planar measurements forming a regular polygon, we exploit the symmetry of the resulting correlation tensor to drastically accelerate the computation of (i) a Bell inequality via Frank-Wolfe algorithms and (ii) the corresponding local bound. The Bell inequalities obtained are facets of the symmetrized local polytope, and they give the best-known upper bounds on the nonlocality robustness of the GHZ state for three to ten parties. Moreover, for four measurements per party, we generalize our facets and hence show, for any number of parties, an improvement on Mermin's inequality in terms of noise robustness. We also compute the detection efficiency of our inequalities and show that some give rise to the activation of nonlocality in star networks, a property previously shown only with an infinite number of measurements.
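The core computational step is easiest to see in a toy bipartite analogue (two parties and the CHSH correlations instead of the multipartite GHZ polygon setting, and with none of the symmetry exploitation that gives the paper its speed-up): Frank-Wolfe projects the noisy correlations onto the local polytope, whose vertices are the deterministic strategies; the linear minimization oracle is a plain enumeration here.

    import numpy as np
    from itertools import product

    # Deterministic strategies: each party fixes an outcome +-1 per setting.
    vertices = np.array([np.outer(a, b)
                         for a in product([-1, 1], repeat=2)
                         for b in product([-1, 1], repeat=2)], dtype=float)

    # Noisy two-qubit correlations in the CHSH setting, visibility v.
    v = 0.9
    target = v / np.sqrt(2) * np.array([[1.0, 1.0], [1.0, -1.0]])

    # Frank-Wolfe: minimize ||P - target||^2 over the local polytope.
    P = vertices[0].copy()
    for k in range(2000):
        grad = 2.0 * (P - target)
        scores = (vertices * grad).sum(axis=(1, 2))
        S = vertices[scores.argmin()]        # linear minimization oracle
        P += 2.0 / (k + 2.0) * (S - P)

    print("distance to local polytope:", np.linalg.norm(P - target))

A strictly positive distance certifies nonlocality, and the separating direction target − P provides a Bell functional whose local bound is its maximum over the deterministic vertices; the paper's contribution is making this machinery scale to many parties by symmetrizing the correlation tensor.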
It is well known that reformulating the original problem can be crucial for the performance of mixed-integer programming (MIP) solvers. To ensure correctness, all transformations must preserve the feasibility status and optimal value of the problem, but there is currently no established methodology to express and verify the equivalence of two mixed-integer programs. In this work, we take a first step in this direction by showing how the correctness of MIP presolve reductions on 0–1 integer linear programs can be certified by using (and suitably extending) the VeriPB tool for pseudo-Boolean proof logging. Our experimental evaluation on both decision and optimization instances demonstrates the computational viability of the approach and leads to suggestions for future revisions of the proof format that will help to reduce the verbosity of the certificates and to accelerate the certification and verification process further.
This paper is concerned with the exact solution of mixed-integer programs (MIPs) over the rational numbers, i.e., without any roundoff errors or error tolerances. Here, one computational bottleneck that should be avoided whenever possible is employing large-scale symbolic computations. Instead, it is often possible to use safe directed rounding methods, e.g., to generate provably correct dual bounds. In this work, we continue to leverage this paradigm and extend an exact branch-and-bound framework by separation routines for safe cutting planes, based on the approach first introduced by Cook, Dash, Fukasawa, and Goycoolea in 2009 [INFORMS J. Comput., 21 (2009), pp. 641–649]. Constraints are aggregated safely using approximate dual multipliers from an LP solve, followed by mixed-integer rounding to generate provably valid, though slightly weaker, inequalities. We generalize this approach to problem data that is not representable in floating-point arithmetic, add routines for controlling the encoding length of the resulting cutting planes, and show how these cutting planes can be verified according to the VIPR certificate standard. Furthermore, we analyze the performance impact of these cutting planes in the context of an exact MIP framework, showing that we can solve 21.5% more instances to exact optimality and reduce solving times by 26.8% on the MIPLIB 2017 benchmark test set.
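For the integer part of a row, the mixed-integer rounding step has a compact closed form; the sketch below (pure-integer ≤-constraint, plain floating-point arithmetic, no continuous variables, and none of the safe directed rounding that the exact framework requires) shows the kind of inequality being generated.

    import math

    def mir_cut(a, b):
        """MIR cut for sum_j a_j * x_j <= b with x_j integer and >= 0.

        Returns (c, d) such that sum_j c_j * x_j <= d is valid for all
        integer feasible points."""
        f0 = b - math.floor(b)
        if f0 == 0.0:
            return [math.floor(aj) for aj in a], math.floor(b)  # CG rounding
        c = []
        for aj in a:
            fj = aj - math.floor(aj)
            c.append(math.floor(aj) + max(fj - f0, 0.0) / (1.0 - f0))
        return c, math.floor(b)

    print(mir_cut([1.6, 1.0], 1.5))   # -> ([1.2, 1.0], 1)

In the exact setting of the paper, the aggregation multipliers come from an approximate LP solve, the arithmetic is replaced by safe directed rounding so the cut remains valid for the rational problem, and the resulting cuts can be checked against a VIPR certificate.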
We present EPR-Net, a novel and effective deep learning approach that tackles a crucial challenge in biophysics: constructing potential landscapes for high-dimensional non-equilibrium steady-state (NESS) systems. EPR-Net leverages the mathematical fact that the desired negative potential gradient is simply the orthogonal projection of the driving force of the underlying dynamics in a weighted inner-product space. Remarkably, our loss function has an intimate connection with the steady entropy production rate (EPR), enabling simultaneous landscape construction and EPR estimation. We introduce an enhanced learning strategy for systems with small noise, and we extend our framework to include dimensionality reduction and state-dependent diffusion coefficients in a unified fashion. Comparative evaluations on benchmark problems demonstrate the superior accuracy, effectiveness, and robustness of EPR-Net compared to existing methods. We apply our approach to challenging biophysical problems, such as an 8D limit cycle and a 52D multi-stability problem, where it provides accurate solutions and interesting insights into the constructed landscapes. With its versatility and power, EPR-Net offers a promising solution for diverse landscape construction problems in biophysics.
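The projection idea fits in a few lines (a toy 2D force with a rotational non-gradient part, an unweighted L2 inner product instead of the paper's weighted one, and uniform samples standing in for NESS samples):

    import torch

    def force(x):
        """Toy driving force: gradient part of (|x|^2 - 1)^2 plus a
        divergence-free rotational part."""
        grad_part = -4.0 * x * (x.square().sum(dim=1, keepdim=True) - 1.0)
        rot_part = torch.stack([-x[:, 1], x[:, 0]], dim=1)
        return grad_part + rot_part

    U = torch.nn.Sequential(
        torch.nn.Linear(2, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1)
    )
    opt = torch.optim.Adam(U.parameters(), lr=1e-3)

    for step in range(2000):
        x = 2.0 * torch.rand(256, 2) - 1.0     # stand-in for NESS samples
        x.requires_grad_(True)
        gradU = torch.autograd.grad(U(x).sum(), x, create_graph=True)[0]
        residual = force(x) + gradU            # F - (-grad U)
        loss = residual.square().sum(dim=1).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

Minimizing this residual fits −∇U to the projectable part of the force; in the weighted formulation of the paper, the minimal loss value is what connects the construction to the entropy production rate.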
Conduction velocity (CV) in cardiac tissue is a crucial electrophysiological parameter for arrhythmia vulnerability. Pathologically reduced CV facilitates arrhythmogenesis because it decreases the wavelength at which re-entry may occur. Computational studies of CV and its regional changes exist for models at spatial scales many times larger than actual cardiac cells. However, microscopic conduction within and between cells has been studied less in simulations. In this work, we study the relation between microscopic conduction patterns and clinically observable macroscopic conduction using an extracellular-membrane-intracellular model, which represents cardiac tissue with these subdomains at subcellular resolution. By considering cell arrangement and non-uniform gap junction distribution, it yields anisotropic excitation propagation. This novel kind of model can, for example, be used to understand how discontinuous conduction at the microscopic level affects the fractionation of electrograms in healthy and fibrotic tissue. Along the membrane of a cell, we observed a continuously propagating activation wavefront. When the excitation transitioned from one cell to its neighbour, jumps in local activation times occurred, which led to global conduction velocities lower than those observed locally within each cell.
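On the macroscopic side, a standard way to extract CV from a local activation time map is through the gradient of activation time, CV = 1/|∇t|; here is a minimal sketch with synthetic data (grid spacing and wave speeds invented for illustration):

    import numpy as np

    # Synthetic activation time map t(x, y) on a 0.1 mm grid: a planar
    # wave with anisotropic speeds along x and y.
    h = 0.1e-3                                  # grid spacing [m]
    ix, iy = np.meshgrid(np.arange(100), np.arange(100), indexing="ij")
    t = ix * h / 0.5 + iy * h / 2.0             # seconds

    # Conduction velocity from the activation time gradient.
    dtdx, dtdy = np.gradient(t, h, h)
    cv = 1.0 / np.hypot(dtdx, dtdy)
    print("mean CV [m/s]:", cv.mean())

In a subcellular model of the kind described above, the same macroscopic estimate can be compared against the discontinuous local activation pattern, making the gap between within-cell and cell-to-cell propagation quantitative.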
For decades, de Casteljau's algorithm has been used as a fundamental building block in curve and surface design and has found a wide range of applications in fields such as scientific computing and discrete geometry, to name but a few. With increasing interest in nonlinear data science, its constructive approach has been shown to provide a principled way to generalize parametric smooth curves to manifolds. These curves have found remarkable new applications in the analysis of parameter-dependent geometric data. This article surveys the recent theoretical developments in this exciting area as well as its applications in fields such as geometric morphometrics and longitudinal data analysis in medicine, archaeology, and meteorology.
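For reference, the Euclidean algorithm is a few lines of repeated linear interpolation; the manifold generalization surveyed here replaces each interpolation step with a geodesic.

    import numpy as np

    def de_casteljau(points, t):
        """Evaluate a Bezier curve at parameter t by repeatedly
        interpolating between consecutive control points."""
        pts = np.asarray(points, dtype=float)
        while len(pts) > 1:
            pts = (1.0 - t) * pts[:-1] + t * pts[1:]
        return pts[0]

    ctrl = [(0, 0), (1, 2), (3, 2), (4, 0)]     # a cubic in the plane
    print(de_casteljau(ctrl, 0.5))

On a Riemannian manifold, the convex combination (1 − t) p + t q is replaced by the point γ(t) on the geodesic from p to q, which yields the Bézier-like manifold curves used in the applications listed above.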
In this work, we study geodesics in certain geometrically and physically motivated subspaces of the space of immersed curves endowed with a first-order Sobolev metric. This includes elastic curves as well as an extension of some results on planar concentric circles to surfaces. The work focuses on intrinsic and constructive approaches.