The 100 most recently published documents
Critical surface adsorption of confined binary liquids with locally conserved mass and composition
(2024)
Close to a solid surface, the properties of a fluid deviate significantly from their bulk values. In this context, we study the surface adsorption profiles of a symmetric binary liquid confined to a slit pore by means of molecular dynamics simulations; the latter naturally entails that mass and concentration are locally conserved. Near a bulk consolute point, where the liquid exhibits a demixing transition with the local concentration as the order parameter, we determine the order parameter profiles and characterise the relevant critical scaling behaviour, in the regime of strong surface attraction, for a range of pore widths and temperatures. The obtained order parameter profiles decay monotonically near the surfaces, even in the presence of pronounced layering in the number density. Overall, our results agree qualitatively with recent theoretical predictions from a mesoscopic field-theoretical approach for the canonical ensemble.
Existence and uniqueness of solutions of the Koopman--von Neumann equation on bounded domains
(2024)
Kissing polytopes
(2024)
We consider nonlinearly constrained optimization problems and discuss a generic double-loop framework consisting of four algorithmic ingredients that unifies a broad range of nonlinear optimization solvers. This framework has been implemented in the open-source solver Uno, a Swiss-army-knife-like C++ optimization framework that unifies many nonlinearly constrained nonconvex optimization solvers. We illustrate the framework with a sequential quadratic programming (SQP) algorithm that maintains an acceptable upper bound on the constraint violation, called a funnel, which is monotonically decreased to control the feasibility of the iterates. Infeasible quadratic subproblems are handled by a feasibility restoration strategy. Globalization is controlled by a line search or a trust-region method. We prove global convergence of the trust-region funnel SQP method, building upon known results from filter methods. We implement the algorithm in Uno, and we provide extensive test results for the trust-region and line-search variants of the funnel SQP method on small CUTEst instances.
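To make the funnel mechanism concrete, here is a minimal, hedged sketch of an acceptance test in Python; the constants and the exact contraction rule are illustrative assumptions, not Uno's actual implementation.

```python
# Minimal sketch of a funnel acceptance test (illustrative, not Uno's code).
# The funnel is an upper bound on the constraint violation that is shrunk
# monotonically whenever an iterate makes progress on infeasibility.

def funnel_is_acceptable(viol_new, viol_old, obj_new, obj_old, funnel_width,
                         kappa=0.5, beta=0.99, delta=1e-4):
    """Return (accepted, updated_funnel_width) for a trial iterate."""
    if viol_new > beta * funnel_width:
        return False, funnel_width          # outside the funnel: reject
    if obj_new <= obj_old - delta * viol_new:
        return True, funnel_width           # objective progress: accept
    if viol_new <= kappa * viol_old:
        # infeasibility progress: accept and contract the funnel
        return True, max(viol_new, kappa * funnel_width)
    return False, funnel_width
```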
MorphoHaptics: An Open-Source Tool for Visuohaptic Exploration of Morphological Image Datasets
(2024)
Although digital methods have significantly advanced morphology, practitioners are still challenged to understand and process tomographic data of specimens. As automated processing of fossil data is still insufficient, morphologists engage in intensive manual work to digitally prepare fossils for research objectives. We present an open-source tool that enables morphologists to explore tomographic data in a manner similar to the physical workflows that traditional fossil preparators experience in the field. Using questionnaires, we assessed the usability of our prototype for virtual fossil preparation and related common tasks in the digital preparation workflow. Our findings indicate that integrating haptics into the virtual preparation workflow enhances the understanding of the morphology and material properties of working specimens, and that visuohaptic sculpting of fossil volumes is straightforward and improves on current digital specimen processing methods.
In particle systems, flocking refers to the phenomenon where particles’ individual velocities eventually align. The Cucker-Smale model is a well-known mathematical framework that describes this behaviour. Many continuous descriptions of the Cucker-Smale model use PDEs with both particle position and velocity as independent variables, thus providing a full description of the particles’ mean-field limit (MFL) dynamics. In this paper, we introduce a novel reduced inertial PDE model consisting of two equations that depend solely on particle position. In contrast to other reduced models, ours is not derived from the MFL, but directly includes the model reduction at the level of the empirical densities, thus allowing for a straightforward connection to the underlying particle dynamics. We present a thorough analytical investigation of our reduced model, showing that: firstly, our reduced PDE satisfies a natural and interpretable continuous definition of flocking; secondly, in specific cases, we can fully quantify the discrepancy between PDE solution and particle system. Our theoretical results are supported by numerical simulations.
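For reference, the classical Cucker-Smale particle system that such PDE descriptions coarse-grain reads, in its standard form,

```latex
\dot{x}_i = v_i, \qquad
\dot{v}_i = \frac{1}{N}\sum_{j=1}^{N}\psi\bigl(\lvert x_j - x_i\rvert\bigr)\,(v_j - v_i),
\qquad
\psi(r) = \frac{K}{(1+r^2)^{\beta}},\quad K>0,\ \beta \ge 0,
```

where flocking means that all velocities converge to a common value while relative positions remain bounded.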
An atmospheric front is an imaginary surface that separates two distinct air masses and is commonly defined as the warm-air side of a frontal zone with high gradients of atmospheric temperature and humidity. These fronts are a widely used conceptual model in meteorology, which are often encountered in the literature as two-dimensional (2D) front lines on surface analysis charts. This paper presents a method for computing three-dimensional (3D) atmospheric fronts as surfaces that is capable of extracting continuous and well-confined features suitable for 3D visual analysis, spatio-temporal tracking, and statistical analyses. Recently developed contour-based methods for 3D front extraction rely on computing the third derivative of a moist potential temperature field. Additionally, they require the field to be smoothed to obtain continuous large-scale structures. This paper demonstrates the feasibility of an alternative method to front extraction using ridge surface computation. The proposed method requires only the second derivative of the input field and produces accurate structures even from unsmoothed data. An application of the ridge-based method to a data set corresponding to Cyclone Friederike demonstrates its benefits and utility towards visual analysis of the full 3D structure of fronts.
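For context, the standard height-ridge definition underlying such methods (consistent with the second-derivative requirement above; the paper's precise criterion may differ in detail) characterizes a point x as lying on a ridge surface of a scalar field f when

```latex
\nabla f(x)\cdot e_{1}(x) = 0
\quad\text{and}\quad
\lambda_{1}(x) < 0,
```

where \(\lambda_1 \le \lambda_2 \le \lambda_3\) are the eigenvalues of the Hessian of f and \(e_1\) is the eigenvector of the most negative eigenvalue; only first and second derivatives of f enter.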
ISOKANN.jl
(2024)
Generative modeling via stochastic processes has led to remarkable empirical results as well as to recent advances in their theoretical understanding. In principle, both space and time of the processes can be discrete or continuous. In this work, we study time-continuous Markov jump processes on discrete state spaces and investigate their correspondence to state-continuous diffusion processes given by SDEs. In particular, we revisit the Ehrenfest process, which converges to an Ornstein-Uhlenbeck process in the infinite state space limit. Likewise, we can show that the time-reversal of the Ehrenfest process converges to the time-reversed Ornstein-Uhlenbeck process. This observation bridges discrete and continuous state spaces and allows methods to be carried over from one setting to the other. Additionally, we suggest an algorithm for training the time-reversal of Markov jump processes which relies on conditional expectations and can thus be directly related to denoising score matching. We demonstrate our methods in multiple convincing numerical experiments.
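As a reminder of the limit being invoked (stated here with unit per-particle flip rates; the paper's time and space scaling may differ), the Ehrenfest process on \(\{0,\dots,N\}\) jumps

```latex
i \to i+1 \ \text{at rate}\ N-i,
\qquad
i \to i-1 \ \text{at rate}\ i,
```

and the rescaled variable \(S^N_t = (2 i_t - N)/\sqrt{N}\) converges, as \(N \to \infty\), to an Ornstein-Uhlenbeck process of the schematic form \(\mathrm{d}S_t = -2 S_t\,\mathrm{d}t + 2\,\mathrm{d}W_t\).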
The task of sampling from a probability density can be approached as transporting a tractable density function to the target, known as dynamical measure transport. In this work, we tackle it through a principled unified framework using deterministic or stochastic evolutions described by partial differential equations (PDEs). This framework incorporates prior trajectory-based sampling methods, such as diffusion models or Schrödinger bridges, without relying on the concept of time-reversals. Moreover, it allows us to propose novel numerical methods for solving the transport task and thus sampling from complicated targets without the need for the normalization constant or data samples. We employ physics-informed neural networks (PINNs) to approximate the respective PDE solutions, implying both conceptual and computational advantages. In particular, PINNs allow for simulation- and discretization-free optimization and can be trained very efficiently, leading to significantly better mode coverage in the sampling task compared to alternative methods. Moreover, they can readily be fine-tuned with Gauss-Newton methods to achieve high accuracy in sampling.
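One representative instance of such a PDE constraint (our illustration; the framework covers deterministic and stochastic evolutions alike) is the continuity equation for a density path interpolating between a tractable prior \(\mu\) and the target \(\pi\):

```latex
\partial_t \rho_t(x) + \nabla \cdot \bigl(\rho_t(x)\, v_t(x)\bigr) = 0,
\qquad \rho_0 = \mu, \quad \rho_1 = \pi,
```

where a PINN parametrizes the unknowns and is trained to minimize the squared PDE residual at space-time collocation points.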
Recent work shows that path gradient estimators for normalizing flows have lower variance compared to standard estimators for variational inference, resulting in improved training. However, they are often prohibitively more expensive from a computational point of view and cannot be applied to maximum likelihood training in a scalable manner, which severely hinders their widespread adoption. In this work, we overcome these crucial limitations. Specifically, we propose a fast path gradient estimator which improves computational efficiency significantly and works for all normalizing flow architectures of practical relevance. We then show that this estimator can also be applied to maximum likelihood training for which it has a regularizing effect as it can take the form of a given target energy function into account. We empirically establish its superior performance and reduced variance for several natural sciences applications.
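For orientation, the path gradient of the reverse KL divergence for a flow \(x = T_\theta(z)\), \(z \sim q_0\), takes the form (a standard identity; the paper's contribution is evaluating it fast):

```latex
\nabla_\theta\, \mathrm{KL}(q_\theta \,\|\, p)
= \mathbb{E}_{z \sim q_0}\!\left[
\left.\nabla_x \bigl(\log q_\theta(x) - \log p(x)\bigr)\right|_{x = T_\theta(z)}
\,\partial_\theta T_\theta(z)
\right],
```

in which the score term \(\mathbb{E}_{x\sim q_\theta}[\nabla_\theta \log q_\theta(x)] = 0\) is dropped; dropping this zero-mean term is the source of the variance reduction.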
The numerical approximation of partial differential equations (PDEs) poses formidable challenges in high dimensions since classical grid-based methods suffer from the so-called curse of dimensionality. Recent attempts rely on a combination of Monte Carlo methods and variational formulations, using neural networks for function approximation. Extending previous work (Richter et al., 2021), we argue that tensor trains provide an appealing framework for parabolic PDEs: The combination of reformulations in terms of backward stochastic differential equations and regression-type methods holds the promise of leveraging latent low-rank structures, enabling both compression and efficient computation. Emphasizing a continuous-time viewpoint, we develop iterative schemes, which differ in terms of computational efficiency and robustness. We demonstrate both theoretically and numerically that our methods can achieve a favorable trade-off between accuracy and computational efficiency. While previous methods have been either accurate or fast, we have identified a novel numerical strategy that can often combine both of these aspects.
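Schematically, the reformulation rests on the nonlinear Feynman-Kac correspondence: a semilinear parabolic PDE

```latex
\partial_t u + \tfrac{1}{2}\operatorname{Tr}\!\bigl(\sigma\sigma^{\top}\nabla^2 u\bigr)
+ b \cdot \nabla u + f\bigl(t, x, u, \sigma^{\top}\nabla u\bigr) = 0,
\qquad u(T, \cdot) = g,
```

is linked to the BSDE \(\mathrm{d}Y_t = -f(t, X_t, Y_t, Z_t)\,\mathrm{d}t + Z_t^{\top}\mathrm{d}W_t\), \(Y_T = g(X_T)\), via \(Y_t = u(t, X_t)\) and \(Z_t = \sigma^{\top}\nabla u(t, X_t)\), with \(\mathrm{d}X_t = b\,\mathrm{d}t + \sigma\,\mathrm{d}W_t\); the regression step then fits \(u\) (here in tensor-train format) along simulated paths of \(X\).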
Dendroid stony corals build highly complex colonies that develop, by asexual reproduction, from a single coral polyp sitting in a cup-like skeleton called a corallite, resulting in a tree-like branching pattern of the skeleton. Despite their beauty and ecological importance as reef builders in tropical shallow-water reefs as well as in cold-water coral mounds in the deep ocean, systematic studies investigating the ontogenetic morphological development of such coral colonies are largely missing. One reason for this is the sheer number of corallites – up to several thousands in a single coral colony. Another limiting factor, especially for the analysis of dendroid cold-water corals, is the existence of many secondary joints in the ideally tree-like structure that make a reconstruction of the skeleton tree extremely tedious.
Herein, we present CoDA, the Coral Dendroid structure Analyzer, a visual analytics suite that, for the first time, allows investigating the ontogenetic morphological development of complex dendroid coral colonies, exemplified on three important framework-forming dendroid cold-water corals: Lophelia pertusa (Linnaeus, 1758), Madrepora oculata (Linnaeus, 1758), and Goniocorella dumosa (Alcock, 1902). Input to CoDA is an initial instance segmentation of the coral polyp cavities (calices), from which it estimates the skeleton tree of the colony and extracts classical morphological measurements and advanced shape features of the individual corallites. CoDA also works as a proofreading and error correction tool, helping to identify wrong parts in the skeleton tree and providing tools to quickly correct these errors. The final skeleton tree enables the derivation of additional information about the calices/corallite instances that otherwise could not be obtained, including their ontogenetic generation and branching patterns – the basis of a fully quantitative statistical analysis of the coral colony morphology. Part of CoDA is CoDA.Graph, a feature-rich link-and-brush user interface for visualizing the extracted features and 2D graph layouts of the skeleton tree, enabling the real-time exploration of complex coral colonies and their building blocks, the individual corallites and branches.
In the future, we expect CoDA to greatly facilitate the analysis of large stony corals of different species and morphotypes, as well as other dendroid structures, enabling new insights into the influence of genetic and environmental factors on their ontogenetic morphological development.
This thesis introduces the novel hybrid algorithm DisCOptER for globally optimal flight planning. DisCOptER (Discrete-Continuous Optimization for Enhanced Resolution) combines discrete and continuous optimization in a two-stage approach to find optimal trajectories up to arbitrary precision in finite time. In the discrete phase, a directed auxiliary graph is created in order to define a set of candidate paths that densely covers the relevant part of the trajectory space. Then, Yen’s algorithm is employed to identify a set of promising candidate paths. These are used as starting points for the subsequent stage in which they are refined with a locally convergent optimal control method. The correctness, accuracy, and complexity of DisCOptER are intricately linked to the choice of the switch-over point, defined by the discretization coarseness. Only a sufficiently dense graph enables the algorithm to find a path within the convex domain surrounding the global minimizer. Initialized with such a path, the second stage rapidly converges to the optimum. Conversely, an excessively dense graph poses the risk of overly costly and redundant computations. The determination of the optimal switch-over point necessitates a profound understanding of the local behavior of the problem, the approximation properties of the graph, and the convergence characteristics of the employed optimal control method. These topics are explored extensively in this thesis. Crucially, the density of the auxiliary graph is solely dependent on the environmental conditions, yet independent of the desired solution accuracy. As a consequence, the algorithm inherits the superior asymptotic convergence properties of the optimal control stage. The practical implications of this computational efficiency are demonstrated in realistic environments, where the DisCOptER algorithm consistently delivers highly accurate globally optimal trajectories with exceptional computational efficiency. This notable improvement upon existing approaches underscores the algorithm’s significance. Beyond its technical prowess, the DisCOptER algorithm stands as a valuable tool contributing to the reduction of costs and the overall enhancement of flight operations efficiency.
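A minimal sketch of the two-stage idea follows (our illustration, not the thesis implementation; refine_cost and path_to_x0 are hypothetical problem-specific helpers that evaluate the trajectory cost and build an initial guess from a discrete path).

```python
# Illustrative two-stage skeleton in the spirit of DisCOptER:
# discrete candidate search, then local continuous refinement.
import itertools
import networkx as nx
from scipy.optimize import minimize

def two_stage_plan(G, source, target, k, refine_cost, path_to_x0):
    # Stage 1: enumerate the k shortest candidate paths (Yen-style).
    candidates = itertools.islice(
        nx.shortest_simple_paths(G, source, target, weight="weight"), k)
    best = None
    for path in candidates:
        # Stage 2: refine each candidate with a local NLP/optimal-control solve,
        # warm-started from the discrete path.
        res = minimize(refine_cost, path_to_x0(path), method="SLSQP")
        if best is None or res.fun < best.fun:
            best = res
    return best
```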
It is necessary to consider nonlinear effects such as inertia force, gravity, viscosity, and surface tension on interface stability to comprehensively understand the instability mechanism of an interface between two fluid layers. Our study thus focuses on viscosity’s impact on the interface stability of viscous fluids by comparing linear stability theory to the modified S-CLSVOF method. Our results show that when the relative velocity (U) between the two fluid layers is below a particular critical value determined by Kelvin-Helmholtz instability (KHI) theory, the waves at the interface do not exhibit divergence, i.e., the shape of the waves remains stable and undistorted. Conversely, when U significantly exceeds the critical value established by KHI theory, the waves become highly distorted and unstable. In addition, U has nonlinear advection effects in the momentum equation and can distort the waves.
Magnetic nano/microrotors are passive elements that spin around an axis due to an external rotating field while remaining confined close to a plane. They have been used to date in different applications related to fluid mixing, drug delivery, or biomedicine. Here we realize an active version of a magnetic microgyroscope which is simultaneously driven by a photo-activated catalytic reaction and a rotating magnetic field. We investigate the uplift dynamics of this colloidal spinner when it stands up and precesses around its long axis while self-propelling due to the light-induced decomposition of hydrogen peroxide in water. By combining experiments with theory, we show that activity emerging from the cooperative action of phoretic and osmotic forces effectively increases the gravitational torque, which counteracts the magnetic and viscous ones, and we carefully measure its contribution.
The transport of individual particles in inhomogeneous environments is complex and exhibits non-Markovian responses. The latter may be quantified by a memory function within the framework of the linear generalised Langevin equation (GLE). Here, we exemplify the implications of steady driving on the memory function of a colloidal model system for Brownian motion in a corrugated potential landscape, specifically, for one-dimensional motion in a sinusoidal potential. To this end, we consider the overdamped limit of the GLE, which is facilitated by separating the memory function into a singular (Markovian) and a regular (non-Markovian) part. Relying on exact solutions for the investigated model, we show that the random force entering the GLE must display a bias far from equilibrium, which corroborates a recent general prediction. Based on data for the mean-square displacement (MSD) obtained from Brownian dynamics simulations, we estimate the memory function for different driving strengths and show that already moderate driving accelerates the decay of the memory function by several orders of magnitude in time. We find that the memory may persist on much longer timescales than expected from the convergence of the MSD to its long-time asymptote. Furthermore, the functional form of the memory function changes from a monotonic decay to a non-monotonic, damped oscillatory behaviour, which can be understood from a competition of confined motion and depinning. Our analysis of the simulation data further reveals a pronounced non-Gaussianity, which questions the Gaussian approximation of the random force entering the GLE.
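Schematically, the GLE referred to above and the splitting of its memory kernel read

```latex
m\,\dot v(t) = -\int_{0}^{t} K(t-s)\, v(s)\,\mathrm{d}s + F + \eta(t),
\qquad
K(t) = 2\gamma_0\,\delta(t) + K_{\mathrm{reg}}(t),
```

so that the overdamped limit retains an instantaneous (Markovian) friction \(\gamma_0 v(t)\) plus the non-Markovian convolution with the regular part \(K_{\mathrm{reg}}\); here \(F\) is the driving force and \(\eta\) the random force, which, as discussed above, acquires a bias far from equilibrium.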
Time-evolving graphs arise frequently when modeling complex dynamical systems such as social networks, traffic flow, and biological processes. Developing techniques to identify and analyze communities in these time-varying graph structures is an important challenge. In this work, we generalize existing spectral clustering algorithms from static to dynamic graphs using canonical correlation analysis (CCA) to capture the temporal evolution of clusters. Based on this extended canonical correlation framework, we define the dynamic graph Laplacian and investigate its spectral properties. We connect these concepts to dynamical systems theory via transfer operators, and illustrate the advantages of our method on benchmark graphs by comparison with existing methods. We show that the dynamic graph Laplacian allows for a clear interpretation of cluster structure evolution over time for directed and undirected graphs.
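The following is a hedged sketch of the flavor of such a CCA-based spectral method (our simplification, not the authors' reference implementation): degree-normalize a coupling matrix between consecutive time slices, take its leading singular vectors, and cluster them.

```python
# Sketch: CCA-style spectral clustering of a time-evolving graph.
import numpy as np
from scipy.sparse.linalg import svds
from sklearn.cluster import KMeans

def dynamic_spectral_clusters(A, k):
    """A[i, j]: coupling between vertex i at time t and vertex j at time t+1."""
    d_row = A.sum(axis=1)                    # out-degrees at time t
    d_col = A.sum(axis=0)                    # in-degrees at time t+1
    # Degree-normalized coupling operator, as in canonical correlation analysis.
    P = A / np.sqrt(np.outer(d_row, d_col))
    U, s, Vt = svds(P, k=k)
    # Left singular vectors encode cluster structure at time t,
    # right singular vectors at time t+1; cluster each embedding.
    labels_t = KMeans(n_clusters=k, n_init=10).fit_predict(U)
    labels_t1 = KMeans(n_clusters=k, n_init=10).fit_predict(Vt.T)
    return labels_t, labels_t1
```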
This work proposes stochastic partial differential equations (SPDEs) as a practical tool to replicate clustering effects of more detailed particle-based dynamics. Inspired by membrane mediated receptor dynamics on cell surfaces, we formulate a stochastic particle-based model for diffusion and pairwise interaction of particles, leading to intriguing clustering phenomena. Employing numerical simulation and cluster detection methods, we explore the approximation of the particle-based clustering dynamics through mean-field approaches. We find that SPDEs successfully reproduce spatiotemporal clustering dynamics, not only in the initial cluster formation period, but also on longer time scales where the successive merging of clusters cannot be tracked by deterministic mean-field models. The computational efficiency of the SPDE approach allows us to generate extensive statistical data for parameter estimation in a simpler model that uses a Markov jump process to capture the temporal evolution of the cluster number.
Simulation-based digital twins must provide accurate, robust and reliable digital representations of their physical counterparts. Quantifying the uncertainty in their predictions plays, therefore, a key role in making better-informed decisions that impact the actual system. The update of the simulation model based on data must then be carefully implemented. When applied to complex standing structures such as bridges, discrepancies between the computational model and the real system appear as model bias, which hinders the trustworthiness of the digital twin and increases its uncertainty. Classical Bayesian updating approaches aiming to infer the model parameters often fail at compensating for such model bias, leading to overconfident and unreliable predictions. In this paper, two alternative model bias identification approaches are evaluated in the context of their applicability to digital twins of bridges. A modularized version of Kennedy and O'Hagan's approach and another one based on Orthogonal Gaussian Processes are compared with the classical Bayesian inference framework in a set of representative benchmarks. Additionally, two novel extensions are proposed for such models: the inclusion of noise-aware kernels and the introduction of additional variables not present in the computational model through the bias term. The integration of such approaches in the digital twin corrects the predictions, quantifies their uncertainty, estimates noise from unknown physical sources of error, and provides further insight into the system by including additional pre-existing information without modifying the computational model.
This paper introduces a novel hybrid mathematical modeling approach that effectively couples Partial Differential Equations (PDEs) with Ordinary Differential Equations (ODEs), exemplified through the simulation of epidemiological processes. The hybrid model aims to integrate the spatially detailed representation of disease dynamics provided by PDEs with the computational efficiency of ODEs. In the presented epidemiological use-case, this integration allows for the rapid assessment of public health interventions and the potential impact of infectious diseases across large populations. We discuss the theoretical formulation of the hybrid PDE-ODE model, including the governing equations and boundary conditions. The model's capabilities are demonstrated through detailed simulations of disease spread in synthetic environments and real-world scenarios, specifically focusing on the regions of Lombardy, Italy, and Berlin, Germany. Results indicate that the hybrid model achieves a balance between computational speed and accuracy, making it a valuable tool for policymakers in real-time decision-making and scenario analysis in epidemiology and potentially in other fields requiring similar modeling approaches.
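One plausible concrete instance of such a coupling (our illustration; the paper's governing equations may differ in detail) combines a reaction-diffusion SIR model on a spatially resolved region with standard SIR ODEs elsewhere:

```latex
\begin{aligned}
&\text{PDE region:} &&\partial_t S = D_S \Delta S - \beta \tfrac{SI}{N}, \quad
\partial_t I = D_I \Delta I + \beta \tfrac{SI}{N} - \gamma I, \quad
\partial_t R = D_R \Delta R + \gamma I,\\
&\text{ODE region:} &&\dot S_k = -\beta_k \tfrac{S_k I_k}{N_k}, \quad
\dot I_k = \beta_k \tfrac{S_k I_k}{N_k} - \gamma_k I_k, \quad
\dot R_k = \gamma_k I_k,
\end{aligned}
```

with the two descriptions coupled through boundary conditions of the PDE and mobility-flux exchange terms in each compartment.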
In recent years, the use of simulation-based digital twins for monitoring and assessment of complex mechanical systems has greatly expanded. Their potential to increase the information obtained from limited data makes them an invaluable tool for a broad range of real-world applications. Nonetheless, there usually exists a discrepancy between the predicted response and the measurements of the system once built. One of the main contributors to this difference, in addition to miscalibrated model parameters, is the model error. Quantifying this so-called model bias (as well as proper values for the model parameters) is critical for the reliable performance of digital twins. Model bias identification is ultimately an inverse problem where information from measurements is used to update the original model. Bayesian formulations can tackle this task. Including the model bias as a parameter to be inferred enables the use of a Bayesian framework to obtain a probability distribution that represents the uncertainty between the measurements and the model. Simultaneously, this procedure can be combined with a classic parameter updating scheme to account for the trainable parameters in the original model.
This study evaluates the effectiveness of different model bias identification approaches based on Bayesian inference methods. This includes more classical approaches such as direct parameter estimation using MCMC in a Bayesian setup, as well as more recent proposals such as stat-FEM or orthogonal Gaussian Processes. Their potential use in digital twins, generalization capabilities, and computational cost are extensively analyzed.
Simulation-based digital twins have emerged as a powerful tool for evaluating the mechanical response of bridges. As virtual representations of physical systems, digital twins can provide a wealth of information that complements traditional inspection and monitoring data. By incorporating virtual sensors and predictive maintenance strategies, they have the potential to improve our understanding of the behavior and performance of bridges over time. However, as bridges age and undergo regular loading and extreme events, their structural characteristics change, often differing from the predictions of their initial design. Digital twins must be continuously adapted to reflect these changes. In this article, we present a Bayesian framework for updating simulation-based digital twins in the context of bridges. Our approach integrates information from measurements to account for inaccuracies in the simulation model and quantify uncertainties. Through its implementation and assessment, this work demonstrates the potential for digital twins to provide a reliable and up-to-date representation of bridge behavior, helping to inform decision-making for maintenance and management.
We introduce a novel adaptive Gaussian Process Regression (GPR) methodology for efficient construction of surrogate models for Bayesian inverse problems with expensive forward model evaluations. An adaptive design strategy focuses on optimizing both the positioning and simulation accuracy of training data in order to reduce the computational cost of simulating training data without compromising the fidelity of the posterior distributions of parameters. The method interleaves a goal-oriented active learning algorithm, selecting evaluation points and tolerances based on the expected impact on the Kullback-Leibler divergence between the surrogate and true posteriors, with a Markov Chain Monte Carlo sampling of the posterior. The performance benefit of the adaptive approach is demonstrated for two simple test problems.
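A schematic of the interleaved loop, under stated assumptions: mcmc_sample and score are hypothetical placeholders standing in for the posterior sampler and the goal-oriented acquisition criterion described above.

```python
# Sketch of the adaptive surrogate loop (simplified; not the paper's code).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def adaptive_surrogate(forward_model, candidates, mcmc_sample, score, n_rounds):
    X, y = [], []
    gp = None
    for _ in range(n_rounds):
        posterior_samples = mcmc_sample(gp)   # MCMC on current surrogate posterior
        # Goal-oriented acquisition: pick the candidate whose evaluation is
        # expected to reduce the surrogate-vs-true posterior KL the most.
        x_new = max(candidates, key=lambda x: score(gp, x, posterior_samples))
        X.append(x_new)
        y.append(forward_model(x_new))        # expensive simulation
        gp = GaussianProcessRegressor().fit(np.array(X), np.array(y))
    return gp
```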
This paper describes the computational challenge developed for a computational competition held in 2023 for the 20th anniversary of the Mixed Integer Programming Workshop. The topic of this competition was reoptimization, also known as warm starting, of mixed integer linear optimization problems after slight changes to the input data for a common formulation. The challenge was to accelerate the proof of optimality of the modified instances by leveraging the information from the solving processes of previously solved instances, all while creating high-quality primal solutions. Specifically, we discuss the competition’s format, the creation of public and hidden datasets, and the evaluation criteria. Our goal is to establish a methodology for the generation of benchmark instances and an evaluation framework, along with benchmark datasets, to foster future research on reoptimization of mixed integer linear optimization problems.
In this paper we study formulations and algorithms for the cycle clustering problem, a partitioning problem over the vertex set of a directed graph with nonnegative arc weights that is used to identify cyclic behavior in simulation data generated from nonreversible Markov state models. Here, in addition to partitioning the vertices into a set of coherent clusters, the resulting clusters must be ordered into a cycle so as to maximize the total net flow in the forward direction of the cycle. We provide a problem-specific binary programming formulation and compare it to a formulation based on the reformulation-linearization technique (RLT). We present theoretical results on the polytope associated with our custom formulation and develop primal heuristics and separation routines for both formulations. In computational experiments on simulation data from biology we find that branch and cut based on the problem-specific formulation outperforms the one based on RLT.
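In essence, the problem asks for a partition into clusters \(C_1, \dots, C_k\) arranged in a cycle so as to maximize the total net flow

```latex
\max_{C_1,\dots,C_k}\ \sum_{i=1}^{k} \Bigl( w(C_i, C_{i+1}) - w(C_{i+1}, C_i) \Bigr),
\qquad
w(A, B) = \sum_{a \in A}\sum_{b \in B} w_{ab},
```

with indices taken modulo \(k\) and \(w_{ab}\) the nonnegative arc weights.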
We propose a framework for global-scale canopy height estimation based on satellite data. Our model leverages advanced data preprocessing techniques, uses a novel loss function designed to counter geolocation inaccuracies inherent in the ground-truth height measurements, and employs data from the Shuttle Radar Topography Mission to effectively filter out erroneous labels in mountainous regions, enhancing the reliability of our predictions in those areas. A comparison between predictions and ground-truth labels yields an MAE/RMSE of 2.43 / 4.73 (meters) overall and 4.45 / 6.72 (meters) for trees taller than five meters, which represents a substantial improvement compared to existing global-scale products. The resulting height map as well as the underlying framework will facilitate and enhance ecological analyses at a global scale, including, but not limited to, large-scale forest and biomass monitoring.
We tackle the Optimal Experiment Design Problem, which consists of choosing experiments to run or observations to select from a finite set to estimate the parameters of a system. The objective is to maximize some measure of information gained about the system from the observations, leading to a convex integer optimization problem. We leverage Boscia.jl, a recent algorithmic framework, which is based on a nonlinear branch-and-bound algorithm with node relaxations solved to approximate optimality using Frank-Wolfe algorithms. One particular advantage of the method is its efficient utilization of the polytope formed by the original constraints which is preserved by the method, unlike alternative methods relying on epigraph-based formulations. We assess our method against both generic and specialized convex mixed-integer approaches. Computational results highlight the performance of our proposed method, especially on large and challenging instances.
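As one standard example of the resulting convex integer problem (D-optimal design, stated here for orientation), choosing repetition counts \(x_i\) of experiments with regressors \(a_i \in \mathbb{R}^d\) leads to

```latex
\max_{x \in \mathbb{Z}^n_{\ge 0}}\ \log\det\Bigl(\sum_{i=1}^{n} x_i\, a_i a_i^{\top}\Bigr)
\quad\text{s.t.}\quad \sum_{i=1}^{n} x_i \le N, \ \ x_i \le u_i,
```

a concave maximization over the integer points of a polytope, which is exactly the structure that the Frank-Wolfe-based branch-and-bound in Boscia.jl exploits.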
In this work, we analyze two of the most fundamental algorithms in geodesically convex optimization: Riemannian gradient descent and (possibly inexact) Riemannian proximal point. We quantify their rates of convergence and produce different variants with several trade-offs. Crucially, we show the iterates naturally stay in a ball around an optimizer, of radius depending on the initial distance and, in some cases, on the curvature. Previous works simply assumed bounded iterates, resulting in rates that were not fully quantified. We also provide an implementable inexact proximal point algorithm and prove several new useful properties of Riemannian proximal methods: they work when positive curvature is present, the proximal operator does not move points away from any optimizer, and we quantify the smoothness of its induced Moreau envelope. Further, we explore beyond our theory with empirical tests.
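For reference, the two schemes analyzed are, in their basic forms,

```latex
x_{k+1} = \exp_{x_k}\!\bigl(-\eta_k\,\operatorname{grad} f(x_k)\bigr)
\qquad\text{and}\qquad
x_{k+1} = \operatorname*{arg\,min}_{y}\ f(y) + \frac{1}{2\lambda}\, d(x_k, y)^2,
```

where \(\exp\) is the Riemannian exponential map, \(\operatorname{grad} f\) the Riemannian gradient, and \(d\) the geodesic distance; the Moreau envelope mentioned above is the optimal value of the proximal subproblem viewed as a function of \(x_k\).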
5q-spinal muscular atrophy (SMA) is a neuromuscular disorder (NMD) that has become one of the first 5% treatable rare diseases. The efficacy of new SMA therapies is creating a dynamic SMA patient landscape, where disease progression and scoliosis development play a central role but remain difficult to anticipate. New approaches to anticipate disease progression and associated sequelae will be needed to continuously provide these patients the best standard of care. Here we developed an interpretable machine learning (ML) model that can function as an assistive tool in the anticipation of SMA-associated scoliosis based on disease progression markers. We collected longitudinal data from 86 genetically confirmed SMA patients. We selected six features routinely assessed over time to train a random forest classifier. The model achieved a mean accuracy of 0.77 (SD 0.2) and an average ROC AUC of 0.85 (SD 0.17). For class 1 ‘scoliosis’, the average precision was 0.84 (SD 0.11), recall 0.89 (SD 0.22), and F1-score 0.85 (SD 0.17). Our trained model could predict scoliosis using selected disease progression markers and was consistent with the radiological measurements. During post-validation, the model could predict scoliosis in patients who were unseen during training. We also demonstrate that rare disease data sets can be wrangled to build predictive ML models. Interpretable ML models can function as assistive tools in a changing disease landscape and have the potential to democratize expertise that is otherwise clustered at specialized centers.
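A minimal sketch of the modeling setup described above (illustrative only; the file names are hypothetical placeholders and the study's six disease-progression markers are not reproduced here):

```python
# Random forest with cross-validated metrics, mirroring the reported setup.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_validate

X = np.load("progression_markers.npy")   # shape (n_patients, 6) -- hypothetical
y = np.load("scoliosis_labels.npy")      # 1 = scoliosis, 0 = none -- hypothetical

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_validate(clf, X, y, cv=5,
                        scoring=["accuracy", "roc_auc", "precision", "recall", "f1"])
print({k: (v.mean(), v.std()) for k, v in scores.items() if k.startswith("test_")})
```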
A major restriction to applying deep learning methods in cryo-electron tomography is the lack of annotated data. Many large learning-based models cannot be applied to these images due to the lack of adequate experimental ground truth. One appealing alternative solution to the time-consuming and expensive experimental data acquisition and annotation is the generation of simulated cryo-ET images. In this context, we exploit a public cryo-ET simulator called PolNet to generate three datasets of two macromolecular structures, namely the ribosomal complex 4v4r and the Thermoplasma acidophilum 20S proteasome 3j9i. We select these two specific particles to test whether our models work for macromolecular structures with and without rotational symmetry. The three datasets contain 50, 150, and 450 tomograms with a voxel size of 10 Å, respectively. Here, we publish patches of size 40 × 40 × 40 extracted from the medium-sized dataset, with 26,703 samples of 4v4r and 40,671 samples of 3j9i. The original tomograms from which the samples were extracted are of size 500 × 500 × 250. Finally, it should be noted that the currently published test dataset is employed for reporting the results of our paper titled “DeepOrientation: Deep Orientation Estimation of Macromolecules in Cryo-electron tomography”.
Although Virtual Reality (VR) has undoubtedly improved human interaction with 3D data, users still face difficulties retaining important details of complex digital objects in preparation for physical tasks. To address this issue, we evaluated the potential of visuohaptic integration to improve the memorability of virtual objects in immersive visualizations. In a user study (N=20), participants performed a delayed match-to-sample task where they memorized stimuli of visual, haptic, or visuohaptic encoding conditions. We assessed performance differences between the conditions through error rates and response time. We found that visuohaptic encoding significantly improved memorization accuracy compared to unimodal visual and haptic conditions. Our analysis indicates that integrating haptics into immersive visualizations enhances the memorability of digital objects. We discuss its implications for the optimal encoding design in VR applications that assist professionals who need to memorize and recall virtual objects in their daily work.
Gesture recognition is a tool to enable novel interactions with different techniques and applications, like Mixed Reality and Virtual Reality environments. With all the recent advancements in gesture recognition from skeletal data, it is still unclear how well state-of-the-art techniques perform in a scenario using precise motions with two hands. This paper presents the results of the SHREC 2024 contest organized to evaluate methods for their recognition of highly similar hand motions using the skeletal spatial coordinate data of both hands. The task is the recognition of 7 motion classes given their spatial coordinates in a frame-by-frame motion. The skeletal data has been captured using a Vicon system and pre-processed into a coordinate system using Blender and Vicon Shogun Post. We created a small, novel dataset with a high variety of durations in frames. This paper shows the results of the contest, showing the techniques created by the 5 research groups on this challenging task and comparing them to our baseline method.
Time-varying Extremum Graphs
(2024)
We introduce the time-varying extremum graph (TVEG), a topological structure to support visualization and analysis of a time-varying scalar field. The extremum graph is a substructure of the Morse-Smale complex. It captures the adjacency relationship between cells in the Morse decomposition of a scalar field. We define the TVEG as a time-varying extension of the extremum graph and demonstrate how it captures salient feature tracks within a dynamic scalar field. We formulate the construction of the TVEG as an optimization problem and describe an algorithm for computing the graph. We also demonstrate the capabilities of the TVEG towards identification and exploration of topological events such as deletion, generation, split, and merge within a dynamic scalar field via comprehensive case studies, including a viscous fingers dataset and a 3D von Kármán vortex street dataset.
The rise of digital social media has strengthened the coevolution of public opinions and social interactions, which shape social structures and collective outcomes in increasingly complex ways. Existing literature often explores this interplay as a one-directional influence, focusing on how opinions determine social ties within adaptive networks. However, this perspective overlooks the intrinsic dynamics driving social interactions, which can significantly influence how opinions form and evolve. In this work, we address this gap by introducing co-evolving opinion and social dynamics using stochastic agent-based models. Agents' mobility in a social space is governed by both their social and opinion similarity with others. Similarly, the dynamics of opinion formation is driven by the opinions of agents in their social vicinity. We analyze the underlying social and opinion interaction networks and explore the mechanisms influencing the appearance of emerging phenomena, such as echo chambers and opinion consensus. To illustrate the model's potential for real-world analysis, we apply it to General Social Survey data on political identity and public opinion regarding governmental issues. Our findings highlight the model's strength in capturing the coevolution of social connections and individual opinions over time.
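A toy sketch of such co-evolving dynamics (our illustration under simplifying assumptions, not the paper's model specification):

```python
# One Euler-Maruyama step of co-evolving social positions and opinions.
import numpy as np

def step(pos, opi, dt=0.01, noise=0.05, rng=None):
    """pos: (n, 2) social coordinates; opi: (n,) opinions."""
    rng = rng or np.random.default_rng()
    d_soc = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    # Interaction weights decay with both social and opinion distance.
    w = np.exp(-d_soc - np.abs(opi[:, None] - opi[None, :]))
    np.fill_diagonal(w, 0.0)
    w /= w.sum(axis=1, keepdims=True)
    # Mobility: drift toward socially and opinion-wise similar agents, plus noise.
    pos = pos + dt * (w @ pos - pos) \
              + noise * np.sqrt(dt) * rng.standard_normal(pos.shape)
    # Opinion formation: relax toward the weighted mean opinion in the vicinity.
    opi = opi + dt * (w @ opi - opi)
    return pos, opi
```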
Generating simulated training data needed for constructing sufficiently accurate surrogate models to be used for efficient optimization or parameter identification can incur a huge computational effort in the offline phase. We consider a fully adaptive greedy approach to the computational design of experiments problem using gradient-enhanced Gaussian process regression as surrogates. Designs are incrementally defined by solving an optimization problem for accuracy given a certain computational budget. We address not only the choice of evaluation points but also of required simulation accuracy, both of values and gradients of the forward model.
Numerical results show a significant reduction of the computational effort compared to just position-adaptive and static designs as well as a clear benefit of including gradient information into the surrogate training.
Respiratory viral infections (RVIs) are common reasons for healthcare consultations. The inpatient management of RVIs consumes significant resources. From 2009 to 2014, we assessed the costs of RVI management in 4776 hospitalized children aged 0–18 years participating in a quality improvement program, where all ILI patients underwent virologic testing at the National Reference Centre followed by detailed recording of their clinical course. The direct (medical or non-medical) and indirect costs of inpatient management outside the ICU (‘non-ICU’) versus management requiring ICU care (‘ICU’) added up to EUR 2767.14 (non-ICU) vs. EUR 29,941.71 (ICU) for influenza, EUR 2713.14 (non-ICU) vs. EUR 16,951.06 (ICU) for RSV infections, and EUR 2767.33 (non-ICU) vs. EUR 14,394.02 (ICU) for human rhinovirus (hRV) infections, respectively. Non-ICU inpatient costs were similar for all eight RVIs studied: influenza, RSV, hRV, adenovirus (hAdV), metapneumovirus (hMPV), parainfluenza virus (hPIV), bocavirus (hBoV), and seasonal coronavirus (hCoV) infections. ICU costs for influenza, however, exceeded all other RVIs. At the time of the study, influenza was the only RVI with antiviral treatment options available for children, but only 9.8% of influenza patients (non-ICU) and 1.5% of ICU patients with influenza received antivirals; only 2.9% were vaccinated. Future studies should investigate the economic impact of treatment and prevention of influenza, COVID-19, and RSV post vaccine introduction.
Derivative-based iterative methods for nonlinearly constrained non-convex optimization usually share common algorithmic components, such as strategies for computing a descent direction and mechanisms that promote global convergence. Based on this observation, we introduce an abstract framework based on four common ingredients that describes most derivative-based iterative methods and unifies their workflows. We then present Uno, a modular C++ solver that implements our abstract framework and allows the automatic generation of various strategy combinations with no programming effort from the user. Uno is meant to (1) organize mathematical optimization strategies into a coherent hierarchy; (2) offer a wide range of efficient and robust methods that can be compared for a given instance; (3) enable researchers to experiment with novel optimization strategies; and (4) reduce the cost of development and maintenance of multiple optimization solvers. Uno's software design allows users to compose new customized solvers for emerging optimization areas such as robust optimization or optimization problems with complementarity constraints, while building on reliable nonlinear optimization techniques. We demonstrate that Uno is highly competitive against state-of-the-art solvers filterSQP, IPOPT, SNOPT, MINOS, LANCELOT, LOQO, and CONOPT on a subset of 429 small problems from the CUTEst collection. Uno is available as open-source software under the MIT license at https://github.com/cvanaret/Uno.
Modelling luminescent coupling in multi-junction solar cells: perovskite silicon tandem case study
(2024)
Machine-learning driven design of metasurfaces: learn the physics and not the objective function
(2024)
Analytical approximations of the macroscopic behavior of agent-based models (e.g. via mean-field theory) often introduce a significant error, especially in the transient phase. For an example model called the continuous-time noisy voter model, we use two data-driven approaches to learn the evolution of collective variables instead. The first approach utilizes the SINDy method to approximate the macroscopic dynamics without prior knowledge, but has proven not particularly robust. The second approach employs an informed learning strategy which includes knowledge about the agent-based model. Both approaches exhibit a considerably smaller error than the conventional analytical approximation.
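A hedged sketch of the first approach using the pysindy package (the file name and hyperparameters are placeholders, not the study's actual configuration):

```python
# Fit a sparse ODE to the trajectory of a collective variable, e.g. the
# share of agents in one opinion state of the noisy voter model.
import numpy as np
import pysindy as ps

c = np.load("voter_collective_variable.npy")   # shape (n_steps, 1) -- hypothetical
dt = 0.01                                      # sampling interval of the trajectory

model = ps.SINDy(feature_library=ps.PolynomialLibrary(degree=3),
                 optimizer=ps.STLSQ(threshold=0.05))
model.fit(c, t=dt)
model.print()   # sparse ODE approximating the macroscopic dynamics
```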
How many mutually non-attacking queens can be placed on a d-dimensional chessboard of size n? The n-queens problem in higher dimensions is a generalization of the well-known n-queens problem. We provide a comprehensive overview of theoretical results, bounds, solution methods, and the interconnectivity of the problem within topics of discrete optimization and combinatorics. We present an integer programming formulation of the n-queens problem in higher dimensions and several strengthenings through additional valid inequalities. Compared to recent benchmarks, we achieve a speedup in computational time between 15x and 70x over all instances of the integer programs. Our computational results provide certificates of optimality for several large instances. Breaking additional, previously unsolved instances with the proposed methods is likely possible. On the primal side, we further discuss heuristic approaches to constructing solutions that turn out to be optimal when compared to the IP. We conclude with preliminary results on the number and density of the solutions.
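The basic integer program (before the strengthening inequalities discussed above) places a binary variable on each board cell and bounds every attack line:

```latex
\max\ \sum_{v \in B} x_v
\quad\text{s.t.}\quad
\sum_{v \in L} x_v \le 1 \ \ \text{for every line } L \in \mathcal{L},
\qquad x_v \in \{0, 1\},
```

where \(B\) is the set of the \(n^d\) cells and \(\mathcal{L}\) collects all maximal queen lines (axis-parallel lines and diagonals of every dimensionality).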
Any sports competition needs a timetable, specifying when and where teams meet each other. The recent International Timetabling Competition (ITC2021) on sports timetabling showed that, although it is possible to develop general algorithms, the performance of each algorithm varies considerably over the problem instances. This paper provides a problem type analysis for sports timetabling, resulting in powerful insights into the strengths and weaknesses of eight state-of-the-art algorithms. Based on machine learning techniques, we propose an algorithm selection system that predicts which algorithm is likely to perform best based on the type of competition and constraints being used (i.e., the problem type) in a given sports timetabling problem instance. Furthermore, we visualize how the problem type relates to algorithm performance, providing insights and possibilities to further enhance several algorithms. Finally, we assess the empirical hardness of the instances. Our results are based on large computational experiments involving about 50 years of CPU time on more than 500 newly generated problem instances.
The Jacobi set of a bivariate scalar field is the set of points where the gradients of the two constituent scalar fields align with each other. It captures the regions of topological changes in the bivariate field. The Jacobi set is a bivariate analog of critical points, and may correspond to features of interest. In the specific case of time-varying fields and when one of the scalar fields is time, the Jacobi set corresponds to temporal tracks of critical points, and serves as a feature-tracking graph. The Jacobi set of a bivariate field or a time-varying scalar field is complex, resulting in cluttered visualizations that are difficult to analyze. This paper addresses the problem of Jacobi set simplification. Specifically, we use the time-varying scalar field scenario to introduce a method that computes a reduced Jacobi set. The method is based on a stability measure called robustness that was originally developed for vector fields and helps capture the structural stability of critical points. We also present a mathematical analysis for the method, and describe an implementation for 2D time-varying scalar fields. Applications to both synthetic and real-world datasets demonstrate the effectiveness of the method for tracking features.
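Formally, for a bivariate field \((f, g)\) the Jacobi set is

```latex
\mathbb{J}(f, g) = \bigl\{\, x \ :\ \nabla f(x) \text{ and } \nabla g(x) \text{ are linearly dependent} \,\bigr\},
```

and for a time-varying field with \(g(x, t) = t\) it reduces to the set of points where \(\nabla_x f(x, t) = 0\), i.e., the temporal tracks of critical points.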
Computing an optimal cycle in a given homology class, also referred to as the homology localization problem, is known to be an NP-hard problem in general. Furthermore, there is currently no known optimality criterion that localizes classes geometrically and admits a stability property under the setting of persistent homology. We present a geometric optimization of the cycles that is computable in polynomial time and is stable in an approximate sense. Tailoring our search criterion to different settings, we obtain various optimization problems like optimal homologous cycle, minimum homology basis, and minimum persistent homology basis. In practice, the (trivial) exact algorithm is computationally expensive despite having a worst-case polynomial runtime. Therefore, we design approximation algorithms for the above problems and study their performance experimentally. These algorithms have reasonable runtimes for moderate-sized datasets, and the cycles computed by these algorithms are consistently of high quality, as demonstrated via experiments on multiple datasets.
The Bay of Bengal (BoB) has maintained its salinity distribution over the years despite a continuous flow of fresh water entering it through rivers on the northern coast, which is capable of diluting the salinity. This can be attributed to the cyclic flow of high salinity water (≥ 35 psu) coming from the Arabian Sea and entering the BoB from the south, which moves northward and mixes with this fresh water. The movement of this high salinity water has been studied and analyzed in previous work (Singh et al., 2022). This paper extends the computational methods and analysis of salinity movement. Specifically, we introduce an advection-based feature definition that represents the movement of high salinity water, and describe algorithms to track their evolution over time. This method allows us to trace the movement of high salinity water caused by ocean currents. The method is validated via comparison with established observations on the flow of high salinity water in the BoB, including its entry from the Arabian Sea and its movement near Sri Lanka. Further, the visual analysis and tracking framework enables us to compare with previous work and analyze the contribution of advection to salinity transport.
Studying neural mechanisms in complementary model organisms from different ecological niches in the same animal class can leverage comparative brain analysis at the cellular level. To advance such a direction, we developed a unified brain atlas platform and specialized tools that allowed us to quantitatively compare neural structures in two teleost larvae, medaka (Oryzias latipes) and zebrafish (Danio rerio). Leveraging this quantitative approach, we found that most brain regions are similar but some subpopulations are unique in each species. Specifically, we confirmed the existence of a clear dorsal pallial region in the telencephalon of medaka that is lacking in zebrafish. Further, our approach allows for extraction of differentially expressed genes in both species, and for quantitative comparison of neural activity at cellular resolution. The web-based and interactive nature of this atlas platform will facilitate the teleost community’s research, and its easy extensibility will encourage contributions to its continuous expansion.
For industries like the cement industry, switching to a carbon-neutral production process is impossible. They must rely on carbon capture, utilization and storage (CCUS) technologies to reduce their production processes’ inevitable carbon dioxide (CO2) emissions. For continuously transporting large amounts of CO2, utilizing a pipeline network is the most effective solution; however, building such a network is expensive. Therefore, minimizing the cost of the pipelines to be built is extremely important to make the operation financially feasible. In this context, we investigate the problem of finding optimal pipeline diameters from a discrete set of diameters for a tree-shaped network transporting captured CO2 from multiple sources to a single sink. The general problem of optimizing arc capacities in potential-based fluid networks is already a challenging mixed-integer nonlinear optimization problem. The problem becomes even more complex when adding the highly sensitive nonlinear behavior of CO2 regarding temperature and pressure changes. We propose an iterative algorithm splitting the problem into two parts: a) the pipe-sizing problem under a fixed supply scenario and temperature distribution and b) the thermophysical modeling, including mixing effects, the Joule-Thomson effect, and heat exchange with the surrounding environment. We demonstrate the effectiveness of our approach by applying our algorithm to a real-world network planning problem for a CO2 network in Western Germany. Further, we show the robustness of the algorithm by solving a large artificially created set of network instances.
The landscape of applications and subroutines relying on shortest path computations continues to grow steadily. This growth is driven by the undeniable success of shortest path algorithms in theory and practice. It also introduces new challenges as the models become more complicated and assessing the optimality of paths becomes more involved. Hence, multiple recent publications in the field adapt existing labeling methods in an ad hoc fashion to their specific problem variant without considering the underlying general structure: they always deal with multi-criteria scenarios, and those criteria define different partial orders on the paths. In this paper, we introduce the partial order shortest path problem (POSP), a generalization of the multi-objective shortest path problem (MOSP) and in turn also of the classical shortest path problem. POSP captures the particular structure of many shortest path applications as special cases. In this generality, we study optimality conditions or the lack of them, depending on the objective functions’ properties. Our final contribution is a big lookup table summarizing our findings and providing the reader with an easy way to choose among the most recent multi-criteria shortest path algorithms depending on their problems’ weight structure. Examples range from time-dependent shortest path and bottleneck path problems to the electric vehicle shortest path problem with recharging and complex financial weight functions studied in the public transportation community. Our results hold for general digraphs and, therefore, surpass previous generalizations that were limited to acyclic graphs.
The investigation of energy transition paths toward a sustainable and decarbonized future under uncertainty is a critical aspect of contemporary energy planning and policy development. There are numerous methods for analysing uncertainties and sensitivities and many studies on sustainable transformation paths, but there is a lack of combined application to relevant use-cases.
In this study, we investigate the sensitivity of energy transition paths to uncertainties in operational and investment costs of power plants in the metropolitan area of Berlin and its rural surroundings.
By employing the linear programming energy system model oemof-B3, we focus on the system's energy technologies, such as wind turbines, photovoltaics, hydro and combustion plants, and energy storage. Greenhouse gas reduction and electrification rates per commodity are realized by selected constraints.
Our research aims to discern how investments in energy production capacities are influenced by uncertainties of other energy technologies' investment and operational costs in the system. We apply a quantitative approach to investigate such interdependencies of cost variations and their impact on long-term energy planning. Thus, the analysis sheds light on the robustness of energy transition paths in the face of these uncertainties.
The region Berlin-Brandenburg serves as a case study and thus reflects on the present space conflicts to meet energy demands in urban and suburban areas and their rural surroundings. An electricity-intensive scenario is selected that assumes a 100 % reduction in greenhouse gas emissions by 2050. With the results of the case study, we show how our approach enables rural and metropolitan decision-makers to collaborate in achieving sustainable energy.
Decision-making in long-term energy planning can be made more robust and flexible by acknowledging the identified sensitivities, enabling such regions to better navigate the challenges and uncertainties associated with sustainable energy planning.
Compressible flows are prevalent in natural and technological processes, particularly in the energy transition to renewable energy systems. Consequently, extensive research has focused on understanding the stability of tangential-velocity discontinuities in compressible media. Despite recent advancements that address industrial challenges more realistically, many studies have ignored the impact of viscous stress tensors, leading to inaccuracies in predicting interface stability. This omission becomes critical, especially in high Reynolds or low Mach number flows, where viscous forces dissipate kinetic energy across interfaces, affect total energy dissipation, and dampen flow instabilities. Our work is thus motivated to analyze the effect of viscous forces by including the viscous stress tensor terms in the equations of motion. Our results show that, when the effect of viscous forces is considered, the tangential-velocity discontinuity interface is destabilized over the entire range of the Mach number.
We study a complex planning and scheduling problem arising from the build-up process of air cargo pallets and containers, collectively referred to as unit load devices (ULD), in which ULDs must be assigned to workstations for loading. Since air freight usually becomes available gradually along the planning horizon, ULD build-ups must be scheduled neither too early, to avoid underutilizing ULD capacity, nor too late, to avoid resource conflicts with other flights. Whenever possible, ULDs should be built up in batches, thereby giving ground handlers more freedom to rearrange cargo and utilize the ULD's capacity efficiently. The resulting scheduling problem has an intricate cost function and produces large time-expanded models, especially for longer planning horizons. We propose a logic-based Benders decomposition approach that assigns batches to time intervals and workstations in the master problem, while the actual schedule is decided in a subproblem. By choosing appropriate intervals, the subproblem becomes a feasibility problem that decomposes over the workstations. Additionally, the similarity of many batches is exploited by a strengthening procedure for no-good cuts. We benchmark our approach against a time-expanded MIP formulation from the literature on a publicly available data set. It solves 15% more instances to optimality and decreases run times by more than 50% in the geometric mean. This improvement is especially pronounced for longer planning horizons of up to one week, where the Benders approach solves over 50% more instances than the baseline.
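A schematic of the decomposition loop (our sketch with hypothetical interfaces, not the paper's implementation):

```python
# Logic-based Benders skeleton: the master assigns batches to
# (interval, workstation) pairs; per-workstation feasibility subproblems
# either accept the assignment or return a no-good cut.
def lbbd_solve(master, workstations, schedule_feasible, no_good_cut):
    while True:
        assignment = master.solve()            # batches -> (interval, workstation)
        infeasible = [w for w in workstations
                      if not schedule_feasible(assignment, w)]
        if not infeasible:
            return assignment                  # feasible schedule exists: optimal
        for w in infeasible:
            master.add_cut(no_good_cut(assignment, w))   # exclude this assignment
```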
The imperative to decarbonize energy systems has intensified the need for efficient transformations within the heating sector, with a particular focus on district heating networks. This study addresses this challenge by proposing a comprehensive optimization approach evaluated on the district heating network of the Märkisches Viertel of Berlin. Our objective is to simultaneously optimize heat production with three targets: minimizing costs, minimizing CO2 emissions, and maximizing heat generation from Combined Heat and Power (CHP) plants for enhanced efficiency.
To tackle this optimization problem, we employ a Mixed-Integer Linear Program (MILP) that encompasses the conversion of various fuels into heat and power, integration with relevant markets, and technical constraints on power plant operation. These constraints include startup and minimum downtime, activation costs, and storage limits. The ultimate goal is to delineate the Pareto front, representing the optimal trade-offs between the three targets. We evaluate variants of the ε-constraint algorithm for their effectiveness in coordinating these objectives, with a simultaneous focus on the quality of the estimated Pareto front and computational efficiency. One algorithm explores solutions on an evenly spaced grid in the objective space, while another dynamically adjusts the grid based on identified solutions. Initial findings highlight the strengths and limitations of each algorithm, providing guidance on algorithm selection depending on desired outcomes and computational constraints.
Our study emphasizes that the optimal choice of algorithm hinges on the density and distribution of solutions in the feasible space. Whether solutions are clustered or evenly distributed significantly influences algorithm performance. These insights contribute to a nuanced understanding of algorithm selection for multi-objective multi-energy system optimization, offering valuable guidance for future research and practical applications for planning sustainable district heating networks.
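The evenly spaced grid variant can be illustrated on a bi-objective toy problem; the study's model is a three-objective MILP with unit-commitment details, whereas the data and scale below are invented.

```python
# Minimal ε-constraint sketch for a bi-objective toy LP (hypothetical data).
import numpy as np
from scipy.optimize import linprog

c_cost = np.array([3.0, 1.0])   # objective 1: production cost of two heat sources
c_emis = np.array([1.0, 4.0])   # objective 2: CO2 emissions of the same sources
A_eq, b_eq = np.array([[1.0, 1.0]]), np.array([10.0])  # meet a fixed heat demand

pareto = []
for eps in np.linspace(10.0, 40.0, 7):   # evenly spaced grid over emissions
    # minimize cost subject to emissions <= eps (the ε-constraint)
    res = linprog(c=c_cost,
                  A_ub=c_emis.reshape(1, -1), b_ub=[eps],
                  A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 2)
    if res.success:
        pareto.append((res.x @ c_cost, res.x @ c_emis))

for cost, emis in pareto:
    print(f"cost {cost:5.1f}  emissions {emis:5.1f}")
```

The adaptive variant would choose the next ε based on the solutions found so far instead of sweeping a fixed grid, concentrating effort where the Pareto front is dense.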
Modeling-Simulation-Optimization workflows play a fundamental role in applied mathematics. The Mathematical Research Data Initiative, MaRDI, responded to this by developing a FAIR and machine-interpretable template for a comprehensive documentation of such workflows. MaRDMO, a plugin for the Research Data Management Organiser, enables scientists from diverse fields to document and publish their workflows seamlessly on the MaRDI Portal using the MaRDI template. Central to these workflows are mathematical models. MaRDI addresses them with the MathModDB ontology, offering a structured formal model description. Here, we showcase the interaction between MaRDMO and the MathModDB Knowledge Graph through an algebraic modeling workflow from the Digital Humanities. This demonstration underscores the versatility of both services beyond their original numerical domain.
At present, data management plans (DMPs) are still often perceived as mere documents for funding agencies, providing clarity on how research data will be handled during a funded project, but they are usually not actively used in those processes. However, they contain a great deal of information that can be shared automatically to facilitate active research data management (RDM) by providing metadata to research infrastructures and supporting communication between all involved stakeholders. This position paper brings together a number of ideas developed and collected during interdisciplinary workshops of the Data Management Planning Working Group (infra-dmp), which is part of the section Common Infrastructures of the National Research Data Infrastructure (NFDI) in Germany. We present our vision of a possible future role of DMPs, templates, and tools in the upcoming NFDI service architecture.
In applied mathematics and related disciplines, the modeling-simulation-optimization workflow is a prominent scheme, with mathematical models and numerical algorithms playing a crucial role. For these types of mathematical research data, the Mathematical Research Data Initiative has developed, merged and implemented ontologies and knowledge graphs. This contributes to making mathematical research data FAIR by introducing semantic technology and documenting the mathematical foundations accordingly. Using the concrete example of microfracture analysis of porous media, it is shown how the knowledge of the underlying mathematical model and the corresponding numerical algorithms for its solution can be represented by the ontologies.
Ontologies and knowledge graphs for mathematical algorithms and models that have been developed by the Mathematical Research Data Initiative are presented. They enable FAIR data handling in mathematics and the applied disciplines. Moreover, challenges of harmonization during ontology development are discussed.
MaRDMO Plugin
(2023)
MaRDMO, a plugin for the Research Data Management Organiser, was developed within the Mathematical Research Data Initiative to document interdisciplinary workflows using a standardised scheme. Interdisciplinary workflows recorded this way are published directly on the MaRDI portal. In addition, central information is integrated into the MaRDI knowledge graph. Beyond documentation, MaRDMO offers the possibility to retrieve existing interdisciplinary workflows from the MaRDI Knowledge Graph, allowing the reproduction of the initial work and providing scientists with new research impulses. Thus, MaRDMO creates a community-driven knowledge loop that could help to overcome the replication crisis.
Research data are crucial in mathematics and all scientific disciplines, as they form the foundation for empirical evidence by enabling the validation and reproducibility of scientific findings. Mathematical research data (MathRD) have become vast and complex, and their interdisciplinary potential and abstract nature make them ubiquitous in various scientific fields. The volume of data and the velocity of its creation are rapidly increasing due to advancements in data science and computing power. This complexity extends to other disciplines, resulting in diverse research data and computational models. Thus, proper handling of research data is crucial both within mathematics and for its manifold connections and exchange with other disciplines. The National Research Data Infrastructure (NFDI), funded by the federal and state governments of Germany, consists of discipline-oriented consortia, including the Mathematical Research Data Initiative (MaRDI). MaRDI has been established to develop services, guidelines and outreach measures for all aspects of MathRD, and thus support the mathematical research community. Research data management (RDM) should be an integral component of every scientific project, and is becoming a mandatory component of grants with funding bodies such as the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation). At the core of RDM are the FAIR (Findable, Accessible, Interoperable, and Reusable) principles. This document aims to guide mathematicians and researchers from related disciplines who create RDM plans. It highlights the benefits and opportunities of RDM in mathematics and interdisciplinary studies, showcases examples of diverse MathRD, and suggests technical solutions that meet the requirements of funding agencies with specific examples. The document is regularly updated to reflect the latest developments within the mathematical community represented by MaRDI.
Voronoi Graph - Improved raycasting and integration schemes for high dimensional Voronoi diagrams
(2024)
The computation of Voronoi diagrams, or their dual Delaunay triangulations, is difficult in high dimensions. In a recent publication, Polianskii and Pokorny propose an iterative randomized algorithm facilitating the approximation of Voronoi tessellations in high dimensions. In this paper, we provide an improved vertex search method that is not only exact but even faster than the bisection method that was previously recommended. Building on this, we also provide a depth-first graph-traversal algorithm which allows us to compute the entire Voronoi diagram. This enables us to compare the outcomes with those of classical algorithms like qHull, which we either match or marginally beat in terms of computation time. We furthermore show how the raycasting algorithm naturally lends itself to a Monte Carlo approximation for the volume and boundary integrals of the Voronoi cells, both of which are of importance for finite volume methods. We compare the Monte Carlo methods to the exact polygonal integration, as well as a hybrid approximation scheme.
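The idea behind the Monte Carlo volume estimate can be sketched as follows. This is a generic illustration of raycasting against bisector hyperplanes, not the authors' implementation; in particular, the clipping radius for unbounded cells is an ad hoc assumption. For a convex cell with generator x0, the volume equals the spherical integral of r(u)^d / d, where r(u) is the ray length from x0 to the cell boundary in direction u.

```python
# Monte Carlo volume of one Voronoi cell via raycasting (illustrative sketch).
import numpy as np
from math import gamma, pi

rng = np.random.default_rng(0)
d, n_gen, n_rays = 4, 50, 20000
pts = rng.standard_normal((n_gen, d))   # random generators; we take cell of pts[0]
x0, others = pts[0], pts[1:]

u = rng.standard_normal((n_rays, d))
u /= np.linalg.norm(u, axis=1, keepdims=True)   # uniform directions on S^{d-1}

# Distance from x0 along u to the bisector of (x0, x_j):
#   t = (||x_j - x0||^2 / 2) / <x_j - x0, u>, valid only where the denominator > 0.
diff = others - x0                              # (n_gen-1, d)
num = 0.5 * np.einsum("jd,jd->j", diff, diff)   # (n_gen-1,)
den = u @ diff.T                                # (n_rays, n_gen-1)
t = np.where(den > 1e-12, num / np.maximum(den, 1e-12), np.inf)
r = t.min(axis=1)                               # nearest bisector per ray
r = np.minimum(r, 10.0)                         # clip unbounded directions (assumption)

area_sphere = 2 * pi ** (d / 2) / gamma(d / 2)  # surface area of the unit sphere
vol = area_sphere * np.mean(r ** d) / d
print(f"estimated volume of cell 0: {vol:.3f}")
```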
Introduction and application of a new approach for model-based optical bidirectional measurements
(2024)
The Koopman operator has entered and transformed many research areas in recent years. Although the underlying concept of representing highly nonlinear dynamical systems by infinite-dimensional linear operators has been known for a long time, the availability of large data sets and efficient machine learning algorithms for estimating the Koopman operator from data make this framework extremely powerful and popular. Koopman operator theory allows us to gain insights into the characteristic global properties of a system without requiring detailed mathematical models. We will show how these methods can also be used to analyze complex networks and highlight relationships between Koopman operators and graph Laplacians.
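A common way to estimate a finite-dimensional Koopman approximation from data is extended dynamic mode decomposition (EDMD); the dynamics and the dictionary of observables below are illustrative choices, not those of the paper.

```python
# EDMD sketch: least-squares Koopman matrix from snapshot pairs of a toy map.
import numpy as np

def step(x):                      # toy nonlinear dynamics: a logistic-type map
    return 3.7 * x * (1.0 - x)

def psi(x):                       # dictionary of observables: monomials up to degree 3
    return np.vstack([np.ones_like(x), x, x**2, x**3])

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, 2000)   # snapshots
Y = step(X)                       # time-shifted snapshots

PX, PY = psi(X), psi(Y)           # lifted data matrices (dict_size x n)
K = PY @ np.linalg.pinv(PX)       # least squares: K minimizes ||K Psi_X - Psi_Y||_F

eigvals = np.linalg.eigvals(K)
print("Koopman eigenvalues (dictionary approximation):", np.round(eigvals, 3))
```

The eigenvalues and eigenfunctions of K then approximate the spectral properties of the Koopman operator restricted to the span of the dictionary, which is what makes the data-driven framework so convenient in practice.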