TY - CHAP
A1 - Blessing, Denis
A1 - Berner, Julius
A1 - Richter, Lorenz
A1 - Neumann, Gerhard
T1 - Underdamped Diffusion Bridges with Applications to Sampling
T2 - 13th International Conference on Learning Representations (ICLR 2025)
N2 - We provide a general framework for learning diffusion bridges that transport prior distributions to target distributions. It includes existing diffusion models for generative modeling, but also underdamped versions with degenerate diffusion matrices, where the noise only acts in certain dimensions. Extending previous findings, our framework allows us to rigorously show that score matching in the underdamped case is indeed equivalent to maximizing a lower bound on the likelihood. Motivated by the superior convergence properties of underdamped stochastic processes and their compatibility with sophisticated numerical integration schemes, we propose underdamped diffusion bridges, where a general density evolution is learned rather than prescribed by a fixed noising process. We apply our method to the challenging task of sampling from unnormalized densities without access to samples from the target distribution. Across a diverse range of sampling problems, our approach demonstrates state-of-the-art performance, notably outperforming alternative methods while requiring significantly fewer discretization steps and no hyperparameter tuning.
Y1 - 2025
UR - https://openreview.net/attachment?id=Q1QTxFm0Is&name=pdf
ER -
TY - JOUR
A1 - Berner, Julius
A1 - Richter, Lorenz
A1 - Sendera, Marcin
A1 - Rector-Brooks, Jarrid
A1 - Malkin, Nikolay
T1 - From discrete-time policies to continuous-time diffusion samplers: Asymptotic equivalences and faster training
N2 - We study the problem of training neural stochastic differential equations, or diffusion models, to sample from a Boltzmann distribution without access to target samples. Existing methods for training such models enforce time-reversal of the generative and noising processes, using either differentiable simulation or off-policy reinforcement learning (RL). We prove equivalences between families of objectives in the limit of infinitesimal discretization steps, linking entropic RL methods (GFlowNets) with continuous-time objects (partial differential equations and path space measures). We further show that an appropriate choice of coarse time discretization during training allows greatly improved sample efficiency and the use of time-local objectives, achieving competitive performance on standard sampling benchmarks with reduced computational cost.
Y1 - 2025
ER -
TY - CHAP
A1 - Chen, Junhua
A1 - Richter, Lorenz
A1 - Berner, Julius
A1 - Blessing, Denis
A1 - Neumann, Gerhard
A1 - Anandkumar, Anima
T1 - Sequential Controlled Langevin Diffusions
T2 - 13th International Conference on Learning Representations (ICLR 2025)
N2 - An effective approach for sampling from unnormalized densities is based on the idea of gradually transporting samples from an easy prior to the complicated target distribution. Two popular methods are (1) Sequential Monte Carlo (SMC), where the transport is performed through successive annealed densities via prescribed Markov chains and resampling steps, and (2) recently developed diffusion-based sampling methods, where a learned dynamical transport is used. Despite the common goal, both approaches have different, often complementary, advantages and drawbacks. The resampling steps in SMC allow one to focus on promising regions of the space, often leading to robust performance.
While the algorithm enjoys asymptotic guarantees, the lack of flexible, learnable transitions can lead to slow convergence. On the other hand, diffusion-based samplers are learned and can potentially better adapt to the target at hand, yet often suffer from training instabilities. In this work, we present a principled framework for combining SMC with diffusion-based samplers by viewing both methods in continuous time and considering measures on path space. This culminates in the new Sequential Controlled Langevin Diffusion (SCLD) sampling method, which combines the benefits of both approaches and achieves improved performance on multiple benchmark problems, in many cases using only 10% of the training budget of previous diffusion-based samplers.
Y1 - 2025
UR - https://openreview.net/pdf?id=dImD2sgy86
ER -
TY - JOUR
A1 - Reuss, Joana
A1 - Macdonald, Jan
A1 - Becker, Simon
A1 - Schultka, Konrad
A1 - Richter, Lorenz
A1 - Körner, Marco
T1 - Meta-learning For Few-Shot Time Series Crop Type Classification: A Benchmark On The EuroCropsML Dataset
N2 - Spatial imbalances in crop type data pose significant challenges for accurate classification in remote sensing applications. Algorithms aiming to transfer knowledge from data-rich to data-scarce tasks have thus surged in popularity. However, despite their effectiveness in previous evaluations, their performance in challenging real-world applications remains unclear. This study benchmarks transfer learning and several meta-learning algorithms, including (First-Order) Model-Agnostic Meta-Learning ((FO)-MAML), Almost No Inner Loop (ANIL), and Task-Informed Meta-Learning (TIML), on the real-world EuroCropsML time series dataset, which combines farmer-reported crop data with Sentinel-2 satellite observations from Estonia, Latvia, and Portugal. Our findings indicate that MAML-based meta-learning algorithms achieve slightly higher accuracy than simpler transfer learning methods when applied to crop type classification tasks in Estonia after pre-training on data from Latvia. However, this improvement comes at the cost of increased computational demands and training time. Moreover, we find that the transfer of knowledge between geographically disparate regions, such as Estonia and Portugal, poses significant challenges to all investigated algorithms. These insights underscore the trade-offs between accuracy and computational resource requirements in selecting machine learning methods for real-world crop type classification tasks, and they highlight the difficulties of transferring knowledge between different regions of the Earth. To facilitate future research in this domain, we present the first comprehensive benchmark for evaluating transfer and meta-learning methods for crop type classification under real-world conditions. The corresponding code is publicly available at this https URL.
Y1 - 2025
ER -
TY - GEN
A1 - Raharinirina, N. Alexia
A1 - Weber, Marcus
A1 - Birk, Ralph
A1 - Fackeldey, Konstantin
A1 - Klasse, Sarah M.
A1 - Richter, Tonio Sebastian
T1 - Different Tools and Results for Correspondence Analysis
N2 - This is a list of codes generated from ancient Egyptian texts. The codes are used for a correspondence analysis (CA). The codes and the CA software are available from the linked webpage.
Y1 - 2021
U6 - https://doi.org/10.12752/8257
N1 - A detailed description of the software can be found in the code repository at https://github.com/AlexiaNomena/Correspondence_Analysis_User_Friendly (the repository version of the CA software might include updates).
ER -
TY - JOUR
A1 - Ribera Borrell, Enric
A1 - Quer, Jannes
A1 - Richter, Lorenz
A1 - Schütte, Christof
T1 - Improving control based importance sampling strategies for metastable diffusions via adapted metadynamics
JF - SIAM Journal on Scientific Computing (SISC)
N2 - Sampling rare events in metastable dynamical systems is often a computationally expensive task, and one needs to resort to enhanced sampling methods such as importance sampling. Since the problem of finding optimal importance sampling controls can be formulated as a stochastic optimization problem, additional numerical challenges arise, and the convergence of the corresponding algorithms may itself suffer from metastability. In this article, we address this issue by combining systematic control approaches with the heuristic adaptive metadynamics method. Crucially, we approximate the importance sampling control by a neural network, which makes the algorithm in principle feasible for high-dimensional applications. We demonstrate numerically on relevant metastable problems that our algorithm is more effective than previous attempts and that only the combination of the two approaches leads to satisfactory convergence, and hence to efficient sampling, in certain metastable settings.
KW - importance sampling
KW - stochastic optimal control
KW - rare event simulation
KW - metastability
KW - neural networks
KW - metadynamics
Y1 - 2023
U6 - https://doi.org/10.1137/22M1503464
VL - 89
IS - 1
ER -
TY - JOUR
A1 - Berner, Julius
A1 - Richter, Lorenz
A1 - Ullrich, Karen
T1 - An optimal control perspective on diffusion-based generative modeling
JF - Transactions on Machine Learning Research
N2 - We establish a connection between stochastic optimal control and generative models based on stochastic differential equations (SDEs), such as recently developed diffusion probabilistic models. In particular, we derive a Hamilton-Jacobi-Bellman equation that governs the evolution of the log-densities of the underlying SDE marginals. This perspective allows us to transfer methods from optimal control theory to generative modeling. First, we show that the evidence lower bound is a direct consequence of the well-known verification theorem from control theory. Further, we develop a novel diffusion-based method for sampling from unnormalized densities -- a problem frequently occurring in statistics and computational sciences.
Y1 - 2024
UR - https://openreview.net/forum?id=oYIjw37pTP
ER -
TY - JOUR
A1 - Nüsken, Nikolas
A1 - Richter, Lorenz
T1 - Interpolating between BSDEs and PINNs: deep learning for elliptic and parabolic boundary value problems
JF - Journal of Machine Learning
N2 - Solving high-dimensional partial differential equations is a recurrent challenge in economics, science and engineering. In recent years, a great number of computational approaches have been developed, most of them relying on a combination of Monte Carlo sampling and deep-learning-based approximation. For elliptic and parabolic problems, existing methods can broadly be classified into those resting on reformulations in terms of backward stochastic differential equations (BSDEs) and those aiming to minimize a regression-type L2-error (physics-informed neural networks, PINNs).
In this paper, we review the literature and suggest a methodology based on the novel diffusion loss that interpolates between BSDEs and PINNs. Our contribution opens the door towards a unified understanding of numerical approaches for high-dimensional PDEs, as well as for implementations that combine the strengths of BSDEs and PINNs. The diffusion loss furthermore bears close similarities to (least squares) temporal difference objectives found in reinforcement learning. We also discuss eigenvalue problems and perform extensive numerical studies, including calculations of the ground state for nonlinear Schrödinger operators and committor functions relevant in molecular dynamics.
Y1 - 2023
U6 - https://doi.org/10.4208/jml.220416
VL - 2
IS - 1
SP - 31
EP - 64
ER -
TY - JOUR
A1 - Hartmann, Carsten
A1 - Richter, Lorenz
T1 - Nonasymptotic bounds for suboptimal importance sampling
JF - SIAM/ASA Journal on Uncertainty Quantification
N2 - Importance sampling is a popular variance reduction method for Monte Carlo estimation, where a central question is how to design good proposal distributions. While optimal (zero-variance) estimators are theoretically possible in most cases, in practice only suboptimal proposal distributions are available, and it can often be observed numerically that these reduce statistical performance significantly, leading to large relative errors and thereby counteracting the original intention. Previous analyses of importance sampling have often focused on asymptotic arguments that work well in a large-deviations regime. In this article, we provide lower and upper bounds on the relative error in a nonasymptotic setting. They depend on the deviation of the actual proposal from optimality, and we thus identify potential robustness issues that importance sampling may have, especially in high dimensions. We particularly focus on path sampling problems for diffusion processes with nonvanishing noise, for which generating good proposals comes with additional technical challenges. We provide numerous numerical examples that support our findings and demonstrate the applicability of the derived bounds.
Y1 - 2024
U6 - https://doi.org/10.1137/21M1427760
VL - 12
IS - 2
SP - 309
EP - 346
ER -
TY - CHAP
A1 - Richter, Lorenz
A1 - Berner, Julius
T1 - Robust SDE-Based Variational Formulations for Solving Linear PDEs via Deep Learning
T2 - Proceedings of the 39th International Conference on Machine Learning, PMLR
N2 - The combination of Monte Carlo methods and deep learning has recently led to efficient algorithms for solving partial differential equations (PDEs) in high dimensions. Related learning problems are often stated as variational formulations based on associated stochastic differential equations (SDEs), which allow the minimization of corresponding losses using gradient-based optimization methods. In the respective numerical implementations, it is therefore crucial to rely on adequate gradient estimators that exhibit low variance in order to reach convergence accurately and swiftly. In this article, we rigorously investigate corresponding numerical aspects that appear in the context of linear Kolmogorov PDEs. In particular, we systematically compare existing deep learning approaches and provide theoretical explanations for their performance. Subsequently, we suggest novel methods that can be shown to be more robust both theoretically and numerically, leading to substantial performance improvements.
Y1 - 2022
VL - 162
SP - 18649
EP - 18666
ER -