TY - JOUR
A1 - Richter, Lorenz
A1 - Sallandt, Leon
A1 - Nüsken, Nikolas
T1 - From continuous-time formulations to discretization schemes: tensor trains and robust regression for BSDEs and parabolic PDEs
JF - Journal of Machine Learning Research
N2 - The numerical approximation of partial differential equations (PDEs) poses formidable challenges in high dimensions since classical grid-based methods suffer from the so-called curse of dimensionality. Recent attempts rely on a combination of Monte Carlo methods and variational formulations, using neural networks for function approximation. Extending previous work (Richter et al., 2021), we argue that tensor trains provide an appealing framework for parabolic PDEs: The combination of reformulations in terms of backward stochastic differential equations and regression-type methods holds the promise of leveraging latent low-rank structures, enabling both compression and efficient computation. Emphasizing a continuous-time viewpoint, we develop iterative schemes, which differ in terms of computational efficiency and robustness. We demonstrate both theoretically and numerically that our methods can achieve a favorable trade-off between accuracy and computational efficiency. While previous methods have been either accurate or fast, we have identified a novel numerical strategy that can often combine both of these aspects.
Y1 - 2024
UR - https://www.jmlr.org/papers/volume25/23-0982/23-0982.pdf
VL - 25
SP - 248
ER -

TY - CHAP
A1 - Vaitl, Lorenz
A1 - Winkler, Ludwig
A1 - Richter, Lorenz
A1 - Kessel, Pan
T1 - Fast and unified path gradient estimators for normalizing flows
T2 - International Conference on Learning Representations 2024
N2 - Recent work shows that path gradient estimators for normalizing flows have lower variance compared to standard estimators for variational inference, resulting in improved training. However, they are often prohibitively more expensive from a computational point of view and cannot be applied to maximum likelihood training in a scalable manner, which severely hinders their widespread adoption. In this work, we overcome these crucial limitations. Specifically, we propose a fast path gradient estimator which improves computational efficiency significantly and works for all normalizing flow architectures of practical relevance. We then show that this estimator can also be applied to maximum likelihood training, for which it has a regularizing effect, as it can take the form of a given target energy function into account. We empirically establish its superior performance and reduced variance for several natural sciences applications.
Y1 - 2024
UR - https://openreview.net/pdf?id=zlkXLb3wpF
ER -

TY - JOUR
A1 - Sun, Jingtong
A1 - Berner, Julius
A1 - Richter, Lorenz
A1 - Zeinhofer, Marius
A1 - Müller, Johannes
A1 - Azizzadenesheli, Kamyar
A1 - Anandkumar, Anima
T1 - Dynamical Measure Transport and Neural PDE Solvers for Sampling
N2 - The task of sampling from a probability density can be approached as transporting a tractable density function to the target, known as dynamical measure transport. In this work, we tackle it through a principled unified framework using deterministic or stochastic evolutions described by partial differential equations (PDEs). This framework incorporates prior trajectory-based sampling methods, such as diffusion models or Schrödinger bridges, without relying on the concept of time-reversals. Moreover, it allows us to propose novel numerical methods for solving the transport task and thus sampling from complicated targets without the need for the normalization constant or data samples. We employ physics-informed neural networks (PINNs) to approximate the respective PDE solutions, implying both conceptual and computational advantages. In particular, PINNs allow for simulation- and discretization-free optimization and can be trained very efficiently, leading to significantly better mode coverage in the sampling task compared to alternative methods. Moreover, they can readily be fine-tuned with Gauss-Newton methods to achieve high accuracy in sampling.
Y1 - 2024
ER -

TY - CHAP
A1 - Winkler, Ludwig
A1 - Richter, Lorenz
A1 - Opper, Manfred
T1 - Bridging discrete and continuous state spaces: Exploring the Ehrenfest process in time-continuous diffusion models
T2 - Proceedings of the 41st International Conference on Machine Learning
N2 - Generative modeling via stochastic processes has led to remarkable empirical results as well as to recent advances in their theoretical understanding. In principle, both space and time of the processes can be discrete or continuous. In this work, we study time-continuous Markov jump processes on discrete state spaces and investigate their correspondence to state-continuous diffusion processes given by SDEs. In particular, we revisit the Ehrenfest process, which converges to an Ornstein-Uhlenbeck process in the infinite state space limit. Likewise, we can show that the time-reversal of the Ehrenfest process converges to the time-reversed Ornstein-Uhlenbeck process. This observation bridges discrete and continuous state spaces and allows us to carry over methods from one setting to the other. Additionally, we suggest an algorithm for training the time-reversal of Markov jump processes which relies on conditional expectations and can thus be directly related to denoising score matching. We demonstrate our methods in multiple convincing numerical experiments.
Y1 - 2024
UR - https://raw.githubusercontent.com/mlresearch/v235/main/assets/winkler24a/winkler24a.pdf
VL - 235
SP - 53017
EP - 53038
ER -

TY - CHAP
A1 - Richter, Lorenz
A1 - Berner, Julius
T1 - Improved sampling via learned diffusions
T2 - International Conference on Learning Representations 2024
N2 - Recently, a series of papers proposed deep learning-based approaches to sample from unnormalized target densities using controlled diffusion processes. In this work, we identify these approaches as special cases of the Schrödinger bridge problem, seeking the most likely stochastic evolution between a given prior distribution and the specified target. We further generalize this framework by introducing a variational formulation based on divergences between path space measures of time-reversed diffusion processes. This abstract perspective leads to practical losses that can be optimized by gradient-based algorithms and includes previous objectives as special cases. At the same time, it allows us to consider divergences other than the reverse Kullback-Leibler divergence, which is known to suffer from mode collapse. In particular, we propose the so-called log-variance loss, which exhibits favorable numerical properties and leads to significantly improved performance across all considered approaches.
Y1 - 2024
UR - https://openreview.net/pdf?id=h4pNROsO06
ER -

TY - JOUR
A1 - Berner, Julius
A1 - Richter, Lorenz
A1 - Ullrich, Karen
T1 - An optimal control perspective on diffusion-based generative modeling
JF - Transactions on Machine Learning Research
N2 - We establish a connection between stochastic optimal control and generative models based on stochastic differential equations (SDEs), such as recently developed diffusion probabilistic models. In particular, we derive a Hamilton-Jacobi-Bellman equation that governs the evolution of the log-densities of the underlying SDE marginals. This perspective allows us to transfer methods from optimal control theory to generative modeling. First, we show that the evidence lower bound is a direct consequence of the well-known verification theorem from control theory. Further, we develop a novel diffusion-based method for sampling from unnormalized densities -- a problem frequently occurring in statistics and computational sciences.
Y1 - 2024
UR - https://openreview.net/forum?id=oYIjw37pTP
ER -

TY - JOUR
A1 - Hartmann, Carsten
A1 - Richter, Lorenz
T1 - Nonasymptotic bounds for suboptimal importance sampling
JF - SIAM/ASA Journal on Uncertainty Quantification
N2 - Importance sampling is a popular variance reduction method for Monte Carlo estimation, where an evident question is how to design good proposal distributions. While in most cases optimal (zero-variance) estimators are theoretically possible, in practice only suboptimal proposal distributions are available, and it can often be observed numerically that these reduce statistical performance significantly, leading to large relative errors and thereby counteracting the original intention. Previous analyses of importance sampling have often focused on asymptotic arguments that work well in a large deviations regime. In this article, we provide lower and upper bounds on the relative error in a nonasymptotic setting. They depend on the deviation of the actual proposal from optimality, and we thus identify potential robustness issues that importance sampling may have, especially in high dimensions. We particularly focus on path sampling problems for diffusion processes with nonvanishing noise, for which generating good proposals comes with additional technical challenges. We provide numerous numerical examples that support our findings and demonstrate the applicability of the derived bounds.
Y1 - 2024
U6 - https://doi.org/10.1137/21M1427760
VL - 12
IS - 2
SP - 309
EP - 346
ER -