TY - JOUR
A1 - Sikorski, Alexander
A1 - Ribera Borrell, Enric
A1 - Weber, Marcus
T1 - Learning Koopman eigenfunctions of stochastic diffusions with optimal importance sampling and ISOKANN
JF - Journal of Mathematical Physics
N2 - The dominant eigenfunctions of the Koopman operator characterize the metastabilities and slow-timescale dynamics of stochastic diffusion processes. In the context of molecular dynamics and Markov state modeling, they allow for a description of the locations and frequencies of rare transitions, which are hard to obtain by direct simulation alone. In this article, we reformulate the eigenproblem in terms of the ISOKANN framework, an iterative algorithm that learns the eigenfunctions by alternating between short burst simulations and a mixture of machine learning and classical numerics, which naturally leads to a proof of convergence. We furthermore show how the intermediate iterates can be used to reduce the sampling variance by importance sampling and optimal control (enhanced sampling), as well as to select locations for further training (adaptive sampling). We demonstrate the use of our proposed method in experiments, increasing the approximation accuracy by several orders of magnitude.
Y1 - 2024
U6 - https://doi.org/10.1063/5.0140764
VL - 65
SP - 013502
ER -

TY - JOUR
A1 - Quer, Jannes
A1 - Ribera Borrell, Enric
T1 - Connecting Stochastic Optimal Control and Reinforcement Learning
JF - Journal of Mathematical Physics
N2 - In this paper, the connection between stochastic optimal control and reinforcement learning is investigated. Our main motivation is to apply importance sampling to the sampling of rare events, which can be reformulated as an optimal control problem. Using a parameterised approach, the optimal control problem becomes a stochastic optimization problem, which still raises open questions about how to scale to high-dimensional problems and how to deal with the intrinsic metastability of the system. To explore new methods, we link the optimal control problem to reinforcement learning, since both share the same underlying framework, namely a Markov decision process (MDP). We show how the optimal control problem can be formulated as an MDP and how it can be interpreted in the framework of reinforcement learning. Finally, we apply two different reinforcement learning algorithms to the optimal control problem and compare their advantages and disadvantages.
Y1 - 2024
U6 - https://doi.org/10.1063/5.0140665
VL - 65
IS - 8
ER -