TY - JOUR
A1 - Klebanov, Ilja
A1 - Sikorski, Alexander
A1 - Schütte, Christof
A1 - Röblitz, Susanna
T1 - Objective priors in the empirical Bayes framework
JF - Scandinavian Journal of Statistics
N2 - When dealing with Bayesian inference, the choice of the prior often remains a debatable question. Empirical Bayes methods offer a data-driven solution to this problem by estimating the prior itself from an ensemble of data. In the nonparametric case, the maximum likelihood estimate is known to overfit the data, an issue that is commonly tackled by regularization. However, the majority of regularizations are ad hoc choices which lack invariance under reparametrization of the model and result in inconsistent estimates for equivalent models. We introduce a nonparametric, transformation-invariant estimator for the prior distribution. Being defined in terms of the missing information, similarly to the reference prior, it can be seen as an extension of the latter to the data-driven setting. This implies a natural interpretation as a trade-off between choosing the least informative prior and incorporating the information provided by the data, a symbiosis between the objective and empirical Bayes methodologies.
Y1 - 2021
U6 - https://doi.org/10.1111/sjos.12485
VL - 48
IS - 4
SP - 1212
EP - 1233
PB - Wiley Online Library
ER -

TY - GEN
A1 - Klebanov, Ilja
A1 - Sikorski, Alexander
A1 - Schütte, Christof
A1 - Röblitz, Susanna
T1 - Prior estimation and Bayesian inference from large cohort data sets
N2 - One of the main goals of mathematical modelling in systems biology related to medical applications is to obtain patient-specific parameterisations and model predictions. In clinical practice, however, the number of available measurements for single patients is usually limited due to time and cost restrictions. This hampers the process of making patient-specific predictions about the outcome of a treatment. On the other hand, data are often available for many patients, in particular if extensive clinical studies have been performed. Using these population data, we propose an iterative algorithm for constructing an informative prior distribution, which then serves as the basis for computing patient-specific posteriors and obtaining individual predictions. We demonstrate the performance of our method by applying it to a low-dimensional parameter estimation problem in a toy model as well as to a high-dimensional ODE model of the human menstrual cycle, which represents a typical example from systems biology modelling.
T3 - ZIB-Report - 16-09
Y1 - 2016
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-57475
SN - 1438-0064
ER -

TY - GEN
A1 - Klebanov, Ilja
A1 - Sikorski, Alexander
A1 - Schütte, Christof
A1 - Röblitz, Susanna
T1 - Empirical Bayes Methods for Prior Estimation in Systems Medicine
N2 - One of the main goals of mathematical modelling in systems medicine is to obtain patient-specific parameterizations and model predictions. In clinical practice, however, the number of available measurements for single patients is usually limited due to time and cost restrictions. This hampers the process of making patient-specific predictions about the outcome of a treatment. On the other hand, data are often available for many patients, in particular if extensive clinical studies have been performed. Therefore, before applying Bayes’ rule separately to the data of each patient (which is typically performed using a non-informative prior), it is meaningful to use empirical Bayes methods in order to construct an informative prior from all available data. We compare the performance of four priors - a non-informative prior and priors chosen by nonparametric maximum likelihood estimation (NPMLE), by maximum penalized likelihood estimation (MPLE), and by doubly-smoothed maximum likelihood estimation (DS-MLE) - by applying them to a low-dimensional parameter estimation problem in a toy model as well as to a high-dimensional ODE model of the human menstrual cycle, which represents a typical example from systems biology modelling.
T3 - ZIB-Report - 16-57
KW - Parameter estimation
KW - Bayesian inference
KW - Bayesian hierarchical modelling
KW - NPMLE
KW - MPLE
KW - DS-MLE
KW - EM algorithm
KW - Jeffreys prior
KW - reference prior
KW - hyperparameter
KW - hyperprior
KW - principle of maximum entropy
Y1 - 2016
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-61307
SN - 1438-0064
ER -

TY - JOUR
A1 - Klebanov, Ilja
A1 - Schuster, Ingmar
A1 - Sullivan, T. J.
T1 - A rigorous theory of conditional mean embeddings
JF - SIAM Journal on Mathematics of Data Science
Y1 - 2020
U6 - https://doi.org/10.1137/19M1305069
VL - 2
IS - 3
SP - 583
EP - 606
ER -

TY - JOUR
A1 - Klebanov, Ilja
A1 - Schuster, Ingmar
T1 - Markov Chain Importance Sampling - a highly efficient estimator for MCMC
JF - Journal of Computational and Graphical Statistics
N2 - Markov chain (MC) algorithms are ubiquitous in machine learning, statistics, and many other disciplines. Typically, these algorithms can be formulated as acceptance-rejection methods. In this work we present a novel estimator applicable to these methods, dubbed Markov chain importance sampling (MCIS), which efficiently makes use of rejected proposals. For the unadjusted Langevin algorithm, it provides a novel way of correcting the discretization error. Our estimator satisfies a central limit theorem and improves on error per CPU cycle, often to a large extent. As a by-product, it enables estimating the normalizing constant, an important quantity in Bayesian machine learning and statistics.
Y1 - 2020
U6 - https://doi.org/10.1080/10618600.2020.1826953
ER -

TY - JOUR
A1 - Klebanov, Ilja
A1 - Sprungk, Björn
A1 - Sullivan, T. J.
T1 - The linear conditional expectation in Hilbert space
JF - Bernoulli
Y1 - 2021
U6 - https://doi.org/10.3150/20-BEJ1308
VL - 27
IS - 4
SP - 2299
EP - 2299
ER -

TY - JOUR
A1 - Ayanbayev, Birzhan
A1 - Klebanov, Ilja
A1 - Lie, Han Cheng
A1 - Sullivan, T. J.
T1 - Γ-convergence of Onsager–Machlup functionals: I. With applications to maximum a posteriori estimation in Bayesian inverse problems
JF - Inverse Problems
Y1 - 2022
U6 - https://doi.org/10.1088/1361-6420/ac3f81
VL - 38
IS - 2
ER -

TY - JOUR
A1 - Ayanbayev, Birzhan
A1 - Klebanov, Ilja
A1 - Lie, Han Cheng
A1 - Sullivan, T. J.
T1 - Γ-convergence of Onsager–Machlup functionals: II. Infinite product measures on Banach spaces
JF - Inverse Problems
Y1 - 2022
U6 - https://doi.org/10.1088/1361-6420/ac3f82
VL - 38
IS - 2
ER -

TY - GEN
A1 - Klebanov, Ilja
A1 - Sikorski, Alexander
A1 - Schütte, Christof
A1 - Röblitz, Susanna
T1 - Empirical Bayes Methods, Reference Priors, Cross Entropy and the EM Algorithm
N2 - When estimating a probability density within the empirical Bayes framework, the non-parametric maximum likelihood estimate (NPMLE) usually tends to overfit the data. This issue is commonly addressed by regularization: a penalization term is subtracted from the marginal log-likelihood before the maximization step, so that the estimate favors smooth solutions, resulting in so-called maximum penalized likelihood estimation (MPLE). The majority of penalizations currently in use are rather arbitrary brute-force solutions which lack invariance under transformation of the parameters (reparametrization) and measurements. This contradicts the principle that, if the underlying model has several equivalent formulations, the methods of inductive inference should lead to consistent results. Motivated by this principle and using an information-theoretic point of view, we suggest an entropy-based penalization term that guarantees this kind of invariance. The resulting density estimate can be seen as a generalization of reference priors. Using the reference prior as a hyperprior, on the other hand, is argued to be a poor choice for regularization. We also present an insightful connection between the NPMLE, the cross entropy, and the principle of minimum discrimination information, suggesting another method of inference that contains doubly-smoothed maximum likelihood estimation (DS-MLE) as a special case.
T3 - ZIB-Report - 16-56
KW - parameter estimation
KW - Bayesian inference
KW - Bayesian hierarchical modeling
KW - hyperparameter
KW - hyperprior
KW - EM algorithm
KW - NPMLE
KW - MPLE
KW - DS-MLE
KW - principle of maximum entropy
KW - cross entropy
KW - minimum discrimination information
KW - reference prior
KW - Jeffreys prior
Y1 - 2016
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-61230
SN - 1438-0064
ER -