TY - GEN
A1 - Klebanov, Ilja
A1 - Sikorski, Alexander
A1 - Schütte, Christof
A1 - Röblitz, Susanna
T1 - Empirical Bayes Methods for Prior Estimation in Systems Medicine
N2 - One of the main goals of mathematical modelling in systems medicine is to obtain patient-specific parameterizations and model predictions. In clinical practice, however, the number of available measurements for a single patient is usually limited due to time and cost restrictions, which hampers patient-specific predictions about the outcome of a treatment. On the other hand, data are often available for many patients, in particular if extensive clinical studies have been performed. Therefore, before applying Bayes’ rule separately to the data of each patient (typically with a non-informative prior), it is worthwhile to use empirical Bayes methods to construct an informative prior from all available data. We compare the performance of four priors - a non-informative prior and priors chosen by nonparametric maximum likelihood estimation (NPMLE), by maximum penalized likelihood estimation (MPLE) and by doubly-smoothed maximum likelihood estimation (DS-MLE) - by applying them to a low-dimensional parameter estimation problem in a toy model as well as to a high-dimensional ODE model of the human menstrual cycle, a typical example from systems biology modelling.
T3 - ZIB-Report - 16-57
KW - Parameter estimation
KW - Bayesian inference
KW - Bayesian hierarchical modelling
KW - NPMLE
KW - MPLE
KW - DS-MLE
KW - EM algorithm
KW - Jeffreys prior
KW - reference prior
KW - hyperparameter
KW - hyperprior
KW - principle of maximum entropy
Y1 - 2016
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-61307
SN - 1438-0064
ER -
TY - GEN
A1 - Klebanov, Ilja
A1 - Sikorski, Alexander
A1 - Schütte, Christof
A1 - Röblitz, Susanna
T1 - Empirical Bayes Methods, Reference Priors, Cross Entropy and the EM Algorithm
N2 - When estimating a probability density within the empirical Bayes framework, the non-parametric maximum likelihood estimate (NPMLE) tends to overfit the data. This issue is usually addressed by regularization: a penalization term is subtracted from the marginal log-likelihood before the maximization step, so that the estimate favors smooth solutions, resulting in so-called maximum penalized likelihood estimation (MPLE). The majority of penalizations currently in use are rather arbitrary brute-force solutions, which lack invariance under transformation of the parameters (reparametrization) and measurements. This contradicts the principle that, if the underlying model has several equivalent formulations, the methods of inductive inference should lead to consistent results. Motivated by this principle and taking an information-theoretic point of view, we suggest an entropy-based penalization term that guarantees this kind of invariance. The resulting density estimate can be seen as a generalization of reference priors. Using the reference prior as a hyperprior, on the other hand, is argued to be a poor choice for regularization. We also present an insightful connection between the NPMLE, the cross entropy and the principle of minimum discrimination information, suggesting another method of inference that contains doubly-smoothed maximum likelihood estimation (DS-MLE) as a special case.
T3 - ZIB-Report - 16-56
KW - parameter estimation
KW - Bayesian inference
KW - Bayesian hierarchical modeling
KW - hyperparameter
KW - hyperprior
KW - EM algorithm
KW - NPMLE
KW - MPLE
KW - DS-MLE
KW - principle of maximum entropy
KW - cross entropy
KW - minimum discrimination information
KW - reference prior
KW - Jeffreys prior
Y1 - 2016
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:0297-zib-61230
SN - 1438-0064
ER -