TY - JOUR
A1 - Lavda, Frantzeska
A1 - Gregorová, Magda
A1 - Kalousis, Alexandros
T1 - Data-Dependent Conditional Priors for Unsupervised Learning of Multimodal Data
JF - Entropy
N2 - One of the major shortcomings of variational autoencoders is the inability to produce generations from the individual modalities of data originating from mixture distributions. This is primarily due to the use of a simple isotropic Gaussian as the prior for the latent code in the ancestral sampling procedure for data generation. In this paper, we propose a novel formulation of variational autoencoders, the conditional prior VAE (CP-VAE), with a two-level generative process for the observed data in which a continuous variable 𝐳 and a discrete variable 𝐜 are introduced in addition to the observed variables 𝐱. By learning data-dependent conditional priors, the new variational objective naturally encourages a better match between the posterior and prior conditionals, and the learning of latent categories encoding the major source of variation in the original data in an unsupervised manner. By sampling the continuous latent code from the data-dependent conditional priors, we are able to generate new samples from the individual mixture components corresponding to the multimodal structure of the original data. Moreover, we unify and analyse our objective under different independence assumptions for the joint distribution of the continuous and discrete latent variables. We provide an empirical evaluation on one synthetic dataset and three image datasets, FashionMNIST, MNIST, and Omniglot, illustrating the generative performance of our new model compared to multiple baselines.
Y1 - 2020
U6 - https://doi.org/10.3390/e22080888
VL - 22
IS - 8
SP - 888
EP - 888
ER -
TY - CHAP
A1 - Lavda, Frantzeska
A1 - Gregorová, Magda
A1 - Kalousis, Alexandros
ED - De Giacomo, Giuseppe
ED - Catalá, Alejandro
ED - Dilkina, Bistra
ED - Milano, Michela
ED - Barro, Senén
ED - Bugarín, Alberto
ED - Lang, Jérôme
T1 - Improving VAE Generations of Multimodal Data Through Data-Dependent Conditional Priors
T2 - ECAI 2020 - 24th European Conference on Artificial Intelligence, Santiago de Compostela, Spain, August 29 - September 8, 2020 - Including 10th Conference on Prestigious Applications of Artificial Intelligence (PAIS 2020)
N2 - One of the major shortcomings of variational autoencoders is the inability to produce generations from the individual modalities of data originating from mixture distributions. This is primarily due to the use of a simple isotropic Gaussian as the prior for the latent code in the ancestral sampling procedure for data generation. We propose a novel formulation of variational autoencoders, the conditional prior VAE (CP-VAE), which learns to differentiate between the individual mixture components and therefore allows for generations from the distributional data clusters. We assume a two-level generative process with a continuous (Gaussian) latent variable sampled conditionally on a discrete (categorical) latent component. The new variational objective naturally couples the learning of the posterior and prior conditionals, and the learning of the latent categories encoding the multimodality of the original data in an unsupervised manner. The data-dependent conditional priors are then used to sample the continuous latent code when generating new samples from the individual mixture components corresponding to the multimodal structure of the original data. Our experimental results illustrate the generative performance of our new model compared to multiple baselines.
Y1 - 2020
U6 - https://doi.org/10.3233/FAIA200226
VL - 325
SP - 1254
EP - 1261
ER -
TY - JOUR
A1 - Lavda, Frantzeska
A1 - Gregorová, Magda
A1 - Kalousis, Alexandros
T1 - Improving VAE generations of multimodal data through data-dependent conditional priors
JF - CoRR
Y1 - 2019
VL - abs/1911.10885
ER -
TY - JOUR
A1 - Lavda, Frantzeska
A1 - Ramapuram, Jason
A1 - Gregorová, Magda
A1 - Kalousis, Alexandros
T1 - Continual Classification Learning Using Generative Models
JF - CoRR
N2 - Continual learning is the ability to sequentially learn over time by accommodating knowledge while retaining previously learned experiences. Neural networks can learn multiple tasks when trained on them jointly, but cannot maintain performance on previously learned tasks when tasks are presented one at a time. This problem is called catastrophic forgetting. In this work, we propose a classification model that learns continuously from sequentially observed tasks, while preventing catastrophic forgetting. We build on the lifelong generative capabilities of [10] and extend them to the classification setting by deriving a new variational bound on the joint log-likelihood, $\log p(x, y)$.
Y1 - 2018
U6 - https://doi.org/10.48550/arXiv.1810.10612
VL - abs/1810.10612
ER -
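The CP-VAE abstracts above all describe the same two-level ancestral sampling path: draw a discrete component 𝐜, draw the continuous code 𝐳 from the learned data-dependent conditional prior p(𝐳 | 𝐜), then decode to data space. The following is a minimal PyTorch sketch of that generative path only; the module names (prior_net, decoder), the network sizes, and the one-hot encoding of 𝐜 are illustrative assumptions, not details taken from the papers.

    # Sketch of CP-VAE ancestral sampling. `prior_net` maps a one-hot
    # component c to the parameters of the data-dependent conditional
    # prior p(z | c); `decoder` maps a latent code z to data space.
    # Both names are hypothetical; the papers' implementations differ.
    import torch
    import torch.nn as nn

    K, LATENT_DIM, DATA_DIM = 10, 20, 784  # e.g. MNIST-sized outputs

    prior_net = nn.Linear(K, 2 * LATENT_DIM)  # -> (mu_c, log sigma_c)
    decoder = nn.Sequential(
        nn.Linear(LATENT_DIM, 400), nn.ReLU(),
        nn.Linear(400, DATA_DIM), nn.Sigmoid(),
    )

    def generate(component: int, n_samples: int = 16) -> torch.Tensor:
        """Generate from one mixture component via the two-level process:
        fix c, sample z ~ N(mu_c, sigma_c^2 I), then decode x = decoder(z)."""
        c = torch.zeros(n_samples, K)
        c[:, component] = 1.0  # one-hot discrete latent
        mu, log_sigma = prior_net(c).chunk(2, dim=-1)
        z = mu + log_sigma.exp() * torch.randn_like(mu)  # reparameterized draw
        return decoder(z)

    samples = generate(component=3)  # new samples from mixture component 3

Because each component has its own learned prior parameters rather than a shared isotropic Gaussian, sampling with a fixed 𝐜 targets a single mode of the data distribution, which is the generation ability the abstracts claim over the standard VAE.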