[2102.09310] VAE Approximation Error: ELBO and Conditional Independence

The importance of Variational Autoencoders reaches far beyond standalone generative models -- the approach is also used for learning latent representations and can be generalized to semi-supervised learning. This requires a thorough analysis of their commonly known shortcomings: posterior collapse and approximation errors. This paper analyzes VAE approximation errors caused by the combination of the ELBO objective with the choice of the encoder probability family, in particular under conditional independence assumptions. We identify the subclass of generative models consistent with the encoder family. We show that the ELBO optimizer is pulled away from the likelihood optimizer towards this consistent subset. Furthermore, this subset cannot be enlarged, nor the respective error decreased, merely by considering deeper encoder networks.
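To make the objects in the abstract concrete, here is a minimal sketch (not the paper's implementation) of the ELBO for a VAE whose encoder is a fully factorized Gaussian -- the conditional-independence assumption discussed above. The `decode` function, unit-variance Gaussian likelihood, and single-sample Monte Carlo estimate are illustrative assumptions.

```python
import numpy as np

def gaussian_kl(mu, log_var):
    # Closed-form KL( N(mu, diag(sigma^2)) || N(0, I) ) for a
    # fully factorized (conditionally independent) Gaussian encoder.
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

def elbo(x, decode, mu, log_var, rng, n_samples=1):
    # Monte Carlo estimate of the ELBO:
    #   E_q[log p(x|z)] - KL( q(z|x) || p(z) )
    # `decode` is any hypothetical decoder mapping z to a reconstruction.
    recon = 0.0
    for _ in range(n_samples):
        eps = rng.standard_normal(mu.shape)
        z = mu + np.exp(0.5 * log_var) * eps   # reparameterization trick
        x_hat = decode(z)
        # Gaussian likelihood with unit variance, up to an additive constant.
        recon += -0.5 * np.sum((x - x_hat) ** 2)
    return recon / n_samples - gaussian_kl(mu, log_var)
```

Because the KL term vanishes only when the encoder posterior matches the factorized prior, maximizing this bound rewards generative models that are consistent with the factorized encoder family -- the pull towards the consistent subset that the paper analyzes.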

Keywords: vae
Date: 2021/02/19 15:51
