[2007.03898] NVAE: A Deep Hierarchical Variational Autoencoder

Normalizing flows, autoregressive models, variational autoencoders (VAEs), and deep energy-based models are among competing likelihood-based frameworks for deep generative learning. Among them, VAEs have the advantage of fast and tractable sampling and easy-to-access encoding networks. However, they are currently outperformed by other models such as normalizing flows and autoregressive models. While the majority of the research in VAEs is focused on the statistical challenges, we explore the orthogonal direction of carefully designing neural architectures for hierarchical VAEs. We propose Nouveau VAE (NVAE), a deep hierarchical VAE built for image generation using depth-wise separable convolutions and batch normalization. NVAE is equipped with a residual parameterization of Normal distributions and its training is stabilized by spectral regularization. We show that NVAE achieves state-of-the-art results among non-autoregressive likelihood-based models on the MNIST, CIFAR-10, CelebA 64, and CelebA HQ datasets and provides a strong baseline on FFHQ.
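The "residual parameterization of Normal distributions" mentioned in the abstract means that each level's approximate posterior is predicted as an offset from the prior at that level, q(z|x) = N(mu_p + Δmu, sigma_p · Δsigma), rather than from scratch. A minimal PyTorch sketch of this idea (tensor layout, function name, and the log-sigma convention are illustrative assumptions, not the official NVlabs/NVAE code):

```python
import torch

def residual_posterior(prior_params: torch.Tensor,
                       delta_params: torch.Tensor):
    """Both tensors are [B, 2*C, H, W], holding (mu, log_sigma) pairs."""
    mu_p, log_sig_p = prior_params.chunk(2, dim=1)   # prior mean / log-std
    dmu, dlog_sig = delta_params.chunk(2, dim=1)     # encoder-predicted offsets
    mu_q = mu_p + dmu                  # residual mean: mu_q = mu_p + delta_mu
    log_sig_q = log_sig_p + dlog_sig   # residual scale: sigma_q = sigma_p * delta_sigma
    z = mu_q + log_sig_q.exp() * torch.randn_like(mu_q)  # reparameterized sample
    return z, mu_q, log_sig_q
```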

14 mentions: @ArashVahdat, @hillbig, @mosko_mule, @AkiraTOSEI, @raghavian, @hillbig, @KevinKaichuang, @AlisonBLowndes
Date: 2020/07/10 00:53

Referring Tweets

@hillbig NVAE improves the VAE architecture, enabling high-resolution, high-quality image generation rivaling other likelihood-based generative models: it uses depthwise conv, Swish, SE, and spectral normalization (important for stabilization), changes the BN momentum, expresses the posterior as a difference from the prior, and uses normalizing flows in the encoder. t.co/H8y0g3ChbL
@AkiraTOSEI t.co/9nd00ZTCII A study on generating high-definition images with a hierarchical VAE. Rather than having each level compute its variance and mean directly, the distribution is designed to incorporate offsets such as a relative mean from the previous level. It also features various tricks, such as Swish activations, SE modules, spectral norm, and depthwise convolutions to widen the receptive field while cutting computation. t.co/BYyUHBQjIv
@ArashVahdat 📢📢📢 Introducing NVAE 📢📢📢 We show that deep hierarchical VAEs w/ carefully designed network architecture, generate high-quality images & achieve SOTA likelihood, even when trained w/ original VAE loss. paper: t.co/L4GuiIKci8 with @jankautz at @NVIDIAAI (1/n) t.co/g6GQT7jkdC
@raghavian Seems like this will be the week VAEs become great all over again (not that they ever stopped being awesome)! SurVAE [1] bridged VAEs and flow-based models. NVAE [2] arrives with impressive results and a couple of neat tricks. [1] t.co/CjFIVPfw9d [2] t.co/2CUj3Dnqf2 t.co/BMqxwudXKK
@AlisonBLowndes Listening to Prof @fdellaert @GaTech talk about blending factor graphs with VAEs on #drones @rssconf @RoboticsSciSys - try it with @NVIDIAAI's t.co/uMpSPwXlUe too. I'm working on putting this on Mars too 🚀#robotics
@KevinKaichuang Cool hierarchical VAE architecture (and some training tricks too!) to generate perceptually realistic images. t.co/uiiG36FGH0 @ArashVahdat @jankautz t.co/vifQ6IOeOw t.co/Op06ZjhQbL
@hillbig NVAE is a VAE with a new architecture, producing high-quality, high-res images; it uses depthwise conv, Swish, SE, and spectral normalization (for stabilization), modifies the momentum parameter of BN, parameterizes the posterior relative to the prior, and approximates the posterior with normalizing flows. t.co/H8y0g3ChbL
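For concreteness, below is a rough PyTorch sketch of a residual cell assembled from the ingredients this tweet lists: BN with a non-default momentum, Swish (SiLU), a depthwise separable convolution, and squeeze-and-excitation. The layer ordering, expansion ratio, and momentum value are illustrative guesses, not a reproduction of the official NVlabs/NVAE cells:

```python
import torch
import torch.nn as nn

class SqueezeExcite(nn.Module):
    """Channel-wise gating: squeeze by global average pool, excite via an MLP."""
    def __init__(self, c: int, r: int = 16):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(c, max(c // r, 1)), nn.SiLU(),
                                nn.Linear(max(c // r, 1), c), nn.Sigmoid())

    def forward(self, x):
        s = x.mean(dim=(2, 3))                     # squeeze: [B, C]
        return x * self.fc(s)[:, :, None, None]    # excite: per-channel scale

class ResidualCell(nn.Module):
    def __init__(self, c: int, expand: int = 6):
        super().__init__()
        e = c * expand
        bn = lambda ch: nn.BatchNorm2d(ch, momentum=0.05)  # modified BN momentum
        self.body = nn.Sequential(
            bn(c),
            nn.Conv2d(c, e, 1),                       # 1x1 expansion
            bn(e), nn.SiLU(),                         # Swish activation
            nn.Conv2d(e, e, 5, padding=2, groups=e),  # depthwise 5x5 conv
            bn(e), nn.SiLU(),
            nn.Conv2d(e, c, 1),                       # 1x1 projection (separable part 2)
            bn(c),
            SqueezeExcite(c),
        )

    def forward(self, x):
        return x + self.body(x)                       # residual connection
```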
@AkiraTOSEI t.co/9nd00ZTCII This is a study of a hierarchical VAE that generates high-definition images. Instead of computing the variance and mean directly at each stage, they design each level's distribution to take into account the relative mean of the previous layer. t.co/FhJYCYHmDL
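One reason this residual design helps, worth spelling out: with q(z|x) = N(μ_p + Δμ, σ_p·Δσ), standard Gaussian-KL algebra gives a per-dimension KL term that depends only on the predicted offsets (notation follows the tweets above, not the paper verbatim):

$$\mathrm{KL}\big(q(z \mid x)\,\|\,p(z)\big) = \tfrac{1}{2}\left(\frac{\Delta\mu^{2}}{\sigma_{p}^{2}} + \Delta\sigma^{2} - \log \Delta\sigma^{2} - 1\right)$$

So when the encoder predicts zero offsets (Δμ = 0, Δσ = 1), the KL vanishes exactly, regardless of how the prior itself moves during training.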
@mosko_mule NVAE: A Deep Hierarchical Variational Autoencoder t.co/nkSO4zeDjk A huge VAE that doesn't blur. It makes the VAE deeply hierarchical and introduces an architecture loaded with residual structures, BN, SE, and Swish; a high-resolution VAE is achieved by stabilizing training with residual Normal distributions, spectral regularization, and a normalizing-flow approximation of the posterior. t.co/8Kt7MCHsEn
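The spectral regularization that several of these tweets credit with stabilizing training penalizes the largest singular value of each layer's weight matrix. A hedged sketch using power iteration (the coefficient, the fresh random start vector, and the Conv2d filter below are illustrative; a practical implementation would keep persistent per-layer power-iteration vectors, as standard spectral-norm code does):

```python
import torch
import torch.nn.functional as F

def spectral_penalty(weight: torch.Tensor, n_iter: int = 1) -> torch.Tensor:
    """Approximate the largest singular value of a conv/linear weight."""
    w = weight.reshape(weight.shape[0], -1)        # flatten to a 2-D matrix
    with torch.no_grad():                          # power iteration is grad-free
        u = F.normalize(torch.randn(w.shape[0], device=w.device), dim=0)
        for _ in range(max(n_iter, 1)):
            v = F.normalize(w.t() @ u, dim=0)
            u = F.normalize(w @ v, dim=0)
    return torch.dot(u, w @ v)   # sigma_max ~= u^T W v, differentiable in W

# Usage sketch: add the penalty over all conv weights to the VAE objective.
# loss = nelbo + 0.01 * sum(spectral_penalty(m.weight)
#                           for m in model.modules()
#                           if isinstance(m, torch.nn.Conv2d))
```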
