[1804.01947] Sliced-Wasserstein Autoencoder: An Embarrassingly Simple Generative Model

In this paper we study generative modeling via autoencoders while using the elegant geometric properties of the optimal transport (OT) problem and the Wasserstein distances. We introduce Sliced-Wasserstein Autoencoders (SWAE), which are generative models that enable one to shape the distribution of the latent space into any samplable probability distribution without the need for training an adversarial network or defining a closed-form for the distribution. In short, we regularize the autoencoder loss with the sliced-Wasserstein distance between the distribution of the encoded training samples and a predefined samplable distribution. We show that the proposed formulation has an efficient numerical solution that provides similar capabilities to Wasserstein Autoencoders (WAE) and Variational Autoencoders (VAE), while benefiting from an embarrassingly simple implementation.
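The core regularizer described above — the sliced-Wasserstein distance between encoded samples and samples from a chosen prior — reduces to random 1-D projections followed by sorting. The following is a minimal NumPy sketch of that idea, not the authors' reference implementation; the function name and parameters are illustrative.

```python
import numpy as np

def sliced_wasserstein(x, y, n_projections=50, rng=None):
    """Monte Carlo estimate of the sliced 2-Wasserstein distance
    between two equal-sized point clouds x, y of shape (n, d)."""
    rng = np.random.default_rng(rng)
    d = x.shape[1]
    # Draw random directions uniformly on the unit sphere.
    theta = rng.normal(size=(n_projections, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    # Project both sample sets onto each direction: shape (n, n_projections).
    x_proj = x @ theta.T
    y_proj = y @ theta.T
    # In 1-D, the optimal transport plan matches sorted samples,
    # so the squared W2 distance is a mean of squared differences.
    x_sorted = np.sort(x_proj, axis=0)
    y_sorted = np.sort(y_proj, axis=0)
    return np.mean((x_sorted - y_sorted) ** 2)
```

In an SWAE-style training loop, `y` would be fresh samples from the desired (samplable) latent prior and `x` the encoder outputs for a minibatch, with this term added to the reconstruction loss — no adversarial network or closed-form density is needed.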

1 mentions: @chrofieyue
Keywords: autoencoder
Date: 2019/09/13 15:48

Referring Tweets

@chrofieyue @yoshipon0520 Thank you! It seems you can build things like autoencoders with this. t.co/Yw4xWrvaY1

Related Entries

Survey of papers on unsupervised image inspection using Deep Learning (GAN/SVM/Autoencoder, etc.) .pdf
0 users, 0 mentions 2018/10/09 06:53
Variational Autoencoder in Tensorflow - facial expression low dimensional embedding - Machine learni...
0 users, 0 mentions 2018/04/22 03:40
Variational AutoEncoder for time-series data in Keras - A natural neural network learning machine learning
0 users, 0 mentions 2018/09/23 06:23
Variational AutoEncoder explained so simply even a cat could understand
0 users, 0 mentions 2018/06/15 04:30
Intuitively Understanding Variational Autoencoders – Towards Data Science
0 users, 0 mentions 2018/07/08 12:23