[1908.11527] Implicit Deep Latent Variable Models for Text Generation

Deep latent variable models (LVM) such as the variational auto-encoder (VAE) have recently played an important role in text generation. One key factor is the exploitation of smooth latent structures to guide the generation. However, the representation power of VAEs is limited for two reasons: (1) a Gaussian assumption is typically made on the variational posteriors, and (2) a notorious "posterior collapse" issue occurs. In this paper, we advocate sample-based representations of variational distributions for natural language, leading to implicit latent features that offer more flexible representation power than Gaussian-based posteriors. We further develop an LVM that directly matches the aggregated posterior to the prior. It can be viewed as a natural extension of VAEs with a regularization that maximizes mutual information, mitigating the "posterior collapse" issue. We demonstrate the effectiveness and versatility of our models in various text generation scenarios, including language modeling, unaligned style transfer, and dialog response generation.
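
For intuition, here is a minimal sketch (not the authors' released code) of the two ideas in the abstract: a sample-based "implicit" posterior that pushes noise through a network instead of predicting a Gaussian mean and variance, and a regularizer that matches the aggregated posterior to the prior. All names (ImplicitEncoder, mmd_penalty, dimensions) are illustrative assumptions, and the MMD penalty is one convenient sample-based divergence; the paper itself estimates a KL term between the aggregated posterior and the prior with an auxiliary dual network.

```python
# Sketch only: implicit (sample-based) posterior + aggregated-posterior matching.
import torch
import torch.nn as nn

class ImplicitEncoder(nn.Module):
    """Sample-based posterior: z = G(x, eps) with eps ~ N(0, I).

    No Gaussian form is imposed on q(z|x); its samples are defined
    implicitly by pushing noise through a neural network.
    """
    def __init__(self, x_dim, z_dim, noise_dim=32, hidden=256):
        super().__init__()
        self.noise_dim = noise_dim
        self.net = nn.Sequential(
            nn.Linear(x_dim + noise_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, z_dim),
        )

    def forward(self, x):
        eps = torch.randn(x.size(0), self.noise_dim, device=x.device)
        return self.net(torch.cat([x, eps], dim=-1))  # one sample of q(z|x)

def mmd_penalty(z_q, z_p, scales=(0.1, 0.5, 1.0, 2.0)):
    """Match the *aggregated* posterior q(z) to the prior p(z) with a
    multi-scale RBF maximum mean discrepancy, computed from samples only
    (a stand-in for the paper's dual-form KL estimate)."""
    def rbf(a, b):
        d2 = torch.cdist(a, b).pow(2)
        return sum(torch.exp(-d2 / (2 * s * s)) for s in scales)
    n, m = z_q.size(0), z_p.size(0)
    return (rbf(z_q, z_q).sum() / (n * n)
            + rbf(z_p, z_p).sum() / (m * m)
            - 2 * rbf(z_q, z_p).sum() / (n * m))

if __name__ == "__main__":
    enc = ImplicitEncoder(x_dim=64, z_dim=16)
    x = torch.randn(128, 64)        # stand-in for encoded sentences
    z = enc(x)                      # implicit posterior samples
    prior = torch.randn_like(z)     # samples from p(z) = N(0, I)
    reg = mmd_penalty(z, prior)     # drives aggregated q(z) toward p(z)
    print(float(reg))
```

In training, this regularizer would be added to the reconstruction loss in place of the per-sample KL term of a standard VAE, which is what lets the model avoid both the Gaussian restriction and the usual posterior-collapse pressure.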

1 mention: @DanilBaibak
Date: 2019/09/09 09:47

Referring Tweets

@DanilBaibak Implicit Deep Latent Variable Models for Text Generation #NeuralNetworks #DeepLearning #NLP #PyTorch Paper: https://t.co/exqothhJ5U Code: https://t.co/T9eDhqRI6l https://t.co/pVxmrzUaJt

Related Entries

[DL Reading Group] Adversarial Text Generation via Feature-Mover's Distance (NIPS…
0 users, 0 mentions 2018/11/12 11:14
[DL Reading Group] Adversarial Feature Matching for Text Generation
0 users, 0 mentions 2018/04/23 11:41
Posting on ArXiv is good, flag planting notwithstanding. This piece by Yoav Goldberg has been widel...
0 users, 0 mentions 2018/04/22 03:40
GitHub - google/sentencepiece: Unsupervised text tokenizer for Neural Network-based text generation.
99 users, 1 mentions 2019/01/27 02:16
[1909.03622] Transfer Reward Learning for Policy Gradient-Based Text Generation
0 users, 1 mentions 2019/09/10 08:18