[2010.05767] Discrete Latent Space World Models for Reinforcement Learning

Sample efficiency remains a fundamental issue in reinforcement learning. Model-based algorithms try to make better use of data by simulating the environment with a learned model. We propose a new neural network architecture for world models based on a vector-quantized variational autoencoder (VQ-VAE) to encode observations and a convolutional LSTM to predict the next embedding indices. A model-free PPO agent is trained purely on simulated experience from the world model. We adopt the setup introduced by Kaiser et al. (2020), which allows only 100K interactions with the real environment, and show that we outperform their SimPLe algorithm in five of six randomly selected Atari environments, while our model is significantly smaller.
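Concretely, the architecture pairs two standard components: a VQ-VAE bottleneck that compresses each observation into a grid of discrete codebook indices, and a convolutional LSTM that predicts the index grid of the next frame. The sketch below illustrates both pieces, assuming PyTorch; the module names, channel counts, and codebook size (512) are illustrative assumptions, not the authors' exact configuration.

    import torch
    import torch.nn as nn

    class VectorQuantizer(nn.Module):
        # VQ-VAE bottleneck: snap each spatial feature vector to its nearest
        # codebook entry; gradients bypass the argmin via the straight-through
        # trick. (Codebook/commitment losses are omitted for brevity.)
        def __init__(self, num_codes=512, dim=64):
            super().__init__()
            self.codebook = nn.Embedding(num_codes, dim)

        def forward(self, z):                                # z: (B, dim, H, W)
            b, d, h, w = z.shape
            flat = z.permute(0, 2, 3, 1).reshape(-1, d)      # (B*H*W, dim)
            dists = torch.cdist(flat, self.codebook.weight)  # (B*H*W, K)
            idx = dists.argmin(dim=1)                        # discrete latents
            quant = self.codebook(idx).view(b, h, w, d).permute(0, 3, 1, 2)
            quant = z + (quant - z).detach()                 # straight-through
            return quant, idx.view(b, h, w)

    class ConvLSTMCell(nn.Module):
        # Convolutional LSTM: all four gates come from one convolution over
        # the concatenated input and hidden state, preserving spatial layout.
        def __init__(self, in_ch, hid_ch, k=3):
            super().__init__()
            self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k,
                                   padding=k // 2)

        def forward(self, x, state):                       # x: (B, in_ch, H, W)
            h, c = state
            i, f, o, g = self.gates(torch.cat([x, h], dim=1)).chunk(4, dim=1)
            c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
            h = torch.sigmoid(o) * torch.tanh(c)
            return h, (h, c)

    # Dynamics head (hypothetical sizes): map the ConvLSTM hidden state to
    # logits over the 512 codebook indices at each spatial position of the
    # next frame, trained with cross-entropy against the encoder's indices
    # for frame t+1.
    next_index_head = nn.Conv2d(128, 512, kernel_size=1)   # (B, K, H, W)

At rollout time, the predicted indices can be looked up in the codebook and decoded back to observations, so the PPO agent trains entirely on imagined trajectories; the 100K real interactions are needed only to fit the world model itself.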

1 mention: @hardmaru
Date: 2020/10/14 06:53

Referring Tweets

@hardmaru: Discrete Latent Space World Models for Reinforcement Learning. By training PPO inside a simple and small world model consisting of an LSTM predicting VQ-VAE codes, they get good Atari performance vs. the much larger SimPLe (2019) model within 100K interactions. t.co/rx9MfIhLvM t.co/q30arZuRvR

Related Entries

Read more: GitHub - AdeelMufti/DifferentiableNeuralComputer: Optimized Differentiable Neural Computer In Chaine...
0 users, 1 mention 2019/04/16 02:18
Read more: [2002.00632] Effective Diversity in Population-Based Reinforcement Learning
0 users, 1 mention 2020/02/14 06:51
Read more: Vision for Agriculture - Dataset
0 users, 1 mention 2020/02/27 12:52
Read more: [2002.09604] Emergent Communication with World Models
0 users, 1 mention 2020/03/02 08:20
Read more: [2003.05325] Meta-learning curiosity algorithms
0 users, 1 mention 2020/03/27 06:51