[1803.01271] An Empirical Evaluation of Generic Convolutional and Recurrent Networks for Sequence Modeling

For most deep learning practitioners, sequence modeling is synonymous with recurrent networks. Yet recent results indicate that convolutional architectures can outperform recurrent networks on tasks such as audio synthesis and machine translation. Given a new sequence modeling task or dataset, which architecture should one use? We conduct a systematic evaluation of generic convolutional and recurrent architectures for sequence modeling. The models are evaluated across a broad range of standard tasks that are commonly used to benchmark recurrent networks. Our results indicate that a simple convolutional architecture outperforms canonical recurrent networks such as LSTMs across a diverse range of tasks and datasets, while demonstrating longer effective memory. We conclude that the common association between sequence modeling and recurrent networks should be reconsidered, and convolutional networks should be regarded as a natural starting point for sequence modeling tasks. To assist related work, we make code available at http://github.com/locuslab/TCN.
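The "simple convolutional architecture" the abstract refers to is a temporal convolutional network (TCN): a stack of dilated causal convolutions with residual connections. Below is a minimal PyTorch sketch of one such block; the class name `CausalConvBlock`, the channel sizes, and the padding scheme are illustrative choices, not the authors' exact implementation (see the linked locuslab/TCN repository for that).

```python
# Minimal sketch of a dilated causal convolution block in the spirit of the
# paper's TCN. Layer sizes and names here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CausalConvBlock(nn.Module):
    """One residual block: two dilated causal convolutions with ReLU."""

    def __init__(self, in_ch, out_ch, kernel_size=3, dilation=1):
        super().__init__()
        # Left-pad so the convolution never sees future time steps (causality).
        self.pad = (kernel_size - 1) * dilation
        self.conv1 = nn.Conv1d(in_ch, out_ch, kernel_size, dilation=dilation)
        self.conv2 = nn.Conv1d(out_ch, out_ch, kernel_size, dilation=dilation)
        # 1x1 convolution to match channel counts on the residual path.
        self.downsample = nn.Conv1d(in_ch, out_ch, 1) if in_ch != out_ch else None
        self.relu = nn.ReLU()

    def forward(self, x):  # x: (batch, channels, time)
        out = self.relu(self.conv1(F.pad(x, (self.pad, 0))))
        out = self.relu(self.conv2(F.pad(out, (self.pad, 0))))
        res = x if self.downsample is None else self.downsample(x)
        return self.relu(out + res)


# Stacking blocks with exponentially growing dilation (1, 2, 4, ...) gives a
# receptive field that grows exponentially with depth -- the "longer effective
# memory" the abstract mentions.
tcn = nn.Sequential(
    CausalConvBlock(1, 32, dilation=1),
    CausalConvBlock(32, 32, dilation=2),
    CausalConvBlock(32, 32, dilation=4),
)
x = torch.randn(8, 1, 100)   # batch of 8 univariate sequences, length 100
print(tcn(x).shape)          # torch.Size([8, 32, 100])
```

Unlike an RNN, every output of this stack can be computed in parallel across time steps, which is part of the practical appeal of convolutional sequence models.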

2 mentions: @yellowshippo @go_new_innov
Date: 2020/02/27 23:20

Referring Tweets

@yellowshippo Experimental results showing that, on time-series data, CNN-style networks outperform RNN-style networks across a variety of tasks. t.co/Co7iAYodKM

Related Entries

Read more [1610.02915] Deep Pyramidal Residual Networks
8 users, 1 mentions 2019/04/03 08:17
Read more [1609.04468] Sampling Generative Networks
2 users, 0 mentions 2020/04/28 15:50
Read more Unsupervised Learning of Depth and Ego-Motion from Video
2 users, 1 mentions 2020/05/04 00:52
Read more [1701.06538] Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
10 users, 2 mentions 2020/07/01 03:51
Read more [1602.04938] "Why Should I Trust You?": Explaining the Predictions of Any Classifier
13 users, 4 mentions 2020/07/26 17:21