[1603.07285] A guide to convolution arithmetic for deep learning

We introduce a guide to help deep learning practitioners understand and manipulate convolutional neural network architectures. The guide clarifies the relationship between various properties (input shape, kernel shape, zero padding, strides and output shape) of convolutional, pooling and transposed convolutional layers, as well as the relationship between convolutional and transposed convolutional layers. Relationships are derived for various cases, and are illustrated in order to make them intuitive.
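The central relationships the guide derives can be sketched in a few lines. Below is a minimal illustration, assuming square inputs and kernels, of the standard output-size formula for a convolution and the corresponding formula for a transposed convolution (with no output padding); function names are my own.

```python
import math

def conv_output_size(i, k, p, s):
    """Output size of a convolution over an input of size i with
    kernel size k, zero padding p and stride s."""
    return math.floor((i + 2 * p - k) / s) + 1

def transposed_conv_output_size(i, k, p, s):
    """Output size of the matching transposed convolution
    (no output padding): it inverts the shape mapping above."""
    return s * (i - 1) + k - 2 * p

# A 5x5 input, 3x3 kernel, padding 1, stride 2 gives a 3x3 output,
# and the transposed convolution maps that 3x3 back to 5x5.
print(conv_output_size(5, 3, 1, 2))
print(transposed_conv_output_size(3, 3, 1, 2))
```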

2 mentions: @sei_shinagawa, @taku_buntu
Keywords: deep learning
Date: 2019/03/03 08:17

Referring Tweets

@sei_shinagawa As I recall, the argument about "deconvolution" in DNNs was: (1) the term is already used in signal processing for image restoration, so it is unfortunate that the upsampling operation in deep learning (the reverse of convolution) shares the same name, and (2) the actual operation can be expressed as the transpose of a convolution (t.co/9KDWG4ma4v, p. 20), so "transposed convolution" is the more appropriate name. t.co/f1YztjGndX
@taku_buntu The various convolution methods are summarized with easy-to-understand figures. It is not about architectures like DenseNet; rather, it covers the various padding methods and similar details. t.co/zK4bsgO14l
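The first tweet's point (2) is easy to verify concretely: a convolution over a flattened input can be written as multiplication by a sparse matrix C, and the transposed convolution is then just multiplication by C^T, which maps the output shape back to the input shape. A minimal sketch, assuming a 4x4 input, a 3x3 kernel, no padding, and stride 1 (the helper `conv_matrix` is my own illustration):

```python
import numpy as np

def conv_matrix(kernel, input_size=4):
    """Build the sparse matrix C such that C @ x.flatten() equals the
    (cross-correlation style) convolution of the 2-D input x with kernel,
    for no padding and stride 1."""
    k = kernel.shape[0]
    out = input_size - k + 1
    C = np.zeros((out * out, input_size * input_size))
    for oi in range(out):
        for oj in range(out):
            for ki in range(k):
                for kj in range(k):
                    C[oi * out + oj, (oi + ki) * input_size + (oj + kj)] = kernel[ki, kj]
    return C

kernel = np.arange(9.0).reshape(3, 3)
C = conv_matrix(kernel)          # shape (4, 16): maps 4x4 input -> 2x2 output
x = np.arange(16.0)              # flattened 4x4 input
y = C @ x                        # forward convolution: 16 values -> 4 values
x_up = C.T @ y                   # transposed convolution: 4 values -> 16 values
print(C.shape, y.shape, x_up.shape)
```

Note that C^T restores only the *shape* of the input, not its values; a transposed convolution is the gradient of the forward convolution with respect to its input, not an inverse.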
