[1907.02549] Measuring the Data Efficiency of Deep Learning Methods

In this paper, we propose a new experimental protocol and use it to benchmark the data efficiency, i.e., performance as a function of training set size, of two deep learning algorithms: convolutional neural networks (CNNs) and hierarchical information-preserving graph-based slow feature analysis (HiGSFA), on classification and transfer-learning tasks. The algorithms are trained on different-sized subsets of the MNIST and Omniglot data sets. HiGSFA outperforms standard CNNs when the models are trained on 50 and 200 samples per class for MNIST classification; in the other cases, the CNNs perform better. The results suggest that there are cases where greedy, locally optimal bottom-up learning is as powerful as, or more powerful than, global gradient-based learning.
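The protocol itself is straightforward to reproduce: fix a held-out test set, subsample a fixed number of training examples per class, train, and record test accuracy as the per-class sample count grows. Below is a minimal sketch of this loop in Python. It uses scikit-learn's small digits dataset and a logistic-regression classifier as stand-ins for MNIST and the paper's CNN/HiGSFA models, so the printed numbers only illustrate the shape of a data-efficiency curve, not the paper's results.

```python
# Minimal sketch of a data-efficiency benchmark: train on N samples per
# class, evaluate on a fixed held-out test set, and repeat for growing N.
# The dataset and classifier are stand-ins, not the paper's actual setup.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Fixed train/test split; the test set stays the same across all runs.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

def subsample_per_class(X, y, n_per_class, rng):
    """Draw n_per_class examples from each class without replacement."""
    idx = np.concatenate([
        rng.choice(np.flatnonzero(y == c), size=n_per_class, replace=False)
        for c in np.unique(y)
    ])
    return X[idx], y[idx]

rng = np.random.default_rng(0)
for n in [5, 10, 25, 50, 100]:
    Xs, ys = subsample_per_class(X_train, y_train, n, rng)
    clf = LogisticRegression(max_iter=1000).fit(Xs, ys)
    acc = clf.score(X_test, y_test)
    print(f"{n:4d} samples/class -> test accuracy {acc:.3f}")
```

In practice one would average each point over several random subsamples to reduce variance, since small training sets make the resulting accuracy highly sensitive to which examples are drawn.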

1 mention: @shion_honda
Keywords: deep learning
Date: 2019/07/24 08:16

Referring Tweets

@shion_honda Measuring the Data Efficiency of DL [Hlynsson, 2019, ICPRAM] To focus on the data efficiency of DL models, this paper proposes evaluating performance as a function of the number of training samples. It compares CNNs and HiGSFA in experiments that train on MNIST and Omniglot with varying sample counts. t.co/RtP64TH79b #NowReading t.co/Gs8XjK7ptD
