[1712.00409] Deep Learning Scaling is Predictable, Empirically

Deep learning (DL) creates impactful advances following a virtuous recipe: model architecture search, creating large training data sets, and scaling computation. It is widely believed that growing training sets and models should improve accuracy and result in better products. As DL application domains grow, we would like a deeper understanding of the relationships between training set size, computational scale, and model accuracy improvements to advance the state-of-the-art. This paper presents a large-scale empirical characterization of generalization error and model size growth as training sets grow. We introduce a methodology for this measurement and test four machine learning domains: machine translation, language modeling, image processing, and speech recognition. Our empirical results show power-law generalization error scaling across a breadth of factors, resulting in power-law exponents---the "steepness" of the learning curve---yet to be explained by theoretical work. Further...
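
The scaling the abstract describes can be written as eps(m) ≈ α · m^β, with m the training set size and β < 0 the power-law exponent, so learning curves appear as straight lines in log-log space. Below is a minimal sketch of fitting such a curve; the helper `fit_power_law` and all the measurements are illustrative assumptions, not values from the paper.

```python
import numpy as np

def fit_power_law(train_sizes, errors):
    """Fit eps(m) = alpha * m**beta by least squares in log-log space."""
    # log(eps) = log(alpha) + beta * log(m) is linear, so ordinary
    # linear regression recovers the exponent beta and prefactor alpha.
    beta, log_alpha = np.polyfit(np.log(train_sizes), np.log(errors), deg=1)
    return np.exp(log_alpha), beta

# Hypothetical learning-curve measurements: (training set size, validation error).
sizes = np.array([1e4, 1e5, 1e6, 1e7])
errors = np.array([0.42, 0.30, 0.21, 0.15])

alpha, beta = fit_power_law(sizes, errors)
print(f"eps(m) ~ {alpha:.3f} * m^({beta:.3f})")

# The practical payoff the paper points at: extrapolating the fitted curve
# to estimate error after a further 10x growth in training data.
print(f"predicted eps at m=1e8: {alpha * 1e8 ** beta:.3f}")
```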

1 mention: @Miles_Brundage
Keywords: deep learning
Date: 2019/06/09 23:15

Referring Tweets

@Miles_Brundage First, consider "Deep Learning Scaling is Predictable, Empirically" by Hestness et al. at Baidu - https://t.co/wWim0MVP2P Fantastic paper that shows clear empirical tendencies distinguishing different ML domains w.r.t. returns to data, with a common theme of log-linear returns.

Related Entries

Deep Learning for NLP Best Practices
The major advancements in Deep Learning in 2016 | Tryolabs Blog
A survey of papers on unsupervised image inspection with Deep Learning (GAN/SVM/Autoencoder, etc.) (.pdf)
The Deep Learning boom seems to have reached the data-mining competition site Kaggle, so here is a roundup - 糞糞糞ネット弁慶
Learning deep learning with Chainer