How good is the Bayes posterior for prediction really? « Statistical Modeling, Causal Inference, and Social Science

It might not be common courtesy on this blog to comment on a very recently arXiv-ed paper. But I have seen two copies of this paper, entitled "How good is the Bayes posterior in deep neural networks really?", left on the tray of the department printer during the past weekend, so I should not underestimate the popularity of the work. The paper argues that in a deep neural network, for prediction purposes, the full Bayes posterior yields worse accuracy/cross-entropy than a simple point estimate, and that a tempered "cold" posterior predicts better still.
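For readers who have not yet seen the paper, the central object is the tempered posterior, in which the usual posterior energy is scaled by a temperature T. A minimal statement of the definition, as I read the paper's setup (notation mine):

\[
  p_T(\theta \mid \mathcal{D}) \;\propto\; \exp\!\bigl(-U(\theta)/T\bigr),
  \qquad
  U(\theta) \;=\; -\sum_{i=1}^{n} \log p(y_i \mid x_i, \theta) \;-\; \log p(\theta),
\]

so T = 1 recovers the ordinary Bayes posterior, while T < 1 gives the "cold" posterior that the authors report predicts best.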
