[2005.10002] Statistical learning theory of structured data

The traditional approach of statistical physics to supervised learning routinely assumes unrealistic generative models for the data: usually inputs are independent random variables, uncorrelated with their labels. Only recently have statistical physicists started to explore more complex forms of data, such as equally labelled points lying on (possibly low-dimensional) object manifolds. Here we provide a bridge between this recently established research area and the framework of statistical learning theory, a branch of mathematics devoted to inference in machine learning. The overarching motivation is the inadequacy of the classic rigorous results in explaining the remarkable generalization properties of deep learning. We propose a way to integrate physical models of data into statistical learning theory, and address, with both combinatorial and statistical mechanics methods, the computation of the Vapnik-Chervonenkis entropy, which counts the number of different binary classifications comp…
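As a point of reference for the quantity the abstract mentions: for the classic *unstructured* setting (points in general position, no manifold structure), the number of binary classifications realizable by a homogeneous linear classifier is given in closed form by Cover's counting theorem, and the VC entropy is its logarithm. This is a standard textbook baseline, not the paper's structured-data computation; the function names below are illustrative.

```python
from math import comb, log

def cover_dichotomies(n: int, d: int) -> int:
    """Number of dichotomies of n points in general position in R^d
    realizable by a homogeneous linear classifier (Cover, 1965):
        C(n, d) = 2 * sum_{k=0}^{d-1} binom(n-1, k)
    For n <= d this equals 2**n: every labelling is realizable.
    """
    return 2 * sum(comb(n - 1, k) for k in range(d))

def vc_entropy(n: int, d: int) -> float:
    """VC entropy: log of the number of realizable classifications."""
    return log(cover_dichotomies(n, d))

print(cover_dichotomies(2, 2))   # 4 = 2**2: below capacity, all labellings
print(cover_dichotomies(3, 2))   # 6 < 2**3: above capacity, some lost
```

For fixed d the count grows only polynomially in n, while the total number of labellings grows as 2**n; the gap between the two is what classic statistical learning theory uses to bound generalization.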

3 mentions: @tjmlab
Date: 2020/05/21 02:21

Related Entries

[1907.09987] Bayesian Inference with Generative Adversarial Network Priors
1 user, 3 mentions, 2019/07/24 15:47
[1908.10632] A Longitudinal Analysis of University Rankings
0 users, 6 mentions, 2019/08/29 12:48
[1909.05862] Learning Symbolic Physics with Graph Networks
0 users, 6 mentions, 2019/09/16 06:48
[1911.01429] The frontier of simulation-based inference
0 users, 4 mentions, 2019/11/06 17:20
[2002.10061] Rethinking 1D-CNN for Time Series Classification: A Stronger Baseline
0 users, 3 mentions, 2020/02/28 18:52