Theoretical Analysis of Self-Training with Deep Networks on Unlabeled Data | OpenReview
Self-training algorithms, which train a model to fit pseudolabels predicted by another previously-learned model, have been very successful for learning with unlabeled data using neural networks....

2 mentions: @hillbig
Date: 2021/01/22 00:53

Referring Tweets

@hillbig Self-training, which trains on augmented unlabeled data pseudo-labeled by a previously trained model, can be given many theoretical guarantees once we introduce the "expansion" assumption that the class-conditional probability distributions are connected in the neighborhood of the data. This explains why self-training improves performance, and is a major step forward in our understanding. t.co/VkTlxzyKR2
@hillbig A great first result explaining why self-training (train a model on augmented unsupervised data labeled by another model; VAT, noisy student, etc.) is effective, under "expansion" assumption; class-conditional distribution is connected around data. t.co/VkTlxzyKR2
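The self-training loop the tweets describe (teacher pseudo-labels unlabeled data, student retrains on the confident labels) can be sketched in a few lines. This is a minimal toy illustration, not the paper's method: the "model" is a nearest-centroid classifier standing in for a neural network, data augmentation is omitted, and the confidence threshold of 1.0 is an arbitrary choice for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two 1-D "classes": Gaussian clusters around -2 and +2, a toy stand-in
# for class-conditional distributions that are connected around the data.
labeled_x = np.array([-2.0, 2.0])
labeled_y = np.array([0, 1])
unlabeled_x = np.concatenate([rng.normal(-2, 0.5, 100), rng.normal(2, 0.5, 100)])

def fit_centroids(x, y):
    # "Model" = nearest-centroid classifier (stand-in for a deep network).
    return np.array([x[y == c].mean() for c in (0, 1)])

def predict(centroids, x):
    # Prediction = nearest centroid; confidence = margin between distances.
    d = np.abs(x[:, None] - centroids[None, :])
    return d.argmin(axis=1), np.abs(d[:, 0] - d[:, 1])

# Teacher: trained only on the tiny labeled set.
teacher = fit_centroids(labeled_x, labeled_y)

# Pseudo-label the unlabeled data, keeping only confident predictions.
pseudo_y, conf = predict(teacher, unlabeled_x)
keep = conf > 1.0

# Student: retrained on labeled data plus confident pseudo-labels.
student = fit_centroids(
    np.concatenate([labeled_x, unlabeled_x[keep]]),
    np.concatenate([labeled_y, pseudo_y[keep]]),
)

true_y = (unlabeled_x > 0).astype(int)
acc = (predict(student, unlabeled_x)[0] == true_y).mean()
print(f"student accuracy: {acc:.2f}")
```

Because the two clusters are well separated, almost all pseudo-labels pass the confidence filter and the student matches the true labels; in methods like Noisy Student or VAT the same loop is run with deep networks, augmentation, and consistency regularization.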
