[1703.00593] Positive-Unlabeled Learning with Non-Negative Risk Estimator

From only positive (P) and unlabeled (U) data, a binary classifier can be trained with PU learning, in which the state of the art is unbiased PU learning. However, if the model is very flexible, the empirical risk on training data can go negative, leading to serious overfitting. In this paper, we propose a non-negative risk estimator for PU learning: when minimized, it is more robust against overfitting, which allows us to use very flexible models (such as deep neural networks) given limited P data. Moreover, we analyze the bias, consistency, and mean-squared-error reduction of the proposed risk estimator, and bound the estimation error of the resulting empirical risk minimizer. Experiments demonstrate that our risk estimator fixes the overfitting problem of its unbiased counterparts.
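The key idea can be sketched in a few lines: the unbiased PU risk contains the term R_u^- − π_p R_p^-, which can go negative under a flexible model, and the proposed estimator simply clips it at zero. Below is a minimal numpy sketch assuming the sigmoid loss and plain classifier scores; the function and variable names (`nn_pu_risk`, `scores_p`, `scores_u`, `pi_p`) are illustrative, not from the paper's code.

```python
import numpy as np

def sigmoid_loss(z, y):
    # Sigmoid loss l(z, y) = 1 / (1 + exp(y * z)) for labels y in {+1, -1}.
    return 1.0 / (1.0 + np.exp(y * z))

def nn_pu_risk(scores_p, scores_u, pi_p, loss=sigmoid_loss):
    """Non-negative PU risk estimate:
    pi_p * R_p^+ + max(0, R_u^- - pi_p * R_p^-),
    where pi_p is the (assumed known) class prior of the positive class."""
    r_p_pos = loss(scores_p, +1).mean()  # positive-class risk on P data
    r_p_neg = loss(scores_p, -1).mean()  # negative-class risk on P data
    r_u_neg = loss(scores_u, -1).mean()  # negative-class risk on U data
    # Clipping the bracketed term at zero is what distinguishes the
    # non-negative estimator from the unbiased one.
    return pi_p * r_p_pos + max(0.0, r_u_neg - pi_p * r_p_neg)
```

With confidently separated scores, the unbiased version of the bracketed term can go negative, while `nn_pu_risk` stays non-negative by construction; in the paper, training with the clipped term (and a corrective gradient step when it is negative) is what prevents the overfitting described above.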

1 mentions: @shion_honda
Date: 2020/11/21 02:21

Referring Tweets

@shion_honda Positive-Unlabeled Learning with Non-Negative Risk Estimator [Kiryo+, 2017, NIPS] Using a complex model in PU learning lets the empirical risk go negative, which causes overfitting. The authors propose a non-negative risk estimator and a large-scale training method, and show both theoretically and experimentally that PU learning works even with complex models. t.co/59Knp4jo6M #NowReading t.co/DQGcuWyYiW
