[1708.07860] Multi-task Self-Supervised Visual Learning

We investigate methods for combining multiple self-supervised tasks--i.e., supervised tasks where data can be collected without manual labeling--in order to train a single visual representation. First, we provide an apples-to-apples comparison of four different self-supervised tasks using the very deep ResNet-101 architecture. We then combine tasks to jointly train a network. We also explore lasso regularization to encourage the network to factorize the information in its representation, and methods for "harmonizing" network inputs in order to learn a more unified representation. We evaluate all methods on ImageNet classification, PASCAL VOC detection, and NYU depth prediction. Our results show that deeper networks work better, and that combining tasks--even via a naive multi-head architecture--always improves performance. Our best joint network nearly matches the PASCAL performance of a model pre-trained on ImageNet classification, and matches the ImageNet network on NYU depth prediction.
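
The "naive multi-head architecture" and lasso idea from the abstract can be sketched briefly. Below is a minimal, hypothetical PyTorch illustration of a shared trunk with one head per self-supervised task, plus an L1 ("lasso") penalty nudging each task toward a sparse slice of the shared features. It is not the authors' code: the paper uses ResNet-101 and, as I understand it, applies its lasso to per-task coefficients that mix features from different trunk layers; module names, head shapes, and output sizes here are placeholders.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class MultiHeadSelfSupervised(nn.Module):
    """Shared trunk with one lightweight head per self-supervised task.
    Illustrative only: layer sizes and head shapes are placeholders."""
    def __init__(self, tasks):
        super().__init__()
        trunk = models.resnet50(weights=None)   # stand-in for the paper's ResNet-101
        trunk.fc = nn.Identity()                # expose the 2048-d pooled features
        self.trunk = trunk
        self.heads = nn.ModuleDict({
            name: nn.Linear(2048, n_out) for name, n_out in tasks.items()
        })

    def forward(self, x, task):
        return self.heads[task](self.trunk(x))


def lasso_penalty(model, strength=1e-4):
    # L1 penalty pushing each task head toward a sparse subset of the shared
    # features; the paper instead applies its lasso to per-task coefficients
    # that mix features from different trunk layers.
    return strength * sum(h.weight.abs().sum() for h in model.heads.values())


# Hypothetical task set; output sizes are placeholders, not the paper's.
model = MultiHeadSelfSupervised({"rel_position": 8, "colorization": 313, "exemplar": 1000})
x = torch.randn(4, 3, 224, 224)
logits = model(x, "rel_position")              # (4, 8)
loss = logits.mean() + lasso_penalty(model)    # a real task loss would replace .mean()
```

In joint training, batches from the different tasks are interleaved and each contributes its own loss through the corresponding head, so the trunk receives gradients from all tasks while the heads stay task-specific.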

1 mention: @OriolVinyalsML
Date: 2020/02/14 18:52

Referring Tweets

@OriolVinyalsML Rapid unsupervised learning progress thanks to contrastive losses, approaching supervised learning! -40% Multitask SSL t.co/3HXS53l71v (2017) -50% CPC t.co/k55jv1bAj3 (2018) -70% AMDIM/MOCO/CPCv2/etc (2019) -76.5% SimCLR t.co/y7WitlTOl7 (2020, so far) t.co/z1Q1yPi9pO
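
The contrastive losses the tweet credits for this progress share an InfoNCE-style core. A minimal sketch, assuming two augmented views per image and in-batch negatives (CPC, MoCo, and SimCLR each differ in how the views and negatives are produced, so this is a generic illustration rather than any one method's exact loss):

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.1):
    """InfoNCE-style contrastive loss over a batch.
    z1, z2: (N, D) embeddings of two augmented views of the same N images;
    row i of z1 should match only row i of z2 (the positive pairs)."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature               # (N, N) cosine similarities
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)          # positives lie on the diagonal
```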
