[2005.00770] Exploring and Predicting Transferability across NLP Tasks

Recent advances in NLP demonstrate the effectiveness of training large-scale language models and transferring them to downstream tasks. Can fine-tuning these models on tasks other than language modeling further improve performance? In this paper, we conduct an extensive study of the transferability between 33 NLP tasks across three broad classes of problems (text classification, question answering, and sequence labeling). Our results show that transfer learning is more beneficial than previously thought, especially when target task data is scarce, and can improve performance even when the source task is small or differs substantially from the target task (e.g., part-of-speech tagging transfers well to the DROP QA dataset). We also develop task embeddings that can be used to predict the most transferable source tasks for a given target task, and we validate their effectiveness in experiments controlled for source and target data size. Overall, our experiments reveal that factors such as […]
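The core use of the task embeddings described above is to rank candidate source tasks by their similarity to a target task. The sketch below shows only that ranking step, assuming the embeddings have already been computed; the task names and 4-dimensional vectors are made up for illustration (the paper derives real embeddings from the fine-tuned model itself, which is not reproduced here).

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two task-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_source_tasks(target_emb, source_embs):
    """Rank candidate source tasks by similarity to the target task.

    source_embs: dict mapping task name -> embedding vector.
    Returns task names sorted from most to least similar.
    """
    scores = {name: cosine_similarity(target_emb, emb)
              for name, emb in source_embs.items()}
    return sorted(scores, key=scores.get, reverse=True)

# Toy example: two hypothetical source tasks and one target task.
target = np.array([1.0, 0.9, 0.1, 0.0])
sources = {
    "MNLI":  np.array([0.9, 1.0, 0.2, 0.1]),   # embedding close to target
    "SQuAD": np.array([0.1, 0.2, 1.0, 0.9]),   # embedding far from target
}
print(rank_source_tasks(target, sources))  # -> ['MNLI', 'SQuAD']
```

With real embeddings, the top-ranked source task would be the one selected for intermediate-task fine-tuning before training on the target task.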

3 mentions: @colinraffel@KevinKaichuang
Keywords: nlp
Date: 2020/06/28 06:52

Referring Tweets

@colinraffel I somehow missed this great paper by @tuvuumass et al.: t.co/QKcnU0CtNq They learn "task embeddings" (a la task2vec) for NLP tasks and show how they can be used to predict the effectiveness of intermediate-task transfer. Lots of experiments and a promising direction!
@KevinKaichuang Learn task embeddings to predict which source tasks will improve performance on which target tasks. @tuvuumass @aptrizzle @murefil @majisubhransu @mohitiyyer t.co/VAGwGUUhgK t.co/fiVIMIatmF

Related Entries

Read more [1912.12834] Randomly Projected Additive Gaussian Processes for Regression
0 users, 3 mentions 2020/01/03 02:21
Read more [2002.07106] Controlling Computation versus Quality for Neural Sequence Models
0 users, 3 mentions 2020/02/23 11:20
Read more [2002.10376] The Two Regimes of Deep Network Training
0 users, 3 mentions 2020/03/01 02:20
Read more [2002.08926] Imputer: Sequence Modelling via Imputation and Dynamic Programming
0 users, 2 mentions 2020/04/29 06:52
Read more [2005.03692] A Systematic Assessment of Syntactic Generalization in Neural Language Models
0 users, 4 mentions 2020/05/11 14:21