[1910.13616] Multimodal Model-Agnostic Meta-Learning via Task-Aware Modulation

Model-agnostic meta-learners aim to acquire meta-learned parameters from similar tasks to adapt to novel tasks from the same distribution with few gradient updates. With the flexibility in the choice of models, those frameworks demonstrate appealing performance on a variety of domains such as few-shot image classification and reinforcement learning. However, one important limitation of such frameworks is that they seek a common initialization shared across the entire task distribution, substantially limiting the diversity of the task distributions that they are able to learn from. In this paper, we augment MAML with the capability to identify the mode of tasks sampled from a multimodal task distribution and adapt quickly through gradient updates. Specifically, we propose a multimodal MAML (MMAML) framework, which is able to modulate its meta-learned prior parameters according to the identified mode, allowing more efficient fast adaptation. We evaluate the proposed model on a diverse set of few-shot learning tasks, including regression, image classification, and reinforcement learning.
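For intuition, the sketch below shows one way the modulate-then-adapt idea described in the abstract can be realized: a task encoder summarizes the support set into per-layer scale/shift vectors that modulate (FiLM-style) a meta-learned base network, after which a few MAML-style gradient steps perform fast adaptation. This is a minimal sketch under assumed design choices; the names (TaskEncoder, ModulatedMLP, inner_adapt) and the regression setup are illustrative, not the authors' released implementation.

```python
# Hedged sketch: task-aware modulation + gradient-based fast adaptation.
# Assumes PyTorch >= 2.0 (torch.func.functional_call). Illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TaskEncoder(nn.Module):
    """Summarizes a support set (x, y) into per-layer scale/shift vectors."""

    def __init__(self, in_dim, hidden_dim, n_layers):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim + 1, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        # One (gamma, beta) pair per modulated hidden layer.
        self.heads = nn.ModuleList(
            [nn.Linear(hidden_dim, 2 * hidden_dim) for _ in range(n_layers)]
        )

    def forward(self, x_support, y_support):
        # Mean-pool over the support set to get a single task embedding.
        h = self.net(torch.cat([x_support, y_support], dim=-1)).mean(dim=0)
        return [head(h).chunk(2, dim=-1) for head in self.heads]


class ModulatedMLP(nn.Module):
    """Base learner whose hidden activations are modulated per task."""

    def __init__(self, in_dim, hidden_dim, n_layers):
        super().__init__()
        dims = [in_dim] + [hidden_dim] * n_layers
        self.hidden = nn.ModuleList(
            [nn.Linear(dims[i], dims[i + 1]) for i in range(n_layers)]
        )
        self.out = nn.Linear(hidden_dim, 1)

    def forward(self, x, modulation):
        h = x
        for layer, (gamma, beta) in zip(self.hidden, modulation):
            # FiLM-style modulation of the meta-learned prior by the task mode.
            h = F.relu(layer(h) * (1.0 + gamma) + beta)
        return self.out(h)


def inner_adapt(model, modulation, x_s, y_s, lr=0.01, steps=1):
    """MAML-style inner loop: a few gradient steps from the modulated prior."""
    params = dict(model.named_parameters())
    for _ in range(steps):
        pred = torch.func.functional_call(model, params, (x_s, modulation))
        loss = F.mse_loss(pred, y_s)
        grads = torch.autograd.grad(loss, list(params.values()), create_graph=True)
        params = {k: v - lr * g for (k, v), g in zip(params.items(), grads)}
    return params


# Usage on one synthetic few-shot regression task.
encoder, model = TaskEncoder(1, 32, 2), ModulatedMLP(1, 32, 2)
x_s, y_s = torch.randn(5, 1), torch.randn(5, 1)      # support set
x_q, y_q = torch.randn(10, 1), torch.randn(10, 1)    # query set
modulation = encoder(x_s, y_s)                        # identify the task mode
adapted = inner_adapt(model, modulation, x_s, y_s)    # fast adaptation
query_pred = torch.func.functional_call(model, adapted, (x_q, modulation))
outer_loss = F.mse_loss(query_pred, y_q)              # meta-objective to backprop
```

The design choice to illustrate here is the separation of concerns: the encoder only has to place the task in the right mode (via the scale/shift vectors), while the gradient-based inner loop handles the remaining within-mode adaptation.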

1 mention: @AkiraTOSEI
Keywords: multimodal
Date: 2019/11/07 12:51

Referring Tweets

@AkiraTOSEI t.co/jLe9eXvLUS Meta-learning methods such as MAML aim to find an initialization shared across tasks. Instead of doing few-shot learning from a single initialization, this work first identifies the mode of each task and then optimizes per mode, yielding better meta-learning. Mode separation works well on RL and regression, and accuracy beats MAML and similar methods. t.co/JovwnLD5uZ

Related Entries

Multimodal Information Fusion for Prohibited Items Detection - Mercari Engineering Blog
2 users, 6 mentions 2019/09/12 04:00
[1806.06176] Learning Factorized Multimodal Representations
0 users, 1 mention 2019/03/09 20:18
Pythia: open-source framework for multimodal AI models - Facebook Code
6 users, 59 mentions 2019/05/21 12:00
Using neural architecture search to automate multimodal modeling for prohibited item detection - Mer...
6 users, 9 mentions 2019/04/26 07:30
[1704.08424] Multimodal Word Distributions
3 users, 1 mention 2019/02/26 08:17