FreeLB: Enhanced Adversarial Training for Language Understanding | OpenReview

Sep 25, 2019 | ICLR 2020 Conference Blind Submission | readers: everyone

Abstract: Adversarial training, which minimizes the maximal risk for label-preserving input perturbations, has proved to be effective for improving the generalization of language models. In this work, we propose a novel adversarial training algorithm, FreeLB, that promotes higher robustness and invariance in the embedding space by adding adversarial perturbations to word embeddings and minimizing the resultant adversarial risk inside different regions around input samples.
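The abstract only describes the idea at a high level. As a rough illustration of adversarial training on word embeddings, here is a minimal PyTorch-style sketch. The model interface (a call that accepts precomputed embeddings and labels and returns a scalar loss), the hyperparameter values, and the single global-norm ascent step are assumptions made for the example; this is not the paper's exact FreeLB procedure.

import torch

def adversarial_training_step(model, embeddings, labels, optimizer,
                              adv_lr=0.1, adv_steps=3, adv_init_mag=1e-2):
    # One training step that perturbs the input embeddings by gradient ascent
    # and then updates the model parameters against the perturbed inputs.
    delta = torch.zeros_like(embeddings).uniform_(-adv_init_mag, adv_init_mag)
    delta.requires_grad_()

    optimizer.zero_grad()
    last_loss = None
    for _ in range(adv_steps):
        # Hypothetical interface: the model accepts precomputed embeddings
        # and returns a scalar loss for the batch.
        loss = model(inputs_embeds=embeddings + delta, labels=labels)
        # Backprop accumulates parameter gradients across ascent steps and
        # also yields the gradient w.r.t. the perturbation.
        (loss / adv_steps).backward()
        grad = delta.grad.detach()
        # Ascent on the perturbation: move it toward higher loss, normalized.
        delta = (delta + adv_lr * grad / (grad.norm() + 1e-12)).detach()
        delta.requires_grad_()
        last_loss = loss.detach()

    # Descent on the parameters using the gradients accumulated above.
    optimizer.step()
    return last_loss.item()

In the paper's actual algorithm, the perturbation is normalized and projected per example onto an epsilon-ball around each input, and the backward passes used for the ascent steps are reused to accumulate (and effectively average) parameter gradients, which is what makes the extra adversarial steps nearly "free"; the sketch above only captures the overall structure.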

1 mention: @ankesh_anand
Date: 2019/11/06 02:21

Referring Tweets

@ankesh_anand ICLR papers with perfect scores (all 8s, total 11 papers): 1. t.co/XOouanwJDI "FreeLB: Enhanced Adversarial Training for Language Understanding" 2. t.co/c1ORyFEhQA "BackPACK: Packing more into Backprop"
