Beat the AI: Investigating Adversarial Human Annotation for Reading Comprehension

Three New RC Datasets. We have created three new Reading Comprehension (RC) datasets constructed with an adversarial model in the loop. We use three different models in the annotation loop: BiDAF (Seo et al., 2016), BERT-Large (Devlin et al., 2018), and RoBERTa-Large (Liu et al., 2019), and construct three corresponding datasets: D(BiDAF), D(BERT), and D(RoBERTa), each with 10,000 training, 1,000 validation, and 1,000 test examples. The adversarial human annotation paradigm ensures that these datasets …
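In model-in-the-loop annotation, a human writes a question, the model in the loop answers it, and the example is kept only if the model's answer is wrong enough (by a word-overlap F1 score, as in SQuAD evaluation). A minimal sketch of that acceptance check is below; the 0.4 threshold and helper names are illustrative assumptions, not the authors' exact implementation.

```python
from collections import Counter

def f1_score(prediction: str, reference: str) -> float:
    """Token-level F1 between a model answer and the human answer,
    in the style of the SQuAD evaluation script (no normalization here)."""
    pred_toks = prediction.lower().split()
    ref_toks = reference.lower().split()
    common = Counter(pred_toks) & Counter(ref_toks)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_toks)
    recall = num_same / len(ref_toks)
    return 2 * precision * recall / (precision + recall)

def beats_model(model_answer: str, human_answer: str,
                threshold: float = 0.4) -> bool:
    """Keep the annotated example only if the in-the-loop model fails,
    i.e. its answer's F1 against the human answer is below the threshold.
    The threshold value is an assumption for illustration."""
    return f1_score(model_answer, human_answer) < threshold
```

For example, a model answer of "the cat sat" against a human answer of "the cat" scores F1 = 0.8, so the question would be rejected as too easy and the annotator would try again.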

1 mentions: @max_nlp
Keywords: annotation
Date: 2021/01/13 17:21

Referring Tweets

@max_nlp How well do your RC models perform on more challenging questions? adversarialQA is now available for easy access in @huggingface datasets!
