Are Pre-trained Language Models Aware of Phrases? Simple but Strong Baselines for Grammar Induction

Sep 25, 2019 · Blind Submission · readers: everyone

Abstract: With the recent success and popularity of pre-trained language models (LMs) in natural language processing, there has been a rise in efforts to understand their inner workings. In line with such interest, we propose a novel method that assists us in investigating the extent to which pre-trained LMs capture the syntactic notion of constituency.

1 mention: @otakumesi
Date: 2020/02/10 12:54

Referring Tweets

@otakumesi While looking through the ICLR papers, I found one that it looks like I'll have to read. t.co/3jI1Qqzqx8

Related Entries

[1904.12324] OPIEC: An Open Information Extraction Corpus
0 users, 1 mention 2020/03/20 00:51
[1911.01485] Assessing Social and Intersectional Biases in Contextualized Word Representations
0 users, 1 mention 2020/08/22 11:21
nlp — nlp 0.4.0 documentation
0 users, 1 mention 2020/08/27 00:52
GitHub - ResponsiblyAI/responsibly: Toolkit for Auditing and Mitigating Bias and Fairness of Machine Learning Systems
0 users, 1 mention 2020/08/27 05:21
GitHub - rudinger/winogender-schemas: Data for evaluating gender bias in coreference resolution systems
0 users, 1 mention 2020/09/08 08:21