Are Pre-trained Language Models Aware of Phrases? Simple but Strong Baselines for Grammar Induction | OpenReview

Sep 25, 2019 · Blind Submission · readers: everyone

Abstract: With the recent success and popularity of pre-trained language models (LMs) in natural language processing, there has been a rise in efforts to understand their inner workings. In line with such interest, we propose a novel method that assists us in investigating the extent to which pre-trained LMs capture the syntactic notion of constituency.
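The paper's topic is grammar induction from pre-trained LMs, i.e., reading constituency trees out of a model's representations without any parsing supervision. A minimal sketch of one common recipe for this (greedily splitting a sentence at the largest "syntactic distance" between adjacent words) is shown below. This is an illustration under assumptions, not the authors' exact method: the cosine distance measure and the random stand-in vectors in place of real LM hidden states are placeholders for exposition.

```python
import numpy as np

def adjacent_distances(vecs):
    """Cosine distance between each pair of adjacent word vectors;
    a large distance suggests a constituent boundary."""
    dists = []
    for a, b in zip(vecs[:-1], vecs[1:]):
        cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        dists.append(1.0 - cos)
    return dists

def induce_tree(words, dists):
    """Greedy top-down induction: split the span at the largest
    distance between adjacent words, then recurse on both halves."""
    if len(words) == 1:
        return words[0]
    k = int(np.argmax(dists))  # boundary between words[k] and words[k+1]
    return (induce_tree(words[:k + 1], dists[:k]),
            induce_tree(words[k + 1:], dists[k + 1:]))

# Toy usage: random vectors stand in for per-word LM hidden states.
words = ["the", "cat", "sat", "on", "the", "mat"]
vecs = np.random.default_rng(0).normal(size=(len(words), 768))
print(induce_tree(words, adjacent_distances(vecs)))
```

With vectors taken from an actual pre-trained LM layer instead of random ones, the same greedy loop produces induced trees of the kind such work evaluates against gold treebanks.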

1 mention: @otakumesi
Date: 2020/02/10 12:54

Referring Tweets

@otakumesi While browsing the ICLR papers, I found one that looks like I'll have to read it. t.co/3jI1Qqzqx8

Related Entries

GitHub - neulab/lrlm: Code for the paper "Latent Relation Language Models" at AAAI-20.
0 users, 2 mentions 2020/02/10 12:54
Modeling Children's Language Acquisition and NN Language Models
0 users, 0 mentions 2018/10/05 03:23
Generalized Language Models
1 user, 24 mentions 2019/02/03 02:18
GitHub - facebookresearch/XLM: PyTorch original implementation of Cross-lingual Language Model Pretraining
0 users, 4 mentions 2019/09/03 17:17