Improving Neural Topic Models using Knowledge Distillation - ACL Anthology

Alexander Miserlis Hoyle, Pranav Goel, Philip Resnik. Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). 2020.

2 mentions: @miserlis_
Date: 2020/12/17 08:21

Referring Tweets

@miserlis_ Our "SotA" model [1] is based on Scholar [2], an alternate NTM adapted with BERT-based knowledge distillation. Thread at [3] and code at [4] 6/10 [1]: t.co/lh98CJLZil [2]: t.co/rgAP0nB9im [3]: t.co/Vz6r1yqiy7 [4]: t.co/NZ3vYnKVOF
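For readers unfamiliar with the setup described in the tweet, below is a minimal PyTorch sketch of what distilling a BERT-based teacher into a VAE-style neural topic model (Scholar-like) can look like. The class names, the teacher interface (a per-document probability distribution over the vocabulary), the temperature, and the loss weighting are illustrative assumptions, not the paper's exact formulation.

    # Sketch: BERT-based knowledge distillation into a VAE-style topic model.
    # Hyperparameters and the teacher interface are illustrative assumptions.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F


    class DistilledNTM(nn.Module):
        def __init__(self, vocab_size, num_topics, hidden=300):
            super().__init__()
            # Encoder: bag-of-words -> variational parameters over topics
            self.encoder = nn.Sequential(nn.Linear(vocab_size, hidden), nn.Softplus())
            self.mu = nn.Linear(hidden, num_topics)
            self.logvar = nn.Linear(hidden, num_topics)
            # Decoder: topic proportions -> logits over the vocabulary
            self.beta = nn.Linear(num_topics, vocab_size, bias=False)

        def forward(self, bow):
            h = self.encoder(bow)
            mu, logvar = self.mu(h), self.logvar(h)
            # Reparameterized sample of topic proportions
            theta = F.softmax(mu + torch.randn_like(mu) * (0.5 * logvar).exp(), dim=-1)
            return self.beta(theta), mu, logvar


    def distillation_loss(word_logits, bow, teacher_probs, mu, logvar,
                          alpha=0.5, temperature=2.0):
        """Mix BoW reconstruction, a soft teacher target, and the KL prior term."""
        log_probs = F.log_softmax(word_logits, dim=-1)
        # Standard NTM reconstruction: negative log-likelihood of observed counts
        recon = -(bow * log_probs).sum(-1)
        # Distillation: cross-entropy against the teacher's softened
        # document-level word distribution (e.g., derived from BERT)
        soft_targets = F.softmax(torch.log(teacher_probs + 1e-10) / temperature, dim=-1)
        distill = -(soft_targets * log_probs).sum(-1) * bow.sum(-1)
        # KL divergence to the standard normal prior on the latent variable
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1)
        return ((1 - alpha) * recon + alpha * distill + kl).mean()

The design choice being illustrated is that the teacher does not change the topic model's architecture; it only supplies an additional soft reconstruction target alongside the observed bag of words.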

Related Entries

[1703.04957] An algorithm for removing sensitive information: application to race-independent recidi...
0 users, 1 mention, 2020/01/31 14:21
[1910.13461] BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Tran...
2 users, 1 mention, 2020/10/19 15:51
tBERT: Topic Models and BERT Joining Forces for Semantic Similarity Detection - ACL Anthology
0 users, 2 mentions, 2020/12/19 09:51
Semi-supervised URL Segmentation with Recurrent Neural Networks Pre-trained on Knowledge Graph Entit...
0 users, 1 mention, 2021/01/05 17:21
GitHub - adalmia96/Cluster-Analysis
0 users, 1 mention, 2021/01/06 09:51