[1910.13461] BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension

We present BART, a denoising autoencoder for pretraining sequence-to-sequence models. BART is trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text. It uses a standard Transformer-based neural machine translation architecture which, despite its simplicity, can be seen as generalizing BERT (due to the bidirectional encoder), GPT (with the left-to-right decoder), and many other more recent pretraining schemes. We evaluate a number of noising approaches, finding the best performance by both randomly shuffling the order of the original sentences and using a novel in-filling scheme, where spans of text are replaced with a single mask token. BART is particularly effective when fine-tuned for text generation but also works well for comprehension tasks. It matches the performance of RoBERTa with comparable training resources on GLUE and SQuAD, and achieves new state-of-the-art results on a range of abstractive dialogue, question answering, and summarization tasks, with gains of up to 6 ROUGE. BART also provides a 1.1 BLEU increase over a back-translation system for machine translation, with only target language pretraining. We also report ablation experiments that replicate other pretraining schemes within BART, to better understand what factors most influence end-task performance.
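To make the corruption scheme concrete, here is a minimal Python sketch of the two noising operations the abstract highlights: randomly shuffling sentence order and text infilling, where each sampled span is replaced by a single mask token. The Poisson span-length sampling follows the paper; the whitespace tokenization, the naive sentence splitting, the `<mask>` string, and the 30% corruption budget are simplifying assumptions for illustration, not the paper's exact preprocessing.

```python
import random
import numpy as np

def shuffle_sentences(text, rng=random):
    """Randomly permute sentence order (naive split on '.')."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    rng.shuffle(sentences)
    return ". ".join(sentences) + "."

def text_infilling(tokens, mask_ratio=0.3, poisson_lambda=3.0, mask_token="<mask>"):
    """Replace sampled spans of tokens with a single mask token each."""
    tokens = list(tokens)
    num_to_mask = int(len(tokens) * mask_ratio)
    masked = 0
    while masked < num_to_mask and len(tokens) > 1:
        span_len = np.random.poisson(poisson_lambda)
        start = random.randrange(len(tokens))
        span_len = min(span_len, len(tokens) - start)
        # A zero-length span simply inserts a mask token without removing text.
        tokens[start:start + span_len] = [mask_token]
        masked += max(span_len, 1)
    return tokens

# Corrupt a toy document: the result is the encoder input; the decoder is
# trained to regenerate the original, uncorrupted text.
corrupted = text_infilling(shuffle_sentences(
    "BART corrupts text with noise. A sequence-to-sequence model learns to reconstruct the original"
).split())
print(corrupted)
```

In pretraining, the bidirectional encoder reads the corrupted sequence and the left-to-right decoder is trained to emit the original text, which is what lets the same model be fine-tuned for both generation and comprehension tasks.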

Date: 2020/10/19 15:51
