[1909.03186] On Extractive and Abstractive Neural Document Summarization with Transformer Language Models

We present a method to produce abstractive summaries of long documents that exceed several thousand words via neural abstractive summarization. We perform a simple extractive step before generating a summary, which is then used to condition the transformer language model on relevant information before being tasked with generating a summary. We show that this extractive step significantly improves summarization results. We also show that this approach produces more abstractive summaries compared to prior work that employs a copy mechanism while still achieving higher rouge scores. Note: The abstract above was not written by the authors, it was generated by one of the models presented in this paper.
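The two-stage recipe the abstract describes (first extract salient sentences, then condition a transformer language model on them before decoding the summary) can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the paper extracts sentences with a pointer network, whereas the word-frequency scorer and the `<abstract>` marker below are stand-ins, and the transformer LM itself is left abstract.

```python
# Minimal sketch of the extract-then-abstract pipeline described in the
# abstract. Assumptions: the paper's extractor is a pointer network; the
# word-frequency scorer below is a stand-in for it, and the "<abstract>"
# marker is hypothetical.
import re
from collections import Counter

def split_sentences(text):
    # Naive splitter; a real system would use a proper sentence tokenizer.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def extract_salient(document, k=3):
    """Stand-in extractor: score each sentence by the document frequency of
    its words and keep the top k, preserving document order."""
    sentences = split_sentences(document)
    freqs = Counter(w.lower() for w in re.findall(r"\w+", document))
    def score(sentence):
        words = re.findall(r"\w+", sentence)
        return sum(freqs[w.lower()] for w in words) / max(len(words), 1)
    top = set(sorted(sentences, key=score, reverse=True)[:k])
    return [s for s in sentences if s in top]

def build_prompt(introduction, extracted):
    # The transformer LM is conditioned on the introduction plus the
    # extracted sentences, then decodes the abstract as a continuation.
    return introduction + "\n" + " ".join(extracted) + "\n<abstract>\n"

if __name__ == "__main__":
    doc = ("Transformers model long-range dependencies well. "
           "Long documents exceed typical attention context windows. "
           "A simple extractive step selects sentences to condition on.")
    print(build_prompt("We study long-document summarization.",
                       extract_salient(doc, k=2)))
```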

45 mentions: @Miles_Brundage, @hillbig, @jonathanfly, @sleepinyourhat, @hardmaru, @icoxfog417, @le_science4all
Date: 2019/09/10 09:47

Referring Tweets

@hillbig The era has finally arrived in which a summarization paper's own abstract is written by the proposed method. After extracting salient sentences with a pointer network, a self-attention-based language model is trained to generate documents arranged in the order: introduction, salient sentences, abstract, body. At inference time, the abstract is generated conditioned on the introduction and the salient sentences (a sketch of this layout follows the tweet list). https://t.co/BzQFcPMZdc
@icoxfog417 A proposed method for summarizing long documents such as papers, with the witty touch that this paper's abstract was itself written by the proposed method. It combines extractive summarization with a pretrained language model; by arranging the training data in body => summary (abstract) order when training the language model, the model is said to learn both generation and summarization (see the sketch after this list). https://t.co/vCZ2sYGZMA
@le_science4all "Note: The abstract above was not written by the authors, it was generated by one of the models presented in this paper." AIs are now competing with humans at writing abstracts of scientific papers!! 😱😲😳 https://t.co/dF9m15oBSS
@jonathanfly This abstractive summarization paper abstract is a real roller coaster! https://t.co/l6rmYiWu1F https://t.co/a4WQth7SWz https://t.co/4rmgeKcmPG
@NicolasChapados An NLP model that writes its own arXiv paper abstract?! Check out this great abstractive neural document summarization work by Element AI researchers Sandeep Subramanian, Raymond Li, Jonathan Pilault and Christopher Pal: https://t.co/2Evi03v6aM
@Miles_Brundage 🧐 from: https://t.co/GkRWlcU5Lq https://t.co/Ytv0uRAFE4
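
To make the sequence layout in @hillbig's and @icoxfog417's tweets concrete, here is a minimal sketch of how the ground-truth training stream and the inference-time prefix could be assembled. The delimiter tokens and function names are hypothetical illustrations; the paper's exact markers and formatting may differ.

```python
# Sketch of the training/inference sequence layout described in the tweets
# above. The delimiter tokens are hypothetical, not the paper's actual
# markers.
INTRO, EXTRACT, ABSTRACT, BODY = "<intro>", "<extract>", "<abstract>", "<body>"

def training_sequence(introduction, extracted_sentences, abstract, rest_of_body):
    """Ground-truth stream for left-to-right LM training: introduction,
    extracted salient sentences, abstract, then the remaining body, so the
    model learns to emit the abstract right after the conditioning text."""
    return " ".join([
        INTRO, introduction,
        EXTRACT, " ".join(extracted_sentences),
        ABSTRACT, abstract,
        BODY, rest_of_body,
    ])

def inference_prefix(introduction, extracted_sentences):
    """At inference time the model sees only the introduction and the
    extracted sentences; decoding continues from the <abstract> marker."""
    return " ".join([
        INTRO, introduction,
        EXTRACT, " ".join(extracted_sentences),
        ABSTRACT,
    ])
```

Placing the abstract before the rest of the body in the training stream is what lets the same left-to-right language model serve as a summarizer: the generation target sits immediately after the conditioning prefix.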

Related Entries

GitHub - soskek/bert-chainer: Chainer implementation of "BERT: Pre-training of Deep Bidirectional Tr...
7 users, 0 mentions 2018/12/02 18:01
[DL Hacks]BERT: Pre-training of Deep Bidirectional Transformers for L…
4 users, 5 mentions 2018/12/07 04:31
The Annotated Transformer
0 users, 0 mentions 2018/08/27 01:24
[DL輪読会]BERT: Pre-training of Deep Bidirectional Transformers for Lang…
0 users, 0 mentions 2018/10/20 12:15
GitHub - huggingface/pytorch-pretrained-BERT: The Big-&-Extending-Repository-of-Transformers: PyTorc...
1 user, 7 mentions 2019/03/04 21:47