Catching Unicorns with GLTR

By Hendrik Strobelt and Sebastian Gehrmann, reviewed by Alexander Rush. A collaboration of the MIT-IBM Watson AI Lab and HarvardNLP.

We introduce GLTR to inspect the visual footprint of automatically generated text. It enables a forensic analysis of how likely it is that an automatic system generated a given text. Check out the live demo.

In recent years, the natural language processing community has seen the development of increasingly large language models. A language model is a mach…
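The core idea behind GLTR can be sketched with a toy example: for each token in a text, ask a language model how highly it ranked that token among its next-token predictions, then bucket the ranks (GLTR colors them, e.g. green for top-10). Machine-sampled text concentrates in the top ranks, while human text regularly uses lower-ranked words. The snippet below is a minimal sketch of that per-token ranking, using a tiny bigram counter as a stand-in for the large language model (such as GPT-2) that GLTR actually queries; the corpus and function names are illustrative, not GLTR's real API.

```python
from collections import Counter, defaultdict

def train_bigram(tokens):
    """Count next-token frequencies per context token (a toy stand-in
    for the large LM that GLTR uses as its detector)."""
    model = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        model[prev][nxt] += 1
    return model

def token_ranks(model, tokens):
    """For each token, return its rank in the model's next-token
    prediction given the previous token (1 = most likely).
    GLTR buckets these ranks into colors for visual inspection."""
    ranks = []
    for prev, nxt in zip(tokens, tokens[1:]):
        ordered = [w for w, _ in model[prev].most_common()]
        # Unseen tokens fall past the end of the model's prediction list.
        ranks.append(ordered.index(nxt) + 1 if nxt in ordered else len(ordered) + 1)
    return ranks

corpus = "the cat sat on the mat and the cat saw the dog".split()
model = train_bigram(corpus)
# Predictable text yields low ranks; surprising (more human-like) text
# yields high ranks.
print(token_ranks(model, "the cat sat on the mat".split()))  # → [1, 1, 1, 1, 2]
```

A real detector would replace the bigram counts with a neural language model's full next-token distribution, but the rank-and-bucket logic is the same.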

Date: 2019/06/12 18:48

Referring Tweets

@sebgehr Our ACL demo paper is finally on arXiv! We not only found that models can defend against themselves, but also that our interface can help humans spot generated text. Check it out here: With @MITIBMLab @hen_str @srush_nlp
@julien_c @ClementDelangue @sararahmcb @betaworksVC @Borthwick You can paste it into and see if machines are better than humans at detecting it
@sebgehr Shameless plug: this is exactly why our tool works! The samples are rated highly by humans, and therefore we can exploit the under-diversity to detect generated text!
