Catching Unicorns with GLTR

By Hendrik Strobelt and Sebastian Gehrmann, reviewed by Alexander Rush. A collaboration of the MIT-IBM Watson AI Lab and HarvardNLP.

We introduce GLTR to inspect the visual footprint of automatically generated text. It enables a forensic analysis of how likely it is that an automatic system generated a given text. Check out the live DEMO.

In recent years, the natural language processing community has seen the development of increasingly large language models. A language model is a machine learning model trained to predict the next word given an input context. As such, a model can generate text by producing one word at a time. These predictions can even, to some extent, be constrained by human-provided input to control what the model writes about. Due to their modeling power, large language models have the potential to generate textual output that is indistinguishable from human-written text to a non-expert reader. Language models achieve this with incredibly accura...
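The idea above can be sketched with a toy model. This is a minimal illustration, not the actual GLTR system (which uses a large pretrained language model such as GPT-2): a bigram model is trained on a tiny corpus, generates text one word at a time by always taking its top prediction, and then performs a GLTR-style analysis by reporting the rank each observed word had in the model's prediction for its context. Machine-generated text tends to consist almost entirely of highly ranked words, while human text dips into the unlikely tail.

```python
from collections import Counter, defaultdict

def train_bigram(corpus_tokens):
    """Count next-word frequencies for each context word."""
    model = defaultdict(Counter)
    for prev, nxt in zip(corpus_tokens, corpus_tokens[1:]):
        model[prev][nxt] += 1
    return model

def generate(model, start, n_words):
    """Generate one word at a time, greedily picking the most likely next word."""
    out = [start]
    for _ in range(n_words):
        candidates = model.get(out[-1])
        if not candidates:
            break
        out.append(candidates.most_common(1)[0][0])
    return out

def word_ranks(model, tokens):
    """GLTR-style analysis: for each word, its rank in the model's
    prediction given the preceding word (rank 1 = model's top choice).
    None means the model never saw this continuation in training."""
    ranks = []
    for prev, nxt in zip(tokens, tokens[1:]):
        ordered = [w for w, _ in model[prev].most_common()]
        ranks.append(ordered.index(nxt) + 1 if nxt in ordered else None)
    return ranks

# Toy corpus and usage (purely illustrative data):
corpus = "the cat sat on the mat and the cat ate the fish".split()
model = train_bigram(corpus)
print(generate(model, "the", 2))              # greedy, word-by-word continuation
print(word_ranks(model, "the cat".split()))   # "cat" is the model's top choice after "the"
```

In the real system, the rank of each word under GPT-2's predicted distribution drives the green/yellow/red highlighting in the GLTR interface.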

Date: 2019/06/12 18:48
