GitHub - Maluuba/nlg-eval: Evaluation code for various unsupervised automated metrics for Natural Language Generation.

Evaluation code for various unsupervised automated metrics for Natural Language Generation. - Maluuba/nlg-eval

1 mention: @omarsar0
Date: 2020/07/20 12:52

Referring Tweets

@omarsar0 nlg-eval contains evaluation code for different metrics used in natural language generation. This looks like a nice resource for quickly exploring and getting a better understanding of these metrics. t.co/IHVmXmXXl6 t.co/VpV6zb9r3d
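The repo's README documents a small Python API for exactly this kind of quick exploration. A minimal sketch along those lines, assuming nlg-eval is installed (pip install nlg-eval); the example strings and disabled-metric flags are illustrative assumptions, not taken from this page:

# Score one generated sentence against reference sentences with nlg-eval.
# Sketch based on the usage shown in the repo's README; strings are made up.
from nlgeval import NLGEval

# Loading the embedding-based metrics (SkipThoughts, GloVe) is slow and
# optional; the constructor flags below disable them, per the README.
nlgeval = NLGEval(no_skipthoughts=True, no_glove=True)

references = ["the cat sat on the mat"]      # one or more reference strings
hypothesis = "a cat is sitting on the mat"   # generated sentence to score

# Returns a dict mapping metric names (Bleu_1..Bleu_4, METEOR, ROUGE_L,
# CIDEr, ...) to float scores for this single hypothesis.
scores = nlgeval.compute_individual_metrics(references, hypothesis)
print(scores)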

Related Entries

GitHub - ketranm/neuralHMM: code for unsupervised learning Neural Hidden Markov Models paper
2 users, 1 mention 2020/03/25 15:51
Probabilistic Models of Cognition - 2nd Edition
12 users, 1 mention 2020/06/10 11:21
Google AI Blog: Extracting Structured Data from Templatic Documents
2 users, 12 mentions 2020/06/12 20:21
The Spread of Machine Learning into the Advertising Creative Field
6 users, 1 mention 2020/07/09 02:21
GitHub - floodsung/Deep-Learning-Papers-Reading-Roadmap: Deep Learning papers reading roadmap for an...
38 users, 17 mentions 2020/07/21 11:20