[1907.10568] Investigating Evaluation of Open-Domain Dialogue Systems With Human Generated Multiple References

The aim of this paper is to mitigate the shortcomings of automatic evaluation of open-domain dialog systems through multi-reference evaluation. Existing metrics have been shown to correlate poorly with human judgement, particularly in open-domain dialog. One alternative is to collect human annotations for evaluation, which can be expensive and time-consuming. To demonstrate the effectiveness of multi-reference evaluation, we augment the test set of DailyDialog with multiple references. A series of experiments shows that the use of multiple references results in improved correlation between several automatic metrics and human judgement for both the quality and the diversity of system output.
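The intuition behind multi-reference evaluation: reference-based metrics such as BLEU penalize valid responses that merely differ from the single gold reply, so scoring against several human-written references lets a diverse but appropriate response match at least one of them. Below is a minimal sketch of that contrast using NLTK's sentence-level BLEU; the dialogue turn and references here are invented for illustration, and the paper's actual setup uses an augmented DailyDialog test set and a range of metrics, not this toy example.

    from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

    smooth = SmoothingFunction().method1  # avoid zero scores on short responses

    # Hypothetical turn: several plausible replies to "How was your weekend?"
    references = [
        "it was great , i went hiking".split(),
        "pretty relaxing , thanks for asking".split(),
        "not bad , i mostly stayed home".split(),
    ]
    response = "it was relaxing , i stayed home".split()

    # Single-reference BLEU: the response is scored against one gold reply only.
    single_ref = sentence_bleu([references[0]], response, smoothing_function=smooth)

    # Multi-reference BLEU: n-gram matches may come from any of the references,
    # so an appropriate-but-different response is penalized less.
    multi_ref = sentence_bleu(references, response, smoothing_function=smooth)

    print(f"single-reference BLEU: {single_ref:.3f}")
    print(f"multi-reference  BLEU: {multi_ref:.3f}")

The multi-reference score is at least as high as the single-reference score by construction, which is why correlation with human judgement improves when plausible response variation is covered by the reference set.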

1 mention: @tkym1220
Date: 2019/08/09 11:17

Referring Tweets

@tkym1220 Investigating Evaluation of Open-Domain Dialogue Systems With Human Generated Multiple References (SIGDIAL2019) t.co/ILIeTnQspC Shows that, for open-domain (chit-chat) dialogue systems, multi-reference automatic evaluation correlates more strongly with human evaluation than single-reference evaluation does. The evaluation set the authors created has been released.
