[2102.11387] Exploiting Multimodal Reinforcement Learning for Simultaneous Machine Translation

This paper addresses simultaneous machine translation (SiMT) by exploring two main ideas: (a) adaptive policies that learn a good trade-off between high translation quality and low latency; and (b) visual information that supports this process by providing additional context which may be available before the textual input is produced. To this end, we propose a multimodal approach to simultaneous machine translation using reinforcement learning, with strategies to integrate visual and textual information in both the agent and the environment. We explore how different types of visual information and integration strategies affect the quality and latency of simultaneous translation models, and demonstrate that visual cues lead to higher quality while keeping latency low.
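To make the adaptive-policy idea concrete, here is a minimal sketch of an agent that decides whether to READ another source token or WRITE a target token while conditioning on both textual and visual context. The feature dimensions, concatenation-based fusion, two-action space, and all class/function names are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn


class MultimodalSimTPolicy(nn.Module):
    """Adaptive READ/WRITE policy conditioned on textual and visual context.

    Hypothetical sketch: feature sizes, fusion by concatenation, and the
    two-action (READ=0, WRITE=1) space are assumptions for illustration.
    """

    READ, WRITE = 0, 1

    def __init__(self, text_dim=512, visual_dim=2048, hidden_dim=256):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(text_dim + visual_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 2),  # logits over {READ, WRITE}
        )

    def forward(self, text_state, visual_feat):
        # text_state: (batch, text_dim) summary of the source tokens read so far
        # visual_feat: (batch, visual_dim) pooled image features available up front
        logits = self.fuse(torch.cat([text_state, visual_feat], dim=-1))
        return torch.distributions.Categorical(logits=logits)


if __name__ == "__main__":
    policy = MultimodalSimTPolicy()
    dist = policy(torch.randn(1, 512), torch.randn(1, 2048))
    action = dist.sample()            # 0 = read next source token, 1 = write target token
    log_prob = dist.log_prob(action)  # would feed a policy-gradient update against a
                                      # reward trading off translation quality and latency
    print(action.item(), log_prob.item())
```

In such a setup, the reward would combine a translation-quality term with a latency penalty, so the policy learns when waiting for more source text is worth the delay.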

Date: 2021/02/24 03:51
