Shiry Ginosar*, Amir Bar*, Gefen Kohavi, Caroline Chan, Andrew Owens, Jitendra Malik. In CVPR 2019. [Code] [Data]

Figure: Speech audio-to-gesture translation. From the bottom upward: the input audio, predicted arm and hand motion, and synthesized video frames.

Abstract

Human speech is often accompanied by hand and arm gestures. Given audio speech input, we generate plausible gestures to go along with the sound. Specifically, we perform cross-modal translation from "in-the-wild" monologue speech of …

Date: 2019/06/13 09:48

Referring Tweets

@_amirbar (2/2) We also release our full dataset and will make the code available. Joint work with @shiryginosar, Gefen Kohavi, Caroline Chan, Andrew Owens and Jitendra Malik. For more details see project page: https://t.co/oH0mca7B3x. @berkeley_ai @ZebraMedVision

