[2005.08445] Many-to-Many Voice Transformer Network

This paper proposes a voice conversion (VC) method based on a sequence-to-sequence (S2S) learning framework, which makes it possible to simultaneously convert the voice characteristics, pitch contour, and duration of input speech. We previously proposed an S2S-based VC method using a transformer network architecture, which we call the "voice transformer network (VTN)". While the original VTN is designed to learn only a mapping of speech feature sequences from one domain into another, we extend it so that it can simultaneously learn mappings among multiple domains using only a single model. This allows the model to fully utilize available training data collected from multiple domains by capturing common latent features that can be shared across different domains. On top of this model, we further propose incorporating a training loss called the "identity mapping loss" to ensure that the input feature sequence will remain unchanged when it already belongs to the target domain. Using this
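The identity mapping loss described above can be illustrated with a minimal sketch: alongside the usual conversion loss, the model is penalized whenever a within-domain "conversion" alters the input. This is a hypothetical toy illustration, not the paper's implementation; `convert`, `training_loss`, and `identity_weight` are illustrative names, and the toy `convert` merely stands in for the shared S2S model.

```python
# Hypothetical sketch of an identity-mapping regularizer for many-to-many VC.
# `convert` is a toy placeholder; the real VTN maps feature sequences between
# domains with a shared transformer model.

def l1_loss(a, b):
    """Mean absolute error between two equal-length feature sequences."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def convert(features, src_domain, tgt_domain):
    # Toy "model": shifts features by a per-domain offset. When
    # src_domain == tgt_domain, this is exactly the identity map.
    return [x + (tgt_domain - src_domain) * 0.1 for x in features]

def training_loss(features, target, src, tgt, identity_weight=1.0):
    # Main conversion loss against the target-domain reference sequence.
    loss = l1_loss(convert(features, src, tgt), target)
    # Identity mapping loss: converting a sequence that already belongs to
    # the target domain should leave it unchanged.
    loss += identity_weight * l1_loss(convert(features, tgt, tgt), features)
    return loss
```

With this setup, a sequence passed through the model with source and target set to the same domain contributes zero identity loss only when the model acts as the identity, which is the behavior the regularizer encourages.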

3 mentions: @KentaroTachiba
Keywords: transformer
Date: 2020/05/19 08:21

Related Entries

Fundamentals and Applications of Generative Adversarial Networks
0 users, 1 mentions 2019/02/05 14:18
CVPR2019 survey Domain Adaptation on Semantic Segmentation
0 users, 1 mentions 2019/04/23 11:15
[Tutorial Lecture] A Comparison of Direct Waveform Generation Models ("Neural Vocoders")
0 users, 3 mentions 2019/10/11 08:18
Workshop Machine Learning for Research 2020 … did it work?
0 users, 1 mentions 2020/02/10 09:18
Mcfly: An easy-to-use tool for deep learning for time series classification
0 users, 2 mentions 2020/04/15 06:11