[1906.01083] MelNet: A Generative Model for Audio in the Frequency Domain

Capturing high-level structure in audio waveforms is challenging because a single second of audio spans tens of thousands of timesteps. While long-range dependencies are difficult to model directly in the time domain, we show that they can be more tractably modelled in two-dimensional time-frequency representations such as spectrograms. By leveraging this representational advantage, in conjunction with a highly expressive probabilistic model and a multiscale generation procedure, we design a model capable of generating high-fidelity audio samples which capture structure at timescales that time-domain models have yet to achieve. We apply our model to a variety of audio generation tasks, including unconditional speech generation, music generation, and text-to-speech synthesis---showing improvements over previous approaches in both density estimates and human judgments.
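A minimal sketch (not from the paper) of the representational advantage the abstract describes: converting one second of raw audio into a mel spectrogram collapses tens of thousands of samples into a far shorter sequence of time-frequency frames. The hop length, FFT size, and number of mel bins below are illustrative assumptions, not MelNet's exact settings.

```python
# Sketch: why spectrograms shrink the temporal extent of audio.
# Hyperparameters (n_fft=1024, hop_length=256, n_mels=80) are assumed,
# not taken from the MelNet paper.
import numpy as np
import librosa

sr = 22050                # sample rate: one second = 22050 timesteps
y = np.random.randn(sr)   # stand-in for one second of audio

mel = librosa.feature.melspectrogram(
    y=y, sr=sr, n_fft=1024, hop_length=256, n_mels=80
)
print(y.shape)    # (22050,) -- tens of thousands of timesteps per second
print(mel.shape)  # (80, 87) -- long-range structure now spans only ~87
                  #             steps along the time axis
```

With dependencies between distant events compressed into dozens rather than tens of thousands of steps, an autoregressive model has a much shorter horizon over which to capture high-level structure.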

6 mentions: @kastnerkyle @kcimc @o_ob @carlcarrie @cynicalsecurity @galipeau
Date: 2019/06/05 06:48

Referring Tweets

@kastnerkyle Interested in a powerful new audio model for conditional and unconditional music, and single- and multi-speaker TTS on in-the-wild data? Check out MelNet: https://t.co/ThqjfZJAO8 Blog: https://t.co/1Yqi6Yu7C6 More samples: https://t.co/Ve3hDZs96d Really incredible results! https://t.co/ZhTRSvWRA7
@kcimc MelNet looks really promising for unconditional audio generation https://t.co/7RrbpQ5i8L https://t.co/PgengnpfAs
@o_ob "MelNet": a generative model for audio in the frequency domain, aimed at modeling frequency components. https://t.co/nqBmLDJd09 The website is also clear and well made. https://t.co/fBGyNGrNBm #ボイチェン研究