Tacotron 2 (without wavenet)

PyTorch implementation of Tacotron 2, from Natural TTS Synthesis by Conditioning WaveNet on Mel Spectrogram Predictions.

This implementation includes distributed and FP16 support and uses the LJ Speech dataset.

Distributed and FP16 support relies on work by Christian Sarofeen and NVIDIA's Apex library.

[Figure: alignment, predicted mel spectrogram, and target mel spectrogram]

Download demo audio from a model trained on LJ Speech, vocoded with Ryuichi Yamamoto's pre-trained Mixture of Logistics WaveNet:
"Scientists at the CERN laboratory say they have discovered a new particle."

Pre-requisites

  1. NVIDIA GPU + CUDA + cuDNN

Setup

  1. Download and extract the LJ Speech dataset
  2. Clone this repo: `git clone https://github.com/NVIDIA/tacotron2.git`
  3. cd into this repo: `cd tacotron2`
  4. Update the .wav paths: `sed -i -- 's,DUMMY,ljs_dataset_folder/wavs,g' filelists/*.txt`
    • Alternatively, set `load_mel_from_disk=True` in hparams.py and update the mel-spectrogram paths
  5. Install PyTorch 0.4
  6. Install python requirements or build docker image
    • Install python requirements: pip install -r requirements.txt
    • OR
    • Build docker image: docker build --tag tacotron2 .
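After step 4, every entry in the filelists should point at a real .wav file. A quick sanity check can catch a wrong `sed` replacement early. This sketch assumes the filelists use the `wav_path|transcript` line format, one entry per line:

```python
import glob
import os

def check_filelists(pattern="filelists/*.txt"):
    """Return filelist entries whose audio path does not exist.

    Assumes each line is 'wav_path|transcript' (the format used by
    the LJ Speech filelists in this repo).
    """
    missing = []
    for path in glob.glob(pattern):
        with open(path, encoding="utf-8") as f:
            for line in f:
                wav_path = line.split("|", 1)[0].strip()
                if wav_path and not os.path.isfile(wav_path):
                    missing.append(wav_path)
    return missing
```

An empty return value means all referenced audio files were found.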

Training

  1. `python train.py --output_directory=outdir --log_directory=logdir`
  2. (OPTIONAL) `tensorboard --logdir=outdir/logdir`
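The `--hparams` flag takes a comma-separated `key=value` string. A minimal sketch of how such a string can be turned into typed overrides (illustrative only; hparams.py does the real parsing):

```python
def parse_hparams(overrides):
    """Parse a 'key=value,key=value' override string into a dict.

    Values are coerced to bool/int/float where possible; everything
    else stays a string. Sketch only, not the repo's actual parser.
    """
    result = {}
    if not overrides:
        return result
    for pair in overrides.split(","):
        key, _, value = pair.partition("=")
        if value in ("True", "False"):
            result[key] = (value == "True")
        else:
            try:
                result[key] = int(value)
            except ValueError:
                try:
                    result[key] = float(value)
                except ValueError:
                    result[key] = value
    return result
```

For example, `parse_hparams("distributed_run=True,fp16_run=True")` yields `{"distributed_run": True, "fp16_run": True}`.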

Multi-GPU (distributed) and FP16 Training

  1. `python -m multiproc train.py --output_directory=outdir --log_directory=logdir --hparams=distributed_run=True,fp16_run=True`
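multiproc.py launches one training process per GPU. The gist can be sketched as building one command line per rank; the flag names below (`--rank`, `--n_gpus`) are assumptions for illustration, and the real launcher also wires up subprocesses and per-rank logs:

```python
import sys

def build_commands(n_gpus, argv):
    """Build one train.py command line per GPU rank.

    Simplified sketch of a multi-process launcher: each rank gets the
    same arguments plus its own rank and the total GPU count.
    """
    cmds = []
    for rank in range(n_gpus):
        cmds.append([sys.executable] + list(argv) +
                    ["--rank={}".format(rank), "--n_gpus={}".format(n_gpus)])
    return cmds
```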

Inference

When performing mel-spectrogram-to-audio synthesis with a WaveNet model, make sure Tacotron 2 and the WaveNet were trained on the same mel-spectrogram representation. To try a simple inference pipeline using Griffin-Lim:

  1. `jupyter notebook --ip=127.0.0.1 --port=31337`
  2. Load inference.ipynb
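Griffin-Lim recovers a waveform from a magnitude spectrogram by alternating between the time and frequency domains: each round it keeps the current phase estimate and restores the known magnitude. A self-contained NumPy sketch of the algorithm (illustrative only; the notebook uses the repo's own STFT code and hyperparameters):

```python
import numpy as np

def stft(x, n_fft=512, hop=128):
    # Hann-windowed frames -> one-sided complex spectrogram
    win = np.hanning(n_fft)
    n_frames = 1 + (len(x) - n_fft) // hop
    frames = np.stack([x[i * hop:i * hop + n_fft] * win for i in range(n_frames)])
    return np.fft.rfft(frames, axis=1)

def istft(S, n_fft=512, hop=128):
    # Windowed overlap-add inverse with window-power normalization
    win = np.hanning(n_fft)
    frames = np.fft.irfft(S, n=n_fft, axis=1) * win
    out = np.zeros((S.shape[0] - 1) * hop + n_fft)
    norm = np.zeros_like(out)
    for i, f in enumerate(frames):
        out[i * hop:i * hop + n_fft] += f
        norm[i * hop:i * hop + n_fft] += win ** 2
    return out / np.maximum(norm, 1e-8)

def griffin_lim(mag, n_iters=50, n_fft=512, hop=128):
    # Start from random phase; iterate istft -> stft, keeping the
    # estimated phase while restoring the known magnitude.
    phase = np.exp(2j * np.pi * np.random.rand(*mag.shape))
    S = mag * phase
    for _ in range(n_iters):
        x = istft(S, n_fft, hop)
        S = mag * np.exp(1j * np.angle(stft(x, n_fft, hop)))
    return istft(S, n_fft, hop)
```

Because the phase is discarded and re-estimated, the output is intelligible but noticeably lower quality than WaveNet's, which is why Griffin-Lim is only used here as a quick-start vocoder.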

nv-wavenet: Faster than real-time wavenet inference

Acknowledgements

This implementation uses code from the repos of Keith Ito and Prem Seetharaman, as described in our code.

We are inspired by Ryuichi Yamamoto's Tacotron PyTorch implementation.

We are thankful to the Tacotron 2 paper authors, especially Jonathan Shen, Yuxuan Wang and Zongheng Yang.