mirror of https://github.com/malarinv/tacotron2 synced 2026-03-08 09:42:34 +00:00

README.md: updating readme to explicitly mention that mel representation of WaveNet and Tacotron2 must be the same

This commit is contained in:
rafaelvalle
2018-06-14 11:25:42 -07:00
parent c67005f1be
commit 7eb045206c


@@ -38,9 +38,13 @@ wavenet](https://github.com/r9y9/wavenet_vocoder/)
1. `python -m multiproc train.py --output_directory=outdir --log_directory=logdir --hparams=distributed_run=True,fp16_run=True`
## Inference
When performing mel-spectrogram to audio synthesis with a WaveNet model, make sure Tacotron 2 and WaveNet were trained on the same mel-spectrogram representation. Follow these steps to use a simple inference pipeline using Griffin-Lim:
1. `jupyter notebook --ip=127.0.0.1 --port=31337`
2. load inference.ipynb
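As a minimal sketch of the mel-compatibility check the sentence above asks for, the snippet below compares the audio-processing hyperparameters of two models before chaining them. The parameter names (`filter_length`, `hop_length`, `n_mel_channels`, etc.) are illustrative assumptions, not the repository's exact hparam keys:

```python
def check_mel_compat(taco_hparams: dict, wavenet_hparams: dict) -> dict:
    """Return the hyperparameters on which the two models disagree.

    An empty result means the mel representations match and the
    Tacotron 2 output can be fed to the WaveNet vocoder directly.
    """
    return {
        key: (taco_hparams[key], wavenet_hparams.get(key))
        for key in taco_hparams
        if taco_hparams[key] != wavenet_hparams.get(key)
    }


# Hypothetical values for illustration only.
taco_mel = {
    "sampling_rate": 22050,
    "filter_length": 1024,
    "hop_length": 256,
    "win_length": 1024,
    "n_mel_channels": 80,
}
wavenet_mel = dict(taco_mel)  # identical -> safe to chain

mismatches = check_mel_compat(taco_mel, wavenet_mel)
assert not mismatches, f"mel hparam mismatch: {mismatches}"
```

If the dictionaries disagree (e.g. a different `hop_length`), the assertion fails and lists the offending keys, which is usually cheaper to discover up front than by listening to garbled audio.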
## Related repos
[nv-wavenet](https://github.com/NVIDIA/nv-wavenet/): Faster than real-time
wavenet inference