mirror of https://github.com/malarinv/tacotron2
README.md: updating readme to explicitly mention that mel representation of WaveNet and Tacotron2 must be the same
parent c67005f1be
commit 7eb045206c
@@ -38,9 +38,13 @@ wavenet](https://github.com/r9y9/wavenet_vocoder/)
1. `python -m multiproc train.py --output_directory=outdir --log_directory=logdir --hparams=distributed_run=True,fp16_run=True`
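
For reference, the `--hparams` value is a comma-separated list of `key=value` overrides. As a minimal sketch (assuming the repository's `hparams.create_hparams` accepts the same override string the training script passes through, which is an assumption here), the same overrides can be applied in Python:

```python
# Sketch only: apply the same hyperparameter overrides programmatically.
# Assumes hparams.create_hparams(hparams_string) parses a comma-separated
# "key=value" list, mirroring the --hparams flag used above.
from hparams import create_hparams

hparams = create_hparams("distributed_run=True,fp16_run=True")
print(hparams.distributed_run, hparams.fp16_run)
```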
## Inference

When performing Mel-Spectrogram to Audio synthesis with a WaveNet model, make sure Tacotron 2 and WaveNet were trained on the same mel-spectrogram representation. Follow these steps to use a simple inference pipeline using Griffin-Lim (a minimal inversion sketch follows the steps):
1. `jupyter notebook --ip=127.0.0.1 --port=31337`
2. load inference.ipynb
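
To illustrate the note above, here is a minimal Griffin-Lim sketch (not the repository's inference.ipynb): it inverts a Tacotron 2 log-mel spectrogram to a rough waveform with librosa. The STFT/mel parameters below (sampling rate, filter length, hop length, fmin/fmax) are assumptions taken from the repository's default hyperparameters and must match whatever the model was actually trained with; a mismatch is exactly the representation problem warned about above.

```python
import numpy as np
import librosa

def mel_to_wav_griffinlim(log_mel, sr=22050, n_fft=1024, hop_length=256,
                          fmin=0.0, fmax=8000.0, n_iter=60):
    """Invert a natural-log amplitude mel spectrogram (n_mels x frames)
    to audio via Griffin-Lim. Parameter defaults are assumptions and must
    match the mel representation the model was trained on."""
    mel = np.exp(log_mel)  # undo the log dynamic-range compression
    # Recover a linear-frequency magnitude spectrogram from the mel bands.
    stft_mag = librosa.feature.inverse.mel_to_stft(
        mel, sr=sr, n_fft=n_fft, power=1.0, fmin=fmin, fmax=fmax)
    # Reconstruct phase with Griffin-Lim and return the waveform.
    return librosa.griffinlim(stft_mag, n_iter=n_iter,
                              hop_length=hop_length, win_length=n_fft)

# Example use, with `log_mel` being the notebook's Tacotron 2 output
# (e.g. the post-net mel output) converted to a NumPy array:
#   wav = mel_to_wav_griffinlim(log_mel)
#   import soundfile as sf; sf.write("sample.wav", wav, 22050)
```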
## Related repos
[nv-wavenet](https://github.com/NVIDIA/nv-wavenet/): Faster than real-time WaveNet inference