mirror of https://github.com/malarinv/tacotron2
README.md: adding instructions to install apex
parent d6670c8ed7
commit 821bfeba5d
@@ -6,8 +6,7 @@ Wavenet On Mel Spectrogram Predictions](https://arxiv.org/pdf/1712.05884.pdf).
 This implementation includes **distributed** and **fp16** support
 and uses the [LJSpeech dataset](https://keithito.com/LJ-Speech-Dataset/).
 
-Distributed and FP16 support relies on work by Christian Sarofeen and NVIDIA's
-[Apex Library](https://github.com/nvidia/apex).
+Distributed and FP16 support uses NVIDIA's [Apex] and [AMP].
 
 Visit our [website] for audio samples using our published [Tacotron 2] and
 [WaveGlow] models.
@@ -26,7 +25,8 @@ Visit our [website] for audio samples using our published [Tacotron 2] and
 5. Update .wav paths: `sed -i -- 's,DUMMY,ljs_dataset_folder/wavs,g' filelists/*.txt`
     - Alternatively, set `load_mel_from_disk=True` in `hparams.py` and update mel-spectrogram paths
 6. Install [PyTorch 1.0]
-7. Install python requirements or build docker image
+7. Install [Apex]
+8. Install python requirements or build docker image
     - Install python requirements: `pip install -r requirements.txt`
 
 ## Training
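The new step 7 in the hunk above only links to the [Apex] repository without spelling out the commands. As a rough sketch (not part of this commit; the exact build flags vary with the Apex revision and the local CUDA toolchain, so defer to the Apex README), a source install typically looks like:

```bash
# Clone NVIDIA Apex and install it into the active Python environment.
git clone https://github.com/NVIDIA/apex
cd apex
# The C++/CUDA extension flags enable Apex's fused kernels; if they fail to
# build, a plain `pip install .` still provides the Python-only AMP backend.
pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./
```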
@@ -77,3 +77,5 @@ Wang and Zongheng Yang.
 [pytorch 1.0]: https://github.com/pytorch/pytorch#installation
 [website]: https://nv-adlr.github.io/WaveGlow
 [ignored]: https://github.com/NVIDIA/tacotron2/blob/master/hparams.py#L22
+[Apex]: https://github.com/nvidia/apex
+[AMP]: https://github.com/NVIDIA/apex/tree/master/apex/amp
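The two added reference definitions resolve the new step 7 and the [AMP] mention in the first hunk. A minimal post-install sanity check (an assumption, not something this README includes) is simply importing the module that the FP16 path relies on:

```bash
# Confirm that Apex and its AMP mixed-precision module import cleanly.
python -c "from apex import amp; print('Apex AMP import OK')"
```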