Commit Graph

130 Commits (0682eddfdcc6dececf1236d770ba52cf79c43ed9)

Author SHA1 Message Date
Malar Kannan 0682eddfdc 1. add griffin lim support 2. add denoiser 3. add support to handle old and new waveglow models 2019-09-26 10:53:09 +05:30
Malar Kannan a10a6d517e packaged taco2 2019-09-21 01:19:30 +05:30
Malar Kannan 1f83e8687c implemented a grpc server for entity api 2019-07-16 16:01:08 +05:30
Malar Kannan 755362518a update comments 2019-07-16 16:01:08 +05:30
Malar Kannan 9413fb73b9 move codestyle to black 2019-07-16 16:01:08 +05:30
Malar Kannan 108ce2b079 cleanup unused code and fix packaging issues 2019-07-16 16:01:00 +05:30
Malar Kannan 5f75aa0a0d integrate tacotron2/waveglow based tts server 2019-07-16 16:00:50 +05:30
Malar Kannan 4be2475cc1 load waveglow model from statedict 2019-07-03 14:05:10 +05:30
Malar Kannan 08ad9ce16e 1. clean-up redundant code 2. remove ffmpeg dependency using librosa api 3. remove waveglow submodule 2019-07-02 16:16:44 +05:30
Malar Kannan 505af768a0 1. create a class for the tts api 2. implement a grpc server for tts 3. add a demo client for the grpc service 4. update gitignore and requirements 5. cleanup 2019-07-01 14:47:55 +05:30
Malar Kannan ccd8ab42e7 add a function to cache all responses from corpus.txt 2019-06-28 18:44:24 +05:30
Malar Kannan 102c424eac add WORKFLOW.md and update final.py 2019-06-28 14:24:36 +05:30
Malar Kannan 81d15abb4d return audio stream from speech 2019-06-28 10:25:37 +05:30
Malar Kannan acf1b444a9 clean-up for cpu-only inference 2019-06-28 09:46:46 +05:30
Rafael Valle 131c1465b4 Merge pull request #188 from jybaek/fixed-waveglow-link: Fixed link to download waveglow from inference.py 2019-04-22 16:49:14 -07:00
jybaek d5321ff0ca Fixed link to download waveglow from inference.py 2019-04-19 15:21:09 +09:00
rafaelvalle c76ac3b211 README.md: clarifying terminology 2019-04-03 14:59:20 -07:00
rafaelvalle e3d2d0a5ef README.md: using proper nomenclature 2019-04-03 14:56:06 -07:00
rafaelvalle a992aea070 README.md: updating terminology 2019-04-03 14:54:45 -07:00
rafaelvalle eb2a171690 Merge branch 'master' of https://github.com/NVIDIA/tacotron2 2019-04-03 13:51:59 -08:00
rafaelvalle 821bfeba5d README.md: adding instructions to install apex 2019-04-03 13:51:36 -08:00
rafaelvalle d6670c8ed7 Dockerfile: updating to use latest pytorch and apex 2019-04-03 13:51:22 -08:00
rafaelvalle 0274619e45 train.py: using amp for mixed precision training 2019-04-03 13:42:00 -08:00
rafaelvalle bb20035586 inference.ipynb: adding fp16 inference 2019-04-03 13:41:11 -08:00
rafaelvalle 1480f82908 model.py: renaming variables, removing dropout from lstm cell state, removing conversions now handled by amp 2019-04-03 13:36:35 -08:00
rafaelvalle 087c86755f logger.py: using new pytorch api 2019-04-03 13:35:04 -08:00
Rafael Valle ece7d3f568 train.py: changing dataloader params given sampler 2019-03-19 13:47:01 -07:00
rafaelvalle f37998c59d train.py: shuffling at every epoch 2019-03-15 17:49:27 -07:00
rafaelvalle bff304f432 README.md: adding explanation on training from pre-trained model 2019-03-15 17:38:40 -07:00
rafaelvalle 3869781877 train.py: adding routine to warm start and ignore layers, e.g. embedding.weight 2019-03-15 17:34:27 -07:00
rafaelvalle bb67613493 hparams.py: adding ignore_layers argument to ignore text embedding layers when warm_starting 2019-03-15 17:28:50 -07:00
rafaelvalle af1f71a975 inference.ipynb: adding code to remove waveglow's bias 2019-03-15 16:54:54 -07:00
rafaelvalle fc0d34cfce stft.py: moving window_sum to cuda if magnitude is cuda 2019-03-15 14:36:56 -07:00
Rafael Valle f2c94d94fd Merge pull request #136 from GrzegorzKarchNV/master: Fixing concatenation error for fp16 distributed training 2019-02-01 12:10:42 -08:00
gkarch df4a466af2 Fixing concatenation error for fp16 distributed training 2019-02-01 09:55:59 +01:00
rafaelvalle 825ffa47d1 inference.ipynb: reverting fp16 inference for now 2018-12-08 21:26:01 -08:00
rafaelvalle 4d7b04120a inference.ipynb: changing waveglow inference to fp16 2018-12-05 22:14:35 -08:00
rafaelvalle 6e430556bd train.py: val logger on gpu 0 only 2018-11-27 22:03:11 -08:00
rafaelvalle 3973b3e495 hparams.py: distributed using tcp 2018-11-27 22:02:43 -08:00
rafaelvalle 52a30bb7b6 distributed.py: replacing to avoid distributed error 2018-11-27 21:01:26 -08:00
rafaelvalle 0ad65cc053 train.py: renaming variable to n_gpus 2018-11-27 21:00:05 -08:00
rafaelvalle 8300844fa7 hparams.py: removing 22khz 2018-11-27 20:56:52 -08:00
rafaelvalle f06063f746 train.py: renaming function, removing dataparallel 2018-11-27 18:04:12 -08:00
rafaelvalle 3045ba125b inference.ipynb: cleanup 2018-11-27 12:04:36 -08:00
rafaelvalle 4c4aca3662 README.md: layout 2018-11-27 11:59:05 -08:00
rafaelvalle 05dd8f91d2 README.md: adding submodule init to README 2018-11-27 11:55:40 -08:00
rafaelvalle 5d66c3deab adding waveglow submodule 2018-11-27 11:53:20 -08:00
Rafael Valle f02704f338 Merge pull request #96 from NVIDIA/clean_slate: Clean slate 2018-11-27 08:06:00 -08:00
rafaelvalle ba8cf36198 requirements.txt: removing pytorch 0.4 from requirements. upgrading to 1.0 2018-11-27 08:04:21 -08:00
rafaelvalle b5e0a93946 inference.ipynb: updating inference file with relative paths 2018-11-27 08:04:04 -08:00