mirror of https://github.com/malarinv/plume-asr.git synced 2026-03-07 20:02:34 +00:00

Plume ASR


Generates text transcripts from audio containing speech.


Table of Contents

Prerequisites

# apt install libsndfile-dev ffmpeg

Features

Installation

To install the package and its dependencies, run:

python setup.py install

or with pip

pip install .[all]

The installation should work on Python 3.6 or newer; it is untested on Python 2.7.

Usage

Library

Jasper

from plume.models.jasper_nemo.asr import JasperASR
asr_model = JasperASR("/path/to/model_config_yaml", "/path/to/encoder_checkpoint", "/path/to/decoder_checkpoint")  # Loads the models
TEXT = asr_model.transcribe(wav_data) # Returns the text spoken in the wav

Wav2Vec2

from plume.models.wav2vec2.asr import Wav2Vec2ASR
asr_model = Wav2Vec2ASR("/path/to/ctc_checkpoint", "/path/to/w2v_checkpoint", "/path/to/target_dictionary")  # Loads the models
TEXT = asr_model.transcribe(wav_data) # Returns the text spoken in the wav
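
Both examples above assume `wav_data` holds raw PCM audio. A minimal sketch of preparing it with the standard-library `wave` module follows; the exact format `transcribe` expects (sample rate, width, channels) is an assumption here, so check it against your model's training configuration:

```python
import io
import wave

# Build a one-second 16 kHz mono WAV in memory (silence) to stand in
# for a real recording; a 16-bit PCM file on disk reads the same way.
SAMPLE_RATE = 16000
buf = io.BytesIO()
with wave.open(buf, "wb") as w:
    w.setnchannels(1)          # mono
    w.setsampwidth(2)          # 16-bit samples
    w.setframerate(SAMPLE_RATE)
    w.writeframes(b"\x00\x00" * SAMPLE_RATE)

# Read the frames back as raw bytes, as you would from a recorded file.
buf.seek(0)
with wave.open(buf, "rb") as w:
    wav_data = w.readframes(w.getnframes())

print(len(wav_data))  # 32000 bytes: 16000 frames * 2 bytes each

# With a loaded model (checkpoint paths as in the snippets above):
# text = asr_model.transcribe(wav_data)
```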

Command Line

$ plume

Pretrained Models

Jasper: https://ngc.nvidia.com/catalog/models/nvidia:multidataset_jasper10x5dr/files?version=3
Wav2Vec2: https://github.com/pytorch/fairseq/blob/master/examples/wav2vec/README.md

License: MIT