- Provides a RETURNN wrapper around warp-transducer (https://github.com/HawkAaron/warp-transducer), which implements the RNN-Transducer (RNN-T) loss.
- Other references:
https://github.com/awni/transducer (reference implementation)
https://github.com/1ytic/warp-rnnt (CUDA-Warp RNN-Transducer, with PyTorch binding)
https://github.com/ZhengkunTian/rnn-transducer (PyTorch implementation)
Importing this module immediately compiles the library and TF module.
- returnn.extern.HawkAaronWarpTransducer.is_checked_out()
Checks if the git submodule is checked out.
- Return type: bool
- returnn.extern.HawkAaronWarpTransducer.init_warprnnt(verbose=False)
Initializes and compiles the library. Caches the TF module.
- Parameters: verbose (bool)
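A minimal setup sketch (hedged: the alias name and the assertion message are illustrative, and since the import itself already triggers compilation, the explicit init call mainly serves the verbose flag and otherwise hits the cache):

```python
# Sketch: explicit setup of the warp-transducer wrapper in RETURNN.
# Assumes a RETURNN checkout where the warp-transducer git submodule exists.
from returnn.extern import HawkAaronWarpTransducer as warp_rnnt
# Note: the import above already compiles the native library and TF module.

assert warp_rnnt.is_checked_out(), \
    "warp-transducer submodule missing; try `git submodule update --init --recursive`"
warp_rnnt.init_warprnnt(verbose=True)  # cached, so repeated calls are cheap
```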
- returnn.extern.HawkAaronWarpTransducer.rnnt_loss(acts, labels, input_lengths, label_lengths, blank_label=0)
Computes the RNNT loss between a sequence of activations and a ground truth labeling.
Args:
- acts: A 4-D Tensor of floats. The dimensions should be (B, T, U, V), where B is the minibatch index, T is the time index, U is the prediction network sequence length, and V indexes over activations for each symbol in the alphabet.
- labels: A 2-D Tensor of ints, the label sequences, padded so that all labels in the minibatch have the same length.
- input_lengths: A 1-D Tensor of ints, the number of time steps for each sequence in the minibatch.
- label_lengths: A 1-D Tensor of ints, the length of each label sequence in the minibatch.
- blank_label: int, the label value/index that the RNNT calculation should use as the blank label.
Returns:
- 1-D float Tensor, the cost of each example in the minibatch (as negative log probabilities).
- This op performs the softmax operation internally, so acts should contain unnormalized logits.
- The label reserved for the blank symbol should be label 0.
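A hedged end-to-end sketch of calling rnnt_loss; the shapes follow the argument description above, while the toy data and the eager-execution style are assumptions, not part of the RETURNN docs:

```python
# Sketch: compute the RNN-T loss for a toy batch. Assumes the module compiled
# successfully on import and that the op runs under your TF setup (eager here).
import numpy as np
import tensorflow as tf

from returnn.extern.HawkAaronWarpTransducer import rnnt_loss

B, T, U, V = 2, 10, 5, 6  # batch, time, prediction-net length (here max label len + 1), vocab

# Unnormalized logits; the softmax over V is applied inside the op.
acts = tf.constant(np.random.rand(B, T, U, V).astype("float32"))
# Padded labels; 0 is reserved for blank, so real labels are >= 1 here.
labels = tf.constant([[1, 2, 3, 4], [2, 3, 1, 1]], dtype=tf.int32)
input_lengths = tf.constant([10, 8], dtype=tf.int32)   # valid time steps per sequence
label_lengths = tf.constant([4, 2], dtype=tf.int32)    # valid labels per sequence

costs = rnnt_loss(acts, labels, input_lengths, label_lengths, blank_label=0)
# costs: shape (B,), negative log-probabilities; sum or average them for training.
loss = tf.reduce_sum(costs)
```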