# RecurrentTransform

class `RecurrentTransform.RecurrentTransformBase(force_gpu=False, layer=None, for_custom=False)`

Parameters: for_custom (bool) – When used with LSTMC + LSTMCustomOp, there are two instances of this class: one created during network initialization as part of the layer (for_custom == False), and another created via CustomLSTMFunctions (for_custom == True). The symbolic vars will look different; see self.create_vars_for_custom().
`name = None`

`copy_for_custom(force_gpu=True)`

Returns: a new instance of this class for LSTMCustomOp.

`create_vars_for_custom()`

Called via CustomLSTMFunctions.

`init_vars()`
`create_vars()`

Called for the regular theano.scan().

`add_param(v, name=None, **kwargs)`
`add_input(v, name=None)`
`add_state_var(initial_value, name=None)`
`add_var(v, name=None)`
`get_sorted_non_sequence_inputs()`
`get_sorted_custom_vars()`
`get_sorted_state_vars()`
`get_sorted_state_vars_initial()`
`set_sorted_state_vars(state_vars)`
`get_state_vars_seq(state_var)`
`step(y_p)`

Parameters: y_p (theano.Variable) – output of the last time frame, 2d (batch,dim).
Returns: z_re, updates
Return type: (theano.Variable, dict[theano.Variable, theano.Variable])

`cost()`

Return type: theano.Variable | None
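The contract the base class defines — register state variables up front, then get called once per time frame with the previous output and return a recurrent contribution plus state updates — can be sketched in plain Python. This is a simplified illustration without Theano; the class and variable names below are made up for the example and are not part of the actual API.

```python
# Plain-Python sketch of the RecurrentTransformBase contract:
# create_vars() registers state, step(y_p) returns (z_re, updates).
# All names here are illustrative, not the real API.

class SketchRecurrentTransform:
    name = None  # subclasses set a unique name used for lookup

    def __init__(self):
        self.state_vars = {}  # name -> current value
        self.create_vars()

    def create_vars(self):
        """Subclasses register their state variables here."""

    def add_state_var(self, initial_value, name):
        self.state_vars[name] = initial_value

    def step(self, y_p):
        """y_p: output of the last time frame. Returns (z_re, updates)."""
        raise NotImplementedError


class SketchDecayTransform(SketchRecurrentTransform):
    name = "decay_sketch"

    def create_vars(self):
        self.add_state_var(0.0, name="acc")

    def step(self, y_p):
        acc = self.state_vars["acc"]
        z_re = 0.5 * acc              # contribution added to the recurrent input
        updates = {"acc": acc + y_p}  # new state value for the next frame
        return z_re, updates


t = SketchDecayTransform()
z, upd = t.step(2.0)
t.state_vars.update(upd)
```

In the real class, `step()` operates on symbolic `theano.Variable`s inside `theano.scan()` and the updates dict maps state variables to their next-step expressions; the control flow is the same.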
class `RecurrentTransform.AttentionTest(force_gpu=False, layer=None, for_custom=False)`
`name = 'test'`
`create_vars()`
`step(y_p)`
class `RecurrentTransform.DummyTransform(force_gpu=False, layer=None, for_custom=False)`
`name = 'none'`
`step(y_p)`
class `RecurrentTransform.DynamicTransform(force_gpu=False, layer=None, for_custom=False)`
`name = 'rnn'`
`create_vars()`
`step(y_p)`
class `RecurrentTransform.BatchNormTransform(force_gpu=False, layer=None, for_custom=False)`
`name = 'batch_norm'`
`create_vars()`
`batch_norm(h, use_shift=True, use_std=True, use_sample=0.0)`
`step(y_p)`
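The `batch_norm(h, use_shift=True, use_std=True, use_sample=0.0)` helper normalizes activations, with flags to skip the standard-deviation division and the learned shift. A plain-Python sketch of what such per-batch normalization computes (simplified: no Theano, no `use_sample` running-average mixing; `gamma`/`beta` stand in for the learned scale/shift parameters and are assumed names, not the actual ones):

```python
import math

def batch_norm_sketch(h, use_shift=True, use_std=True, eps=1e-5,
                      gamma=1.0, beta=0.0):
    """Sketch of batch normalization over a 1-d batch h.

    use_std=False skips dividing by the standard deviation;
    use_shift=False skips adding the learned shift beta.
    """
    mean = sum(h) / len(h)
    centered = [x - mean for x in h]
    if use_std:
        var = sum(x * x for x in centered) / len(h)
        centered = [x / math.sqrt(var + eps) for x in centered]
    out = [gamma * x for x in centered]
    if use_shift:
        out = [x + beta for x in out]
    return out

normed = batch_norm_sketch([1.0, 2.0, 3.0])
```

The real method applies this to symbolic tensors inside the recurrent step, so the statistics are taken over the batch at each time frame.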
class `RecurrentTransform.LM(force_gpu=False, layer=None, for_custom=False)`
`name = 'lm'`
`create_vars()`
`step(y_p)`
class `RecurrentTransform.AttentionBase(force_gpu=False, layer=None, for_custom=False)`
`base = None`
`name = 'attention_base'`
`attrs`
`create_vars()`
`default_updates()`
`step(y_p)`
`distance(C, H)`
`beam(X, beam_idx=None)`
`align(w_i, Q)`
`softmax(D, I)`
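The `softmax(D, I)` signature suggests a softmax over distances/energies `D` restricted by an index mask `I` (so that padded frames of the base sequence get zero attention weight). A masked softmax can be sketched in plain Python; the function name and the 0/1 masking convention below are assumptions for illustration, not taken from the source:

```python
import math

def masked_softmax_sketch(scores, mask):
    """Softmax over `scores`; positions where mask is 0 get exactly zero weight.

    scores: list of energies (e.g. negative distances), one per base frame.
    mask:   0/1 index flags marking valid (non-padded) frames.
    """
    # Subtract the max over valid entries for numerical stability.
    valid = [s for s, m in zip(scores, mask) if m]
    s_max = max(valid)
    exps = [math.exp(s - s_max) if m else 0.0 for s, m in zip(scores, mask)]
    z = sum(exps)
    return [e / z for e in exps]

w = masked_softmax_sketch([1.0, 2.0, 3.0], [1, 1, 0])
```

The masked entries are excluded before normalization, so the remaining weights still sum to one.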
class `RecurrentTransform.AttentionList(force_gpu=False, layer=None, for_custom=False)`

Attention over a list of bases.

`name = 'attention_list'`
`init(i)`
`create_bias(n, name, i=-1)`
`create_weights(n, m, name, i=-1)`
`create_vars()`
`item(name, i)`
`get(y_p, i, g)`
`attend(y_p)`
`cost()`
class `RecurrentTransform.AttentionAlign(force_gpu=False, layer=None, for_custom=False)`

Alignment-controlled attention.

`name = 'attention_align'`
`create_vars()`
`attend(y_p)`
class `RecurrentTransform.AttentionInverted(force_gpu=False, layer=None, for_custom=False)`

Alignment-controlled attention.

`name = 'attention_inverted'`
`create_vars()`
`attend(y_p)`
class `RecurrentTransform.AttentionSegment(force_gpu=False, layer=None, for_custom=False)`

Alignment-controlled attention over segments.

`name = 'attention_segment'`
`create_bias(n, name, i=-1)`
`create_weights(n, m, name, i=-1)`
`create_vars()`
`make_index(inv_att, ind)`
`calc_temperature(method='epoch', min_dist=None)`
`attend(y_p)`
class `RecurrentTransform.AttentionTime(force_gpu=False, layer=None, for_custom=False)`

Concatenates time-aligned base elements into a single list element.

`name = 'attention_time'`
`make_base()`
`create_vars()`
`default_updates()`
class `RecurrentTransform.AttentionTree(force_gpu=False, layer=None, for_custom=False)`

Attention over a hierarchy of bases at different time resolutions.

`name = 'attention_tree'`
`attend(y_p)`
class `RecurrentTransform.AttentionBin(force_gpu=False, layer=None, for_custom=False)`

Pruning of hypotheses in base[0] by attending over versions at lower time resolutions.

`name = 'attention_bin'`
`attend(y_p)`
class `RecurrentTransform.AttentionTimeGauss(force_gpu=False, layer=None, for_custom=False)`
`name = 'attention_time_gauss'`
`create_vars()`
`step(y_p)`
`cost()`
`RecurrentTransform.get_dummy_recurrent_transform(recurrent_transform_name, n_out=5, n_batches=2, n_input_t=2, n_input_dim=2)`

Return type: RecurrentTransformBase

This function is a useful helper for testing/debugging.
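Each transform class above carries a unique `name` attribute (`'test'`, `'none'`, `'rnn'`, `'batch_norm'`, `'lm'`, the various `'attention_*'` names), which is how a transform can be selected by string, as in `get_dummy_recurrent_transform(recurrent_transform_name, ...)`. One common way such a name-based lookup works can be sketched as follows; the registry mechanics and all class names here are assumptions for illustration, not read from the source:

```python
# Sketch of name-based lookup over a class hierarchy, where each
# subclass declares a unique `name` class attribute.

class TransformSketch:
    name = None

class DummySketch(TransformSketch):
    name = "none"

class DynamicSketch(TransformSketch):
    name = "rnn"

def get_transform_class(name):
    """Walk the direct subclasses and return the one whose `name` matches."""
    for cls in TransformSketch.__subclasses__():
        if cls.name == name:
            return cls
    raise KeyError("unknown recurrent transform: %r" % name)

cls = get_transform_class("rnn")
```

A dummy-transform helper would then instantiate the looked-up class with small synthetic dimensions (`n_out`, `n_batches`, ...) so its symbolic graph can be built and inspected in isolation.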