Return type: theano.Variable

Will flatten the first two dimensions and leave the others as is.
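
The flattening described above can be sketched in NumPy (hypothetical helper name; the actual TheanoUtil function operates on symbolic theano.Variable objects):

```python
import numpy as np

def flatten_first_two(x):
    # Merge the first two dimensions, keep the remaining ones as is.
    return x.reshape((x.shape[0] * x.shape[1],) + x.shape[2:])

x = np.zeros((5, 3, 7))
# flatten_first_two(x).shape == (15, 7)
```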

TheanoUtil.class_idx_seq_to_1_of_k(seq, num_classes, dtype='float32')[source]
  • seq (theano.Variable) – ndarray with indices
  • num_classes (int|theano.Variable) – number of classes
  • dtype (str) – e.g. "float32"
Returns: ndarray with one added dimension of size num_classes. That is the one-hot encoding. This function is like theano.tensor.extra_ops.to_one_hot, but we can handle multiple dimensions.
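
The semantics can be sketched in NumPy (hypothetical helper name, not the Theano implementation): indexing an identity matrix handles indices of any ndim and appends one axis of size num_classes:

```python
import numpy as np

def one_hot(seq, num_classes, dtype="float32"):
    # Each index selects one row of the identity matrix, so the
    # output gets one extra trailing axis of size num_classes.
    return np.eye(num_classes, dtype=dtype)[seq]

idx = np.array([[0, 2], [1, 0]])
# one_hot(idx, 3).shape == (2, 2, 3)
```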

TheanoUtil.tiled_eye(n1, n2, dtype='float32')[source]
TheanoUtil.windowed_batch(source, window)[source]
  • source (theano.TensorVariable) – 3d tensor of shape (n_time, n_batch, n_dim)
  • window (int|theano.Variable) – window size

Returns: tensor of shape (n_time, n_batch, window * n_dim)
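
A NumPy sketch of the windowing (hypothetical helper name; zero-padding at the sequence borders is an assumption of this sketch):

```python
import numpy as np

def windowed_batch_np(source, window):
    # source: (n_time, n_batch, n_dim); the window is centered at each frame.
    n_time, n_batch, n_dim = source.shape
    w2 = window // 2
    pad_left = np.zeros((w2, n_batch, n_dim), dtype=source.dtype)
    pad_right = np.zeros((window - 1 - w2, n_batch, n_dim), dtype=source.dtype)
    padded = np.concatenate([pad_left, source, pad_right], axis=0)
    # One shifted view per window offset, stacked on the feature axis
    # -> (n_time, n_batch, window * n_dim).
    return np.concatenate([padded[i:i + n_time] for i in range(window)], axis=2)
```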

TheanoUtil.delta_batch(source, window)[source]
  • source (theano.TensorVariable) – 3d tensor of shape (n_time, n_batch, n_dim)
  • window (int|theano.Variable) – window size

Returns: tensor of shape (n_time, n_batch, window * n_dim)

Similar to numpy.diff. Also called delta. TODO: implement with a conv op.

TheanoUtil.context_batched(source, window)[source]

Same as windowed_batch, but with the window center at the end of the window.
  • source (theano.TensorVariable) – 3d tensor of shape (n_time, n_batch, n_dim)
  • window (int|theano.Variable) – window size

Returns: tensor of shape (n_time, n_batch, window * n_dim)

TheanoUtil.window_batch_timewise(t, b, w, full_index)[source]
TheanoUtil.slice_for_axis(axis, s)[source]
TheanoUtil.downsample(source, axis, factor, method='average')[source]
TheanoUtil.upsample(source, axis, factor, method='nearest-neighbor', target_axis_len=None)[source]
TheanoUtil.pad(source, axis, target_axis_len, pad_value=None)[source]
TheanoUtil.chunked_time_reverse(source, chunk_size)[source]
  • source – >=1d array (time,…)
  • chunk_size (int) – chunk size

Returns: tensor like source

Will not reverse the whole time dim, but only every time chunk. E.g. source=[0 1 2 3 4 5 6], chunk_size=3, returns [2 1 0 5 4 3 0]. (Padded with 0; recovers the original size.)
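
The example above can be reproduced with a NumPy sketch (hypothetical helper name, not the Theano implementation):

```python
import numpy as np

def chunked_time_reverse_np(source, chunk_size):
    # Zero-pad the time axis to a multiple of chunk_size,
    # reverse within each chunk, then cut back to the original length.
    n_time = source.shape[0]
    pad = (-n_time) % chunk_size
    padded = np.concatenate(
        [source, np.zeros((pad,) + source.shape[1:], dtype=source.dtype)])
    chunks = padded.reshape((-1, chunk_size) + source.shape[1:])
    return chunks[:, ::-1].reshape(padded.shape)[:n_time]

# chunked_time_reverse_np(np.array([0, 1, 2, 3, 4, 5, 6]), 3)
# -> [2, 1, 0, 5, 4, 3, 0]
```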

class TheanoUtil.GradDiscardOutOfBound(lower_bound, upper_bound)[source]
grad(args, g_outs)[source]
TheanoUtil.grad_discard_out_of_bound(x, lower_bound, upper_bound)[source]
TheanoUtil.gaussian_filter_1d(x, sigma, axis, window_radius=40)[source]

Filter 1d input with a Gaussian, using mode nearest. x is expected to be 2D/3D of shape (time, batch, …). Adapted from external code.

TheanoUtil.log_sum_exp(x, axis)[source]
TheanoUtil.max_filtered(x, axis, index)[source]
TheanoUtil.log_sum_exp_index(x, axis, index)[source]
TheanoUtil.global_softmax(z, index, mode)[source]
  • z (theano.Variable) – 3D array. time*batch*feature
  • index (theano.Variable) – 2D array, 0 or 1, time*batch
Returns: 3D array. exp(z) / Z, where Z = sum(exp(z), axis=[0,2]) / z.shape[0].
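
A NumPy sketch of the stated formula only; the mode argument is ignored, and masking padded frames with index before normalizing is an assumption of this sketch:

```python
import numpy as np

def global_softmax_np(z, index):
    # z: (time, batch, feature); index: (time, batch), 0 or 1.
    e = np.exp(z) * index[:, :, None]  # zero out padded frames (assumption)
    Z = e.sum(axis=(0, 2), keepdims=True) / z.shape[0]
    return e / Z
```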

Parameters:z – numpy.ndarray or Theano Var (eval-able), 2D time*features
TheanoUtil.complex_elemwise_mult(a, b, axis=-1)[source]
TheanoUtil.complex_bound(a, axis=-1)[source]
TheanoUtil.complex_dot(a, b)[source]
TheanoUtil.indices_in_flatten_array(ndim, shape, *args)[source]

We expect that all args can be broadcasted together. So, if we have some array A with ndim&shape as given, A[args] would give us a subtensor. We return the indices so that A[args].flatten() and A.flatten()[indices] are the same.
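
For the full-index case, NumPy's ravel_multi_index computes the same mapping, which illustrates the invariant:

```python
import numpy as np

A = np.arange(12).reshape(3, 4)
rows = np.array([0, 2])
cols = np.array([1, 3])
# Flat indices such that A[rows, cols] == A.flatten()[indices].
indices = np.ravel_multi_index((rows, cols), A.shape)
```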

TheanoUtil.circular_convolution(a, b)[source]
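
The intended semantics of circular_convolution can be sketched via the FFT identity (an assumption of this sketch: (a * b)[k] = sum_i a[i] * b[(k - i) mod n]):

```python
import numpy as np

def circular_convolution_np(a, b):
    # Circular convolution equals pointwise multiplication in Fourier space.
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

# Convolving with a unit impulse returns b unchanged:
# circular_convolution_np([1, 0, 0], [1.0, 2.0, 3.0]) ~ [1, 2, 3]
```
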
TheanoUtil.unroll_scan(fn, sequences=(), outputs_info=(), non_sequences=(), n_steps=None, go_backwards=False)[source]

Helper function to unroll for loops. Can be used to unroll theano.scan. The parameter names are identical to theano.scan; please refer to the theano.scan documentation for more information.

Note that this function does not support the truncate_gradient setting from theano.scan.

Code adapted from an external source. Thank you!


fn : function

Function that defines calculations at each step.

sequences : TensorVariable or list of TensorVariables

List of TensorVariable with sequence data. The function iterates over the first dimension of each TensorVariable.

outputs_info : list of TensorVariables

List of tensors specifying the initial values for each recurrent value.

non_sequences : list of TensorVariables

List of theano.shared variables that are used in the step function.

n_steps : int

Number of steps to unroll.

go_backwards : bool

If true, the recursion starts at sequences[-1] and iterates backwards.


Tuple of the form (outputs, updates).

outputs is a list of TensorVariables. Each element in the list gives the recurrent values at each time step.

updates is an empty dict for now.
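
The unrolling idea can be sketched in pure Python (hypothetical helper name, operating on plain lists rather than symbolic variables): the step function is simply called once per step in an explicit loop, instead of going through scan.

```python
def unroll_scan_sketch(fn, sequences=(), outputs_info=(),
                       non_sequences=(), n_steps=None, go_backwards=False):
    if n_steps is None:
        n_steps = len(sequences[0])
    steps = range(n_steps - 1, -1, -1) if go_backwards else range(n_steps)
    prev = list(outputs_info)
    per_step = []
    for i in steps:
        # Step inputs follow the scan convention:
        # current sequence elements, then previous outputs, then non-sequences.
        out = fn(*([seq[i] for seq in sequences] + prev + list(non_sequences)))
        if not isinstance(out, (list, tuple)):
            out = [out]
        prev = list(out)
        per_step.append(out)
    # Transpose: list of per-step outputs -> one list per output variable.
    outputs = [list(vals) for vals in zip(*per_step)]
    return outputs, {}

# Cumulative sum:
# unroll_scan_sketch(lambda x, acc: acc + x, sequences=[[1, 2, 3]], outputs_info=[0])
# -> ([[1, 3, 6]], {})
```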

class TheanoUtil.Contiguous[source]
grad(inputs, output_grads)[source]
class TheanoUtil.DumpOp(filename, container=None, with_grad=True, parent=None, step=1)[source]
view_map = {0: [0]}[source]
perform(node, inputs, output_storage)[source]
grad(inputs, output_grads)[source]
TheanoUtil.layer_normalization(x, bias=None, scale=None, eps=1e-05)[source]

Layer Normalization: x is mean- and variance-normalized along its feature dimension. After that, we allow a bias and a rescale; this is supposed to be trainable.
  • x – 3d tensor (time, batch, dim) (or any ndim; the last dim is expected to be the feature dim)
  • bias – 1d tensor (dim) or None
  • scale – 1d tensor (dim) or None
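
A NumPy sketch of the normalization over the last (feature) axis:

```python
import numpy as np

def layer_norm_np(x, bias=None, scale=None, eps=1e-5):
    # Normalize along the feature axis, then optional rescale and bias.
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    out = (x - mean) / np.sqrt(var + eps)
    if scale is not None:
        out = out * scale
    if bias is not None:
        out = out + bias
    return out
```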

TheanoUtil.print_to_file(filename, x, argmax=None, sum=None, shape=False)[source]
Parameters:x – shape (T,D)

Returns: cosine similarity matrix, shape (T,T), zeroed at the diagonal and upper triangle.
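
A NumPy sketch of the described return value (hypothetical helper name):

```python
import numpy as np

def cosine_similarity_matrix(x):
    # x: (T, D). Normalize rows, take pairwise dot products,
    # then zero the diagonal and the upper triangle.
    xn = x / np.linalg.norm(x, axis=1, keepdims=True)
    return np.tril(xn @ xn.T, k=-1)
```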