returnn.frontend.reduce

Reduce

returnn.frontend.reduce.reduce(source: Tensor[T], *, mode: str, axis: Dim | Sequence[Dim], use_mask: bool = True) → Tensor[T]

Reduce the tensor along the given axis (or axes).

Parameters:
  • source

  • mode – “sum”, “max”, “min”, “mean”, “logsumexp”, “any”, “all”, “argmin”, “argmax”

  • axis

  • use_mask – if True (default), use the time mask (part of dim tag) to ignore padding frames

Returns:

tensor with axis removed
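The use_mask behavior can be illustrated with a plain-Python sketch. This is not the RETURNN API itself, just the documented semantics: padding frames beyond each sequence length are excluded from the reduction when use_mask=True.

```python
# A padded batch: two sequences of lengths 3 and 2, padded to length 4.
batch = [[1.0, 2.0, 3.0, 0.0],
         [4.0, 5.0, 0.0, 0.0]]
seq_lens = [3, 2]

# Masked mean over the time axis, as reduce(..., mode="mean", use_mask=True)
# would compute: only the first seq_len frames of each sequence are reduced.
# (In RETURNN the mask comes from the dim tag of the time axis.)
masked_mean = [sum(seq[:n]) / n for seq, n in zip(batch, seq_lens)]
print(masked_mean)  # [2.0, 4.5]

# With use_mask=False, the padding frames would be included:
unmasked_mean = [sum(seq) / len(seq) for seq in batch]
print(unmasked_mean)  # [1.5, 2.25]
```

The same masking applies analogously to the other modes, e.g. "max" ignores padding frames when finding the maximum.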

returnn.frontend.reduce.reduce_sum(source: Tensor[T], *, axis: Dim | Sequence[Dim], use_mask: bool = True) → Tensor[T]

Reduce the tensor along the given axis (or axes); same as reduce() with mode=“sum”.

Parameters:
  • source

  • axis

  • use_mask – if True (default), use the time mask (part of dim tag) to ignore padding frames

Returns:

tensor with axis removed

returnn.frontend.reduce.reduce_max(source: Tensor[T], *, axis: Dim | Sequence[Dim], use_mask: bool = True) → Tensor[T]

Reduce the tensor along the given axis (or axes); same as reduce() with mode=“max”.

Parameters:
  • source

  • axis

  • use_mask – if True (default), use the time mask (part of dim tag) to ignore padding frames

Returns:

tensor with axis removed

returnn.frontend.reduce.reduce_min(source: Tensor[T], *, axis: Dim | Sequence[Dim], use_mask: bool = True) → Tensor[T]

Reduce the tensor along the given axis (or axes); same as reduce() with mode=“min”.

Parameters:
  • source

  • axis

  • use_mask – if True (default), use the time mask (part of dim tag) to ignore padding frames

Returns:

tensor with axis removed

returnn.frontend.reduce.reduce_mean(source: Tensor[T], *, axis: Dim | Sequence[Dim], use_mask: bool = True) → Tensor[T]

Reduce the tensor along the given axis (or axes); same as reduce() with mode=“mean”.

Parameters:
  • source

  • axis

  • use_mask – if True (default), use the time mask (part of dim tag) to ignore padding frames

Returns:

tensor with axis removed

returnn.frontend.reduce.reduce_logsumexp(source: Tensor[T], *, axis: Dim | Sequence[Dim], use_mask: bool = True) → Tensor[T]

Reduce the tensor along the given axis (or axes); same as reduce() with mode=“logsumexp”.

Parameters:
  • source

  • axis

  • use_mask – if True (default), use the time mask (part of dim tag) to ignore padding frames

Returns:

tensor with axis removed
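The “logsumexp” mode computes log(sum(exp(x))) over the reduced axis. A pure-Python sketch of the per-axis computation (not the RETURNN implementation), using the standard max-shift for numerical stability:

```python
import math

xs = [1.0, 2.0, 3.0]

# Numerically stable log-sum-exp: shift by the maximum before exponentiating,
# so that the largest exponent is exp(0) = 1 and nothing overflows.
m = max(xs)
lse = m + math.log(sum(math.exp(x - m) for x in xs))

# Equal (up to rounding) to the naive form log(sum(exp(x))):
naive = math.log(sum(math.exp(x) for x in xs))
print(lse, naive)
```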

returnn.frontend.reduce.reduce_any(source: Tensor[T], *, axis: Dim | Sequence[Dim], use_mask: bool = True) → Tensor[T]

Reduce the tensor along the given axis (or axes); same as reduce() with mode=“any”.

Parameters:
  • source

  • axis

  • use_mask – if True (default), use the time mask (part of dim tag) to ignore padding frames

Returns:

tensor with axis removed

returnn.frontend.reduce.reduce_all(source: Tensor[T], *, axis: Dim | Sequence[Dim], use_mask: bool = True) → Tensor[T]

Reduce the tensor along the given axis (or axes); same as reduce() with mode=“all”.

Parameters:
  • source

  • axis

  • use_mask – if True (default), use the time mask (part of dim tag) to ignore padding frames

Returns:

tensor with axis removed

returnn.frontend.reduce.reduce_argmin(source: Tensor[T], *, axis: Dim | Sequence[Dim], use_mask: bool = True) → Tensor[T]

Reduce the tensor along the given axis (or axes); same as reduce() with mode=“argmin”.

Parameters:
  • source

  • axis

  • use_mask – if True (default), use the time mask (part of dim tag) to ignore padding frames

Returns:

tensor with axis removed

returnn.frontend.reduce.reduce_argmax(source: Tensor[T], *, axis: Dim | Sequence[Dim], use_mask: bool = True) → Tensor[T]

Reduce the tensor along the given axis (or axes); same as reduce() with mode=“argmax”.

Parameters:
  • source

  • axis

  • use_mask – if True (default), use the time mask (part of dim tag) to ignore padding frames

Returns:

tensor with axis removed

returnn.frontend.reduce.reduce_out(source: Tensor, *, mode: str, num_pieces: int, out_dim: Dim | None = None) → Tensor

Combination of SplitDimsLayer applied to the feature dim and ReduceLayer applied to the resulting feature dim. This can e.g. be used to implement maxout (with mode=“max”).

Parameters:
  • source

  • mode – “sum” or “max” or “mean”

  • num_pieces – how many elements to reduce. The output dimension will be input.dim // num_pieces.

  • out_dim

Returns:

out, with feature_dim set to new dim
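A pure-Python sketch of the maxout case (mode=“max”), assuming the feature dim is split into contiguous groups of num_pieces elements; this illustrates the documented semantics, not the RETURNN implementation:

```python
# Feature vector of dim 6, reduced with num_pieces=2 -> output dim 3.
features = [0.3, 0.9, -1.0, 0.5, 2.0, 1.5]
num_pieces = 2
out_dim = len(features) // num_pieces  # input.dim // num_pieces == 3

# Split the feature dim into [out_dim, num_pieces] groups and take the
# max over each group of pieces (maxout):
maxout = [max(features[i * num_pieces:(i + 1) * num_pieces])
          for i in range(out_dim)]
print(maxout)  # [0.9, 0.5, 2.0]
```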

returnn.frontend.reduce.top_k(source: Tensor, *, axis: Dim | Sequence[Dim], k: int | Tensor | None = None, k_dim: Dim | None = None, sorted: bool = True) → Tuple[Tensor, Tensor | Sequence[Tensor], Dim]

Basically wraps tf.nn.top_k. Returns the top-k values and their indices.

For an input [B,D] with axis=D, the output values and indices have shape [B,K].

It’s somewhat similar to reduce() with max and argmax: the axis dim is reduced, and a new dim for K is added.

Axis can also cover multiple axes, such as [beam,classes]. In that case, there is not a single “indices” sub-layer, but sub-layers “indices0” .. “indices{N-1}”, one per axis, in the same order.

All other axes are treated as batch dims.

Parameters:
  • source

  • axis – the axis to do the top_k on, which is reduced, or a sequence of axes

  • k – the “K” in “TopK”

  • k_dim – the new axis dim for K. If not provided, it will be created automatically.

  • sorted

Returns:

values, indices (sequence if axis is a sequence), k_dim
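For the single-axis case, the returned values and indices can be sketched in pure Python (an illustration of the semantics, not the RETURNN or TF implementation):

```python
# One batch entry with D=5 values; take the top k=2, sorted descending
# (as with sorted=True).
values = [0.1, 0.7, 0.3, 0.9, 0.5]
k = 2

# Pair each value with its index along the reduced axis, sort by value,
# and keep the k best:
top = sorted(enumerate(values), key=lambda iv: iv[1], reverse=True)[:k]
top_values = [v for _, v in top]
top_indices = [i for i, _ in top]
print(top_values)   # [0.9, 0.7]
print(top_indices)  # [3, 1]
```

The D axis is reduced away and replaced by the new K axis (k_dim) in both outputs.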