returnn.tf.frontend_layers.loop

Provides Loop, for defining an explicit loop over an axis, i.e. a RecLayer subnetwork with the per-step calculation defined explicitly.

class returnn.tf.frontend_layers.loop.Loop(*, max_seq_len: Tensor | None = NotSpecified, optimize_move_layers_out: bool | None = NotSpecified, unroll: bool = NotSpecified, axis: Dim | None = NotSpecified, debug: bool | None = NotSpecified, name: str = 'loop')[source]

This represents a RecLayer subnetwork in RETURNN, i.e. where the calculation per step is defined explicitly.

(For a RecLayer with a predefined unit, see Rec, or for example Lstm.)

To define a loop like the one in this pseudo Python code:

x  # given, shape (batch, time, dim)
h = Zeros([batch, dim])()  # initial state, shape (batch, dim)
out = []
for t in range(x.max_seq_len):
  x_lin = Linear(dim)(x[t])
  h_prev = h
  h = Linear(dim)(x_lin + h_prev)
  out.append(h)

h  # final state
out  # shape (time, batch, dim)

You would write:

dim = nn.FeatureDim(...)
loop = nn.Loop(axis=...)
loop.state.h = nn.zeros([batch_dim, dim])  # initial state
with loop:
  x_t = loop.unstack(x)
  x_lin = nn.Linear(dim)(x_t)
  loop.state.h = nn.Linear(dim)(x_lin + loop.state.h)
  out = loop.stack(loop.state.h)
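
After leaving the with block, out (from loop.stack()) carries the loop (time) axis and corresponds to out in the pseudo code above. As a minimal, hedged sketch of how the final state might be obtained, assuming last() (documented below) can be used for this after the scope has been exited:

h_final = loop.last(loop.state.h)  # assumed usage: final state from the last iteration, shape (batch, dim)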

state is a _LoopStateHolder and manages the recurrent state.

This code must be run within a Module.forward() or with some active global name context (NameCtx).

This API is currently in development, and might change. See: https://github.com/rwth-i6/returnn_common/issues/16

property has_entered_scope: bool[source]

Whether we have entered the scope, i.e. we are defining the per-step calculation.

property state: _LoopStateHolder | State[source]

The state holder inside the loop.
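
A minimal sketch of the intended read/write semantics, reusing the names from the example above: reading loop.state.h inside the loop gives the value from the previous iteration (or the initial value in the first iteration), and assigning to it defines the value for the next iteration.

loop.state.h = nn.zeros([batch_dim, dim])  # initial value, seen in the first iteration
with loop:
  h_prev = loop.state.h                    # value from the previous iteration (or the initial value)
  loop.state.h = nn.Linear(dim)(h_prev)    # value carried over to the next iteration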

unstack(source: Tensor, *, name: str | None = None) Tensor[source]

Unrolls over the specified axis, and provides each frame in each loop iteration. The axis can be specified globally for the Loop instance (recommended) or locally here (not recommended).

stack(source: Tensor, *, name: str | None = None) Tensor[source]

Accumulates the frames of source over the loop iterations, making the full sequence accessible outside the loop.

last(source: Tensor, *, name: str | None = None) Tensor[source]

Gets the last value from source.

end(source: Tensor, *, include_eos: bool) Tensor[source]

For loops with a dynamic ending condition (which might not use unstack()), this defines the ending condition; see the sketch after the parameter list.

Parameters:
  • source – the ending condition

  • include_eos – if True, last() and stack() include the current (ending) frame, otherwise they do not
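
For illustration, a minimal hedged sketch of a loop with a dynamic ending condition, e.g. a decoder that stops once an end-of-sequence label is produced. Here eos_idx and the per-step label computation are placeholders, and the elementwise comparison on tensors is assumed to be available:

loop = nn.Loop(max_seq_len=...)  # optionally bound the number of steps
with loop:
  label = ...                                    # per-step output label (computation elided)
  loop.end(label == eos_idx, include_eos=False)  # assumed: elementwise compare against the EOS index
  labels = loop.stack(label)                     # with include_eos=False, excludes the ending frame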

property max_seq_len: Tensor | None[source]

The maximum sequence length, in case the loop length is dynamic via end().

property iter_idx: Tensor[source]

The index of the current iteration, inside the loop. This is a scalar, starting at 0.
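
For example, iter_idx can be used in an ending condition to bound the number of iterations; a hedged sketch (max_steps is a placeholder, and the comparison operator on the tensor is assumed):

with loop:
  ...
  loop.end(loop.iter_idx >= max_steps - 1, include_eos=True)  # iter_idx starts at 0, so this runs max_steps iterations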

class returnn.tf.frontend_layers.loop.LoopModule(loop: Loop)[source]

This module is used internally by Loop to create the RETURNN RecLayer for the loop. It is not intended to be used directly by the user.

By convention, any options to the module are passed to __init__, and potentially changing inputs (other tensors) are passed to __call__().