returnn.tf.frontend_layers._backend¶
High-level backend for RETURNN layers.
- class returnn.tf.frontend_layers._backend.ReturnnLayersBackend[source]¶
RETURNN layers backend (using TF), where raw_tensor represents a RETURNN layer
- static while_loop(cond: Callable[[S], bool | Tensor], body: Callable[[S], S], initial: S) S [source]¶
While loop: repeatedly applies body to the loop state while cond(state) holds, starting from initial.
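For illustration, a minimal sketch that doubles a scalar counter until it reaches 100, assuming an active RETURNN-layers network scope (values and kinds are illustrative; compare() and combine() are documented below):

    from returnn.tf.frontend_layers._backend import ReturnnLayersBackend as B

    x0 = B.full(dims=[], fill_value=1, dtype="int32")  # scalar start value
    x_final = B.while_loop(
        cond=lambda x: B.compare(x, "less", 100),  # loop while x < 100
        body=lambda x: B.combine(x, "mul", 2),     # x *= 2
        initial=x0,
    )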
- static set_random_state(state: Dict[str, bytes])[source]¶
- Parameters:
state – as returned by get_random_state(). This might not always be successful (e.g. on different hardware or a different backend version), so the calling code should always have called set_random_seed beforehand, to have the random generators in a reasonable fallback state.
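A hedged sketch of the intended save/restore pattern, using the companion methods mentioned above (get_random_state(), set_random_seed()):

    from returnn.tf.frontend_layers._backend import ReturnnLayersBackend as B

    B.set_random_seed(42)         # fallback state, as the note above recommends
    state = B.get_random_state()  # Dict[str, bytes]
    # ... run something stochastic ...
    B.set_random_state(state)     # restore; may not fully succeed across hardware/versions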
- static make_output_tensor(tensor: Tensor, dims: Sequence[Dim], *, name: str) Tensor [source]¶
The only function where the dim order of the output is explicitly defined.
- static set_requires_gradient(tensor: Tensor)[source]¶
Set requires-gradient. Not needed for TensorFlow, which always calculates whatever gradients are needed.
- static scaled_gradient_ext(x: Tensor, *, scale: float | Tensor = 1.0, shift: float | Tensor | None = None, scale_shift_by_sum_over_axis: Dim | None = None)[source]¶
Scaled gradient (extended): identity in the forward pass; the gradient is scaled (and optionally shifted) in the backward pass.
- static merge_dims(source: Tensor, *, dims: Sequence[Dim], out_dim: Dim) Tensor [source]¶
Merges a list of axes into a single one (flattens the dims). E.g. if the input is (batch, width, height, dim) and dims=(width, height), we get (batch, width*height, dim). Or if the input is (batch, time, height, dim) and dims=(height, dim), we get (batch, time, height*dim).
- Parameters:
source
dims
out_dim
- Returns:
tensor
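A minimal sketch of the first example above, assuming an active RETURNN-layers network scope (the dim names are illustrative; width * height uses Dim arithmetic):

    from returnn.tensor import Dim, batch_dim
    from returnn.tf.frontend_layers._backend import ReturnnLayersBackend as B

    width, height, feat = Dim(4, name="width"), Dim(3, name="height"), Dim(8, name="feat")
    x = B.full(dims=[batch_dim, width, height, feat], fill_value=0.0, dtype="float32")
    y = B.merge_dims(x, dims=(width, height), out_dim=width * height)
    # y has dims (batch, width*height, feat)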
- static split_dims(source: Tensor, *, axis: Dim, dims: Sequence[Dim], pad_to_multiples: bool | None = None, pad_value: None | int | float = None) Tensor [source]¶
Splits one axis into multiple dims; the inverse of merge_dims().
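A minimal sketch of splitting an axis back into two dims, under the same assumptions as the merge_dims example (illustrative names):

    from returnn.tensor import Dim, batch_dim
    from returnn.tf.frontend_layers._backend import ReturnnLayersBackend as B

    merged = Dim(12, name="merged")
    w, h = Dim(4, name="w"), Dim(3, name="h")  # 4 * 3 == 12
    x = B.full(dims=[batch_dim, merged], fill_value=0.0, dtype="float32")
    y = B.split_dims(x, axis=merged, dims=(w, h))  # y: (batch, w, h)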
- static concat(*sources: Tuple[Tensor, Dim], allow_broadcast: bool = False, out_dim: Dim) Tensor [source]¶
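concat concatenates the given sources along their respective dims into out_dim. A minimal sketch (illustrative names; t1 + t2 uses Dim arithmetic):

    from returnn.tensor import Dim, batch_dim
    from returnn.tf.frontend_layers._backend import ReturnnLayersBackend as B

    t1, t2, feat = Dim(5, name="t1"), Dim(7, name="t2"), Dim(8, name="feat")
    a = B.full(dims=[batch_dim, t1, feat], fill_value=0.0, dtype="float32")
    b = B.full(dims=[batch_dim, t2, feat], fill_value=1.0, dtype="float32")
    c = B.concat((a, t1), (b, t2), out_dim=t1 + t2)  # c: (batch, t1+t2, feat)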
- static pad(source: Tensor, *, axes: Sequence[Dim], padding: Sequence[Tuple[Dim | int | Tensor, Dim | int | Tensor]], out_dims: Sequence[Dim], handle_dynamic_dims: bool, mode: str = 'constant', value: int | float | complex | number | ndarray | bool | str | Tensor | None = None) Tensor [source]¶
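pad pads the given axes; out_dims must describe the padded axes. A minimal sketch with static dims (illustrative names and sizes):

    from returnn.tensor import Dim, batch_dim
    from returnn.tf.frontend_layers._backend import ReturnnLayersBackend as B

    time = Dim(10, name="time")
    padded_time = Dim(12, name="padded_time")  # 1 + 10 + 1
    x = B.full(dims=[batch_dim, time], fill_value=0.0, dtype="float32")
    y = B.pad(x, axes=[time], padding=[(1, 1)], out_dims=[padded_time],
              handle_dynamic_dims=False, mode="constant", value=0.0)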
- static log_softmax(tensor: Tensor, *, axis: Dim, use_mask: bool = True) Tensor [source]¶
Log-softmax over the given axis.
- static softmax_cross_entropy_with_logits(*, logits: Tensor, targets: Tensor, axis: Dim)[source]¶
Efficient cross entropy.
- Parameters:
logits – target estimates given as inputs to softmax (i.e. unnormalized)
targets – probabilities, i.e. normalized, can also be sparse
axis – class labels dim over which softmax is computed
- Returns:
cross entropy (same Dims as ‘logits’ but without ‘axis’)
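A minimal sketch with sparse (index) targets, assuming an active RETURNN-layers network scope (illustrative names):

    from returnn.tensor import Dim, batch_dim
    from returnn.tf.frontend_layers._backend import ReturnnLayersBackend as B

    classes = Dim(10, name="classes")
    logits = B.full(dims=[batch_dim, classes], fill_value=0.0, dtype="float32")
    targets = B.full(dims=[batch_dim], fill_value=3, dtype="int32", sparse_dim=classes)
    loss = B.softmax_cross_entropy_with_logits(logits=logits, targets=targets, axis=classes)
    # loss has the dims of logits without `classes`, i.e. (batch,)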
- static ctc_loss(*, logits: Tensor, logits_normalized: bool = False, targets: Tensor, input_spatial_dim: Dim, targets_spatial_dim: Dim, blank_index: int, max_approx: bool = False) Tensor [source]¶
Connectionist temporal classification (CTC) loss.
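A minimal sketch with static dims for brevity (in practice the spatial dims are usually dynamic; names and sizes are illustrative):

    from returnn.tensor import Dim, batch_dim
    from returnn.tf.frontend_layers._backend import ReturnnLayersBackend as B

    in_time, out_time = Dim(20, name="in_time"), Dim(5, name="out_time")
    vocab = Dim(10, name="vocab")
    vocab_w_blank = Dim(11, name="vocab_w_blank")  # vocab + blank
    logits = B.full(dims=[batch_dim, in_time, vocab_w_blank], fill_value=0.0, dtype="float32")
    targets = B.full(dims=[batch_dim, out_time], fill_value=0, dtype="int32", sparse_dim=vocab)
    loss = B.ctc_loss(logits=logits, targets=targets, input_spatial_dim=in_time,
                      targets_spatial_dim=out_time, blank_index=10)  # loss: (batch,)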
- static create_parameter_raw(tensor: Parameter, *, device: str | None = None) Layer [source]¶
Creates the raw parameter (as a RETURNN layer).
- static set_parameter_initial_value(param: Parameter[Layer], value: None | Tensor | int | float | complex | number | ndarray | bool | str) None [source]¶
set parameter initial value
- static set_parameter_trainable(param: Parameter, trainable: bool) None [source]¶
set parameter trainable
- static parameter_assign(param: Parameter, value: Tensor, *, op: str = 'assign') None [source]¶
Assigns a new value to the parameter.
- static convert_to_tensor(value: Tensor | Layer | int | float | complex | number | ndarray | bool | str, *, dims: Sequence[Dim], dtype: str, sparse_dim: Dim | None = None, feature_dim: Dim | None = None, device: str | None = None, name: str | None = None) Tensor[Layer] [source]¶
convert to tensor
- static full(dims: Sequence[Dim], fill_value: int | float | complex | number | ndarray | bool | str | Tensor, *, dtype: str, device: str | None = None, sparse_dim: Dim | None = None, feature_dim: Dim | None = None) Tensor [source]¶
- classmethod compare(a: Tensor | int | float | complex | number | ndarray | bool | str, kind: str, b: Tensor | int | float | complex | number | ndarray | bool | str, *, allow_broadcast_all_sources: bool | None = None, dim_order: Sequence[Dim] | None = None) Tensor [source]¶
- classmethod combine(a: Tensor | int | float | complex | number | ndarray | bool | str, kind: str, b: Tensor | int | float | complex | number | ndarray | bool | str, *, allow_broadcast_all_sources: bool | None = None, dim_order: Sequence[Dim] | None = None) Tensor [source]¶
- static gather(source: Tensor, *, indices: Tensor | int, axis: Dim, clip_to_valid: bool = False) Tensor [source]¶
- static slice(source: Tensor, *, axis: Dim, start: int | Tensor | None = None, end: int | Tensor | None = None, step: int | Tensor | None = None, size: int | Tensor | Dim | None = None, out_dim: Dim) Tensor [source]¶
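For gather and slice together, a minimal sketch (illustrative names; the gather indices are sparse over the gathered axis):

    from returnn.tensor import Dim, batch_dim
    from returnn.tf.frontend_layers._backend import ReturnnLayersBackend as B

    time, feat = Dim(10, name="time"), Dim(8, name="feat")
    x = B.full(dims=[batch_dim, time, feat], fill_value=0.0, dtype="float32")
    idx = B.full(dims=[batch_dim], fill_value=0, dtype="int32", sparse_dim=time)
    first = B.gather(x, indices=idx, axis=time)  # first: (batch, feat)
    win = Dim(5, name="win")
    mid = B.slice(x, axis=time, start=2, end=7, out_dim=win)  # mid: (batch, win, feat)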
- static where(cond: Tensor, true_: Tensor | int | float | complex | number | ndarray | bool | str, false_: Tensor | int | float | complex | number | ndarray | bool | str, *, allow_broadcast_all_sources: bool = False) Tensor [source]¶
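where selects elementwise from true_ / false_ based on cond. A minimal ReLU-like sketch (illustrative):

    from returnn.tensor import Dim, batch_dim
    from returnn.tf.frontend_layers._backend import ReturnnLayersBackend as B

    feat = Dim(8, name="feat")
    x = B.full(dims=[batch_dim, feat], fill_value=-1.0, dtype="float32")
    y = B.where(B.compare(x, "greater", 0.0), x, 0.0)  # max(x, 0) elementwise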
- static clip_by_value(x: Tensor, clip_value_min: Tensor | int | float | complex | number | ndarray | bool | str, clip_value_max: Tensor | int | float | complex | number | ndarray | bool | str, *, allow_broadcast_all_sources: bool = False) Tensor [source]¶
Clips the tensor values to the range [clip_value_min, clip_value_max].
- static matmul(a: Tensor, b: Tensor, *, reduce: Dim | Sequence[Dim], use_mask: bool = True) Tensor [source]¶
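matmul is a generalized dot product which contracts over the given reduce dim(s). A minimal linear-projection sketch (illustrative names):

    from returnn.tensor import Dim, batch_dim
    from returnn.tf.frontend_layers._backend import ReturnnLayersBackend as B

    in_, out = Dim(8, name="in"), Dim(16, name="out")
    x = B.full(dims=[batch_dim, in_], fill_value=0.0, dtype="float32")
    w = B.full(dims=[in_, out], fill_value=0.0, dtype="float32")
    y = B.matmul(x, w, reduce=in_)  # y: (batch, out)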
- static range_over_dim(dim: Dim, *, dtype: str | None = None, device: str | None = None) Tensor [source]¶
Returns a tensor with the values 0, 1, …, dim-1 over the given dim.
- static replace_dim(source: Tensor, *, in_dim: Dim, out_dim: Dim) Tensor [source]¶
- Parameters:
source
in_dim
out_dim
- Returns:
source with in_dim replaced by out_dim.
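A minimal sketch (illustrative names; both dims must have the same size):

    from returnn.tensor import Dim, batch_dim
    from returnn.tf.frontend_layers._backend import ReturnnLayersBackend as B

    t_old, t_new = Dim(10, name="t_old"), Dim(10, name="t_new")
    x = B.full(dims=[batch_dim, t_old], fill_value=0.0, dtype="float32")
    y = B.replace_dim(x, in_dim=t_old, out_dim=t_new)  # same data, dim tag replaced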
- static reduce(source: Tensor, *, mode: str, axis: Dim | Sequence[Dim], use_mask: bool = True) Tensor [source]¶
Reduces over the given axis or axes with the given mode (e.g. "sum", "max", "mean").
- static top_k(source: Tensor, *, axis: Dim | Sequence[Dim], k: int | Tensor, k_dim: Dim | None = None, sorted: bool = True) Tuple[Tensor, Tensor | Sequence[Tensor], Dim] [source]¶
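For reduce and top_k together, a minimal sketch (illustrative names and modes):

    from returnn.tensor import Dim, batch_dim
    from returnn.tf.frontend_layers._backend import ReturnnLayersBackend as B

    classes = Dim(100, name="classes")
    scores = B.full(dims=[batch_dim, classes], fill_value=0.0, dtype="float32")
    total = B.reduce(scores, mode="sum", axis=classes)           # total: (batch,)
    values, indices, k_dim = B.top_k(scores, axis=classes, k=5)  # (batch, k_dim) each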
- static random_journal_replay(journal: RandomJournal)[source]¶
Replays the journal. At exit, the journal is cleared, and we check that we replayed everything.
- static random(*, dims: Sequence[Dim], dtype: str, device: str | None = None, sparse_dim: Dim | None = None, feature_dim: Dim | None = None, distribution: str, mean: int | float | Tensor | None = None, stddev: int | float | Tensor | None = None, bound: int | float | Tensor | None = None, minval: int | float | Tensor | None = None, maxval: int | float | Tensor | None = None, seed: int | Sequence[int] | ndarray | None = None, algorithm: str | None = None, explicit_state: Tensor | None = None, auto_update_state: bool | None = None, static: bool | None = None, out: Tensor | None = None) Tensor [source]¶
- static masked_select(tensor: Tensor, *, mask: Tensor, dims: Sequence[Dim], out_dim: Dim | None = None) Tuple[Tensor, Dim] [source]¶
- Parameters:
tensor
mask
dims – the order of the dims defines the format. Those dims should be exactly the dims of the mask.
out_dim
- Returns:
Tensor where all dims in mask/dims are removed and replaced by a new dim; the new dim is also returned. If mask==True for all elements, the returned tensor is simply the flattened input tensor.
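A minimal sketch (illustrative; here the mask is all True, so the result is just the flattened input):

    from returnn.tensor import Dim, batch_dim
    from returnn.tf.frontend_layers._backend import ReturnnLayersBackend as B

    time = Dim(10, name="time")
    x = B.full(dims=[batch_dim, time], fill_value=1.0, dtype="float32")
    mask = B.compare(x, "greater", 0.0)  # bool mask over (batch, time)
    packed, packed_dim = B.masked_select(x, mask=mask, dims=[batch_dim, time])
    # packed is 1-D over packed_dim; batch and time are merged away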
- static batch_norm(source: Tensor, *, in_dim: Dim | Sequence[Dim], running_mean: Tensor, running_variance: Tensor, gamma: Tensor | None, beta: Tensor | None, epsilon: float, momentum: float, affine: bool, use_mask: bool) Tensor [source]¶
batch norm
- static conv(source: Tensor, *, in_dim: Dim, out_dim: Dim, in_spatial_dims: Sequence[Dim], out_spatial_dims: Sequence[Dim] | None = None, filter: Tensor, filter_size: Sequence[Dim], padding: str | int | Sequence[int], strides: int | Sequence[int] | None = None, dilation_rate: int | Sequence[int] | None = None, groups: int | None = None, bias: Tensor | None = None) Tuple[Tensor, Sequence[Dim]] [source]¶
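A minimal 1-D convolution sketch (illustrative names; since tensors are dim-tagged, the filter's dim order does not matter):

    from returnn.tensor import Dim, batch_dim
    from returnn.tf.frontend_layers._backend import ReturnnLayersBackend as B

    time = Dim(20, name="time")
    in_, out = Dim(8, name="in"), Dim(16, name="out")
    fs = Dim(3, name="filter_size")
    x = B.full(dims=[batch_dim, time, in_], fill_value=0.0, dtype="float32")
    filt = B.full(dims=[fs, in_, out], fill_value=0.0, dtype="float32")
    y, (out_time,) = B.conv(x, in_dim=in_, out_dim=out, in_spatial_dims=[time],
                            filter=filt, filter_size=[fs], padding="same")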
- static transposed_conv(source: Tensor, *, in_dim: Dim, out_dim: Dim, in_spatial_dims: Sequence[Dim], out_spatial_dims: Sequence[Dim] | None = None, filter: Tensor, filter_size: Sequence[Dim], padding: str, remove_padding: Sequence[int] | int = 0, output_padding: Sequence[int | None] | int | None = None, strides: Sequence[int] | None = None, bias: Tensor | None = None) Tuple[Tensor, Sequence[Dim]] [source]¶
Transposed convolution (also known as deconvolution).
- static pool(source: Tensor, *, mode: str, pool_size: Sequence[int], padding: str | int | Sequence[int] = 'valid', dilation_rate: Sequence[int] | int = 1, strides: Sequence[int], in_spatial_dims: Sequence[Dim], out_spatial_dims: Sequence[Dim] | None = None) Tuple[Tensor, Sequence[Dim]] [source]¶
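A minimal 1-D max-pooling sketch (illustrative names):

    from returnn.tensor import Dim, batch_dim
    from returnn.tf.frontend_layers._backend import ReturnnLayersBackend as B

    time, feat = Dim(20, name="time"), Dim(8, name="feat")
    x = B.full(dims=[batch_dim, time, feat], fill_value=0.0, dtype="float32")
    y, (out_time,) = B.pool(x, mode="max", pool_size=[2], strides=[2],
                            padding="valid", in_spatial_dims=[time])
    # out_time has size 10 (20 / 2)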