returnn.frontend._cache

Cache to store some data. See the Cache class below.

Example use cases are sinusoidal_positional_encoding() and relative_positional_encoding().

class returnn.frontend._cache.Cache(max_size: int)[source]

Cache, intended for internal use of RF functions.

Example use cases are sinusoidal_positional_encoding() and relative_positional_encoding().

There are some specific properties we must take care of:

  • Lifetime of values: For graph-based backends, it can only stay alive for the current run ctx. (For eager-based backends, there is no such restriction.)

  • Size: Enforce a size limit, using LRU (least recently used) eviction logic.

  • Dims: Use only weakrefs. A Dim should not stay alive just because it is referenced by the cache.

  • Scalar dynamic Dims in eager mode, or static dims: Instead of the Dim itself, use the dim value for the key (and map the output back to the Dim).

  • Tensors as keys: Use weakrefs. Also, compare by identity, not by value.
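The size-limit and LRU-eviction behavior from the list above can be sketched with a plain OrderedDict. This is an illustrative stand-in with the same get/set surface, not the actual Cache implementation (which additionally handles run-ctx lifetime and weakref keys):

```python
from collections import OrderedDict


class LruCacheSketch:
    """Minimal LRU cache sketch with the same get/set surface as Cache."""

    def __init__(self, max_size: int):
        self.max_size = max_size
        self._data = OrderedDict()

    def get(self, key, default=None):
        if key not in self._data:
            return default
        self._data.move_to_end(key)  # mark as most recently used
        return self._data[key]

    def set(self, key, value):
        self._data[key] = value
        self._data.move_to_end(key)
        while len(self._data) > self.max_size:
            self._data.popitem(last=False)  # evict least recently used
```

A get() counts as a use, so a recently read entry survives eviction longer than one that was only written early on.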

get(key, default=None)[source]
Parameters:
  • key

  • default

Returns:

entry in cache or default

set(key, value)[source]
Parameters:
  • key

  • value

class returnn.frontend._cache.TensorWrapper(value: Tensor, *, finalize_callback)[source]

Wraps a Tensor, holding weakrefs to the tensor and also to its raw_tensor. Two wrappers are equal only if both the Tensor itself and the raw_tensor are the identical objects; tensor values are never compared.
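Identity-based equality over a weakref can be sketched as follows. This is a simplified illustration of the idea; the real TensorWrapper additionally weakrefs raw_tensor, and finalize_callback is what lets the cache drop an entry once its key dies:

```python
import weakref


class IdentityWeakKey:
    """Hashable key that weakly references an object and compares by identity."""

    def __init__(self, obj, *, finalize_callback):
        # The callback fires when the referent is garbage collected,
        # so the cache can remove the corresponding entry.
        self._ref = weakref.ref(obj, lambda _ref: finalize_callback())
        self._id = id(obj)

    def __hash__(self):
        return self._id

    def __eq__(self, other):
        if not isinstance(other, IdentityWeakKey):
            return NotImplemented
        a, b = self._ref(), other._ref()
        # Equal only if both referents are still alive and are the same object.
        return a is not None and a is b
```

Because only identity is checked, two tensors with equal values still produce distinct keys, and the key never keeps the tensor alive.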

class returnn.frontend._cache.DimWrapper(dim: Dim, *, finalize_callback)[source]

Wraps a Dim, holding a weakref to it. If the size is scalar and known, two wrappers are equal when the sizes are equal (the dim tag itself is ignored).
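The size-based equality for known scalar sizes can be sketched like this. The Dim-like object here is a hypothetical stand-in with a `size` attribute that is `None` when the size is dynamic/unknown:

```python
import weakref


class DimKeySketch:
    """Weakly references a dim-like object; compares by size when it is known."""

    def __init__(self, dim, *, finalize_callback):
        self._ref = weakref.ref(dim, lambda _ref: finalize_callback())
        self._id = id(dim)
        self._size = dim.size  # assumed None when dynamic/unknown

    def __hash__(self):
        # Hash by size when known, else by identity of the wrapped object.
        return hash(self._size) if self._size is not None else self._id

    def __eq__(self, other):
        if not isinstance(other, DimKeySketch):
            return NotImplemented
        if self._size is not None and other._size is not None:
            # Known scalar sizes: compare by value, ignore the dim tag itself.
            return self._size == other._size
        a, b = self._ref(), other._ref()
        return a is not None and a is b
```

This means two distinct dim tags with the same known static size share one cache entry, while dynamic dims fall back to identity comparison.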