returnn.tf.frontend_layers.make_layer

make layer

returnn.tf.frontend_layers.make_layer.make_layer(layer_dict: Dict[str, Any], *, name: str | Layer | None = None, out: Tensor | None = None, predefined_out_data: Tensor | None = None, name_ctx_ignore_top_stack_frames: int = 0) → Tensor[Layer][source]

Creates the layer. This also registers the layer instance in the top name ctx. When no name is given, this assumes that the top name ctx corresponds to this module.

If a layer has params, and you want the param sharing logic, you should instead derive a new class from Module. Usually, you do not need either of these, as all standard layers should already be wrapped, and it should be possible to define any possible logic using that. (If this is not the case, please report an issue.)

Parameters:
  • layer_dict – can contain Tensor instances

  • name – if a str: (suggested) layer name; a new NameCtx will be created. If a NameCtx is given, it will be used directly.

  • out

  • predefined_out_data – normally we can derive the out data automatically. If this should be skipped, you can pass this explicitly.

  • name_ctx_ignore_top_stack_frames – for Layer.current_ctx(). If your calling function creates exactly one layer, you might want to ignore its stack frame: set ignore_top_stack_frames=1 and also set a name for the layer. If your calling function potentially creates multiple layers, leave the default ignore_top_stack_frames=0. A postprocessing step may anyway simplify obsolete subnetworks; see naming.
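To illustrate the layer_dict argument, here is a minimal sketch of the dict shape that make_layer consumes. The linear_layer_dict helper is hypothetical; in real frontend code, the "from" value would be a Tensor instance and "out_dim" a Dim object rather than plain strings and ints.

```python
# Sketch of a layer_dict as passed to make_layer; a plain dict whose values
# may include Tensor instances. Pure Python, no RETURNN import needed here.

def linear_layer_dict(source, out_dim):
    # "class" selects the RETURNN layer type; the remaining keys are that
    # layer's options. `source` and `out_dim` are simplified placeholders.
    return {"class": "linear", "from": source, "out_dim": out_dim}

d = linear_layer_dict("data", 512)
# d would then be passed as: make_layer(d, name="linear")
```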

returnn.tf.frontend_layers.make_layer.register_extern_data(data: Tensor[Layer])[source]

Registers the given data as extern data in the root name ctx. It will then be included when creating the RETURNN config via NameCtx.get_returnn_config().
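The registration pattern can be sketched as follows. This is a simplified stand-in, not RETURNN's actual NameCtx implementation: the RootCtx class and its method names here only model the bookkeeping described above (extern data is recorded in the root ctx and later emitted into the config).

```python
# Hedged sketch of the extern-data registration pattern (not RETURNN's code):
# a root context collects extern data by name and later serializes it into
# the generated config.

class RootCtx:
    """Stand-in for the root NameCtx; only extern-data bookkeeping is shown."""

    def __init__(self):
        self.extern_data = {}

    def register_extern_data(self, name, data):
        # Side effect: the tensor is recorded and will appear in the config.
        self.extern_data[name] = data

    def get_returnn_config(self):
        # The real NameCtx.get_returnn_config() returns much more than this.
        return {"extern_data": dict(self.extern_data)}

root = RootCtx()
root.register_extern_data("data", {"dims": ["batch", "time", "feature"]})
config = root.get_returnn_config()
```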