returnn.tf.frontend_layers.make_layer
make_layer
- returnn.tf.frontend_layers.make_layer.make_layer(layer_dict: Dict[str, Any], *, name: str | Layer | None = None, out: Tensor | None = None, predefined_out_data: Tensor | None = None, name_ctx_ignore_top_stack_frames: int = 0) -> Tensor[Layer]
Creates the layer. This also registers the layer instance in the top name ctx. When no name is given, this assumes that the top name ctx corresponds to this module.

If a layer has params and you want the param-sharing logic, you should instead derive a new class from Module. Usually, you need neither of these, as all standard layers should already be wrapped, and it should be possible to define any logic using those. (If this is not the case, please report an issue.)

- Parameters:
  - layer_dict – can contain Tensor instances.
  - name – if str: (suggested) layer name; if given, will create a new NameCtx. If NameCtx, will use this.
  - out –
  - predefined_out_data – normally we can derive the out data automatically. If this should be skipped, you can pass it explicitly.
  - name_ctx_ignore_top_stack_frames – for Layer.current_ctx(). If your calling function creates exactly one single layer, you might want to ignore its stack frame: set name_ctx_ignore_top_stack_frames=1 and also set a name for the layer. If you are potentially creating multiple layers in your calling function, leave the default name_ctx_ignore_top_stack_frames=0. Some postprocessing step might anyway simplify obsolete subnetworks; see naming.
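To illustrate the general pattern that make_layer follows (registering a layer dict under a name in the current name context and returning a tensor handle), here is a minimal, self-contained sketch. The NameCtx and Tensor classes below are simplified stand-ins written for this example only; they are not RETURNN's real classes, and the real make_layer additionally derives out data and handles stack-frame-based naming as described above.

```python
from __future__ import annotations
from typing import Any, Dict, Optional


class NameCtx:
    """Simplified name context: maps layer names to layer dicts.

    (Stand-in for illustration; not RETURNN's NameCtx.)
    """

    def __init__(self) -> None:
        self.layers: Dict[str, Dict[str, Any]] = {}

    def unique_name(self, suggested: str) -> str:
        # Append a counter if the suggested name is already taken.
        name, i = suggested, 0
        while name in self.layers:
            i += 1
            name = f"{suggested}_{i}"
        return name


class Tensor:
    """Simplified tensor handle pointing at a registered layer."""

    def __init__(self, name: str, layer_dict: Dict[str, Any]) -> None:
        self.name = name
        self.layer_dict = layer_dict


def make_layer(ctx: NameCtx, layer_dict: Dict[str, Any], *,
               name: Optional[str] = None) -> Tensor:
    """Register layer_dict in ctx under name (or a name from its class)."""
    unique = ctx.unique_name(name or layer_dict["class"])
    ctx.layers[unique] = layer_dict
    return Tensor(unique, layer_dict)


ctx = NameCtx()
a = make_layer(ctx, {"class": "linear", "n_out": 128}, name="enc")
b = make_layer(ctx, {"class": "linear", "n_out": 128}, name="enc")  # collision
print(a.name, b.name)       # enc enc_1
print(sorted(ctx.layers))   # ['enc', 'enc_1']
```

The key behavior shown is that the returned handle refers to the registered layer, and that a suggested name is disambiguated rather than overwriting an existing layer of the same name.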