returnn.frontend.parametrize
Parametrize some parameters, e.g. to implement weight dropout, variational noise, weight norm, etc.
We follow the PyTorch parametrization API and borrow some of its code.
https://github.com/rwth-i6/returnn/issues/1518
- returnn.frontend.parametrize.register_parametrization(module: Module, param_name: str, parametrization: _ParametrizationTransform | _ParametrizationWithAssign | _ParametrizationWithoutAssign, *, keep_existing_param: bool = True) → Module
Register parametrization for a tensor (parameter) in a module.
- Parameters:
- module – module containing the parameter
- param_name – name of the parameter attribute in the module
- parametrization – transform or submodule that computes the effective parameter value
- keep_existing_param –
  - True (default): the original parameter stays in the module, and parametrization will be called with the original parameter as its argument::
        parametrization(orig_param)
    In this case, parametrization must not have parameters of its own. This is useful for optional transformations, e.g. weight dropout or variational noise. See the sketch after this list.
  - False: the original parameter will be removed, and parametrization will be a submodule, which can have its own parameters. It will be called without arguments::
        parametrization()
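A minimal sketch of both modes, assuming the returnn.frontend (rf) API for Dim, Linear, Parameter, dropout, and the elementwise math ops; weight_dropout and WeightNorm here are hypothetical illustrations, not part of this module::

    import returnn.frontend as rf
    from returnn.frontend.parametrize import register_parametrization

    # keep_existing_param=True (default): a plain transform without own
    # parameters, called as parametrization(orig_param).
    def weight_dropout(param: rf.Tensor) -> rf.Tensor:
        # Hypothetical weight dropout: randomly zero entries of the weight.
        return rf.dropout(param, 0.1, axis=param.dims)

    in_dim = rf.Dim(8, name="in")
    out_dim = rf.Dim(8, name="out")
    linear = rf.Linear(in_dim, out_dim)
    register_parametrization(linear, "weight", weight_dropout)

    # keep_existing_param=False: a submodule with its own parameters,
    # called as parametrization(). Hypothetical weight-norm sketch:
    class WeightNorm(rf.Module):
        def __init__(self, in_dim: rf.Dim, out_dim: rf.Dim):
            super().__init__()
            self.in_dim = in_dim
            self.v = rf.Parameter([in_dim, out_dim])
            self.g = rf.Parameter([out_dim])

        def __call__(self) -> rf.Tensor:
            # Reconstruct the weight from direction v and magnitude g.
            norm = rf.sqrt(rf.reduce_sum(rf.square(self.v), axis=self.in_dim))
            return self.v * self.g / norm

    linear2 = rf.Linear(in_dim, out_dim)
    register_parametrization(linear2, "weight", WeightNorm(in_dim, out_dim), keep_existing_param=False)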
- returnn.frontend.parametrize.remove_parametrization(module: Module, param_name: str) → Module
Remove parametrization for a tensor (parameter) in a module.
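Continuing the sketch above, the parametrization can later be dropped again, e.g. before export::

    from returnn.frontend.parametrize import remove_parametrization

    # Remove the weight_dropout transform registered above;
    # linear.weight is a plain parameter again.
    remove_parametrization(linear, "weight")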
- returnn.frontend.parametrize.is_parametrized(module: Module, param_name: str | None = None) → bool
Returns True if the module has an active parametrization. If the argument param_name is specified, returns True if module.param_name is parametrized.
- Parameters:
  - module – module to query
  - param_name – attribute in the module to query. Default: None
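Continuing the sketch above::

    from returnn.frontend.parametrize import is_parametrized

    assert is_parametrized(linear2)               # some parametrization is active in the module
    assert is_parametrized(linear2, "weight")     # specifically on the "weight" parameter
    assert not is_parametrized(linear, "weight")  # removed above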