TFNetwork

class TFNetwork.ExternData(data=None, default_input='data', default_target='classes')[source]

This holds Data instances for every data key of external data from the dataset, i.e. the description of the data, such as shape and sparsity.

Parameters:data (None|dict[str,dict[str]]) – optional init kwargs for Data
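
For illustration, a hedged sketch of such an init dict (the keys and dims here are made up):

    # Hypothetical: two data keys, dense 40-dim inputs and sparse targets with 1000 classes.
    from TFNetwork import ExternData

    extern_data = ExternData(data={
        "data": {"dim": 40},                       # dense, shape (batch, time, 40)
        "classes": {"dim": 1000, "sparse": True},  # sparse, shape (batch, time)
    })
    assert extern_data.has_data("data") and extern_data.has_data("classes")
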
init_from_config(config)[source]
Parameters:config (Config.Config) –
classmethod data_kwargs_from_dataset_key(dataset, key)[source]
Parameters:
  • dataset (Dataset.Dataset) –
  • key (str) –
Return type:dict[str]
init_from_dataset(dataset)[source]
Parameters:dataset (Dataset.Dataset) –
check_matched_dataset(dataset, used_data_keys=None)[source]
Parameters:
  • dataset (Dataset.Dataset) –
  • used_data_keys (set[str]|None) –
Returns:nothing, will assert the check
register_data_from_dict(data)[source]
Parameters:data (dict[str,dict[str]]) – init kwargs for Data
register_data(data)[source]
Parameters:data (Data) – will use data.name as the key
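
A hedged usage sketch (the "speaker" key, dtype and dim are made up; Data lives in TFUtil in this code base):

    from TFUtil import Data

    # Hypothetical: a per-sequence speaker id with 10 possible values.
    extern_data.register_data(
        Data(name="speaker", shape=(), dtype="int32", sparse=True, dim=10))
    assert extern_data.has_data("speaker")  # registered under data.name
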
has_data(name)[source]
get_data(name)[source]
get_default_input_data()[source]
get_default_target_data()[source]
get_data_description()[source]
get_queue_args(with_batch_dim, fixed_batch_dim=None)[source]
Parameters:
  • with_batch_dim (bool) –
  • fixed_batch_dim (int|None) –
Returns:kwargs for tf.Queue.__init__
Return type:dict[str,list]
get_sorted_data_items()[source]
Return type:list[(str,Data)]
class TFNetwork.TFNetwork(config=None, extern_data=None, rnd_seed=None, train_flag=False, eval_flag=False, search_flag=False, parent_layer=None, parent_net=None, extra_parent_net=None, name=None)[source]
Parameters:
  • config (Config.Config) – only needed to init extern_data if not specified explicitly
  • extern_data (ExternData|None) –
  • rnd_seed (int|None) –
  • train_flag (bool|tf.Tensor) – True if we want to use this model in training, False if in eval, or dynamic
  • eval_flag (bool) – whether to calculate losses. If train_flag is not False, this will be set to True
  • search_flag (bool) – whether we perform a beam-search. see usage
  • parent_layer (TFNetworkLayer.LayerBase|None) –
  • parent_net (TFNetwork|None) –
  • extra_parent_net (TFNetwork|None) –
  • name (str) – only for debugging
get_root_network()[source]
Return type:TFNetwork
get_absolute_name_scope_prefix()[source]
Returns:scope, always with “/” at the end, or “”
Return type:str
construct_from(list_or_dict)[source]
Parameters:list_or_dict (list[dict[str]]|dict[str,dict[str]]) –
construct_from_list(net_list)[source]
Parameters:net_list (list[dict[str]]) – list of layer descriptions
construct_from_dict(net_dict)[source]
Parameters:net_dict (dict[str,dict[str]]) –
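
A hedged construction sketch (layer classes "linear" and "softmax" as commonly used in this code base; the dims are made up and follow the ExternData example above):

    from TFNetwork import TFNetwork

    network = TFNetwork(extern_data=extern_data, train_flag=True)
    network.construct_from_dict({
        "hidden": {"class": "linear", "activation": "relu", "n_out": 128, "from": ["data"]},
        "output": {"class": "softmax", "loss": "ce", "n_out": 1000, "from": ["hidden"]},
    })
    assert network.get_default_output_layer().name == "output"
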
construct_extra_net(net_dict, layer_list, search_flag=None)[source]

The purpose is to create another net like self but with different flags, e.g. with search_flag = True. That extra_net can have different losses, which will be added. It will not recreate any already existing layers.

Parameters:
  • net_dict (dict[str,dict[str]]) –
  • layer_list (list[str]) –
  • search_flag (bool|None) –
construct_layer(net_dict, name, get_layer=None, add_layer=None)[source]
Parameters:
  • net_dict (dict[str,dict[str]]) –
  • name (str) – layer name
  • get_layer (((str) -> LayerBase)|None) – optional, for source layers, for transform_config_dict. By default, this wraps self.construct_layer().
  • add_layer (((str, LayerBase, dict) -> LayerBase)|None) – by default self.add_layer
Return type:LayerBase

add_layer(name, layer_class, **layer_desc)[source]

This will construct the layer given the layer_desc arguments, and add it to the network.

Parameters:
  • name (str) –
  • layer_class ((()->LayerBase)|LayerBase) –
  • layer_desc – contains the kwargs for the layer class. The args should have been transformed via layer_class.transform_config_dict before (see construct_layer). Must not contain “name” and “network”, which will be added automatically here. Should not contain “output”, which will be initialized via layer_class.get_out_data_from_opts. The layer_class will usually then define layer.output and its placeholder. One notable exception is the InternalLayer, where you predefine the output.
get_extern_data(key, mark_data_key_as_used=True)[source]

Returns Data, and adds the key to self.used_data_keys if mark_data_key_as_used.

Parameters:
  • key (str) – e.g. “data” or “classes”
  • mark_data_key_as_used (bool) –
Return type:Data

get_seq_tags(mark_data_key_as_used=True)[source]
Parameters:mark_data_key_as_used (bool) – for extern_data
Returns:tensor of shape (batch,) of dtype string, via extern_data
Return type:tf.Tensor
construct_objective()[source]
maybe_construct_objective()[source]
get_objective()[source]
get_total_loss()[source]
Return type:int|tf.Tensor
Returns:0 if no loss, or tf.Tensor
get_total_constraints()[source]
get_used_targets()[source]
Returns:sorted list of targets
Return type:list[str]
get_default_target()[source]
Returns:e.g. “classes”
Return type:str
get_output_layers()[source]
Return type:list[LayerBase]
get_default_output_layer_name()[source]
Return type:str|None
Returns:default output layer name if there is one, or None
get_default_output_layer(must_exist=True)[source]
Parameters:must_exist (bool) – if it does not exist, will raise an exception
Return type:LayerBase|None
Returns:the default output layer
get_layer(layer_name)[source]

Normally just self.layers[layer_name] but with some extra logic added, such as resolving “base:” prefix to the parent network. Raises LayerNotFound if the layer is not found.

Parameters:layer_name (str) –
Return type:LayerBase
get_params_list()[source]
Returns:list of model variables, i.e. from all the layers, excluding auxiliary vars like global_step
Return type:list[tf.Variable]
get_saveable_param_replace_dict()[source]
Returns:params and saveable_param_replace resolved, union of all layers
Return type:dict[str,tf.Variable|tensorflow.python.training.saver.BaseSaverBuilder.SaveableObject]
get_saveable_params_list()[source]
Returns:list of model variables or SaveableObject, to save/restore
Return type:list[tf.Variable|tensorflow.python.training.saver.BaseSaverBuilder.SaveableObject]
get_params_nested_dict()[source]
Returns:dict: layer_name -> param_name -> variable
Return type:dict[str,dict[str,tf.Variable]]
get_trainable_params()[source]
Returns:list of variables
Return type:list[tf.Variable]
declare_train_params(hidden_layer_selection=None, with_output=None)[source]
get_num_params()[source]
Returns:number of model parameters, i.e. total dimension
Return type:int
initialize_params(session)[source]
Parameters:session (tf.Session) –

Note: This will create a new node in the graph for each call! It will also overwrite already initialized variables. So you should call this only once after network construction, and before you maybe load some of the params from external sources. If you know that you will load all params explicitly, you do not need to call this function.
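
A hedged sketch of the intended call order (the checkpoint filename is made up):

    import tensorflow as tf

    with tf.Session() as session:
        # Initialize once, right after network construction ...
        network.initialize_params(session)
        # ... then optionally overwrite some params from an external source.
        network.load_params_from_file("/path/to/model.042", session)
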

get_var_assigner(var)[source]
Parameters:var (tf.Variable) –
get_param_values_dict(session)[source]
Parameters:session (tf.Session) –
Returns:dict: layer_name -> param_name -> variable numpy array
Return type:dict[str,dict[str,numpy.ndarray]]

Note that this excludes auxiliary params.

set_param_values_by_dict(values_dict, ignore_non_existing=False, **kwargs)[source]
Parameters:
  • values_dict (dict[str,dict[str,numpy.ndarray]]) –
  • ignore_non_existing (bool) –
  • kwargs – passed to LayerBase.set_param_values_by_dict()

Note that this excludes auxiliary params.
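
A hedged round-trip sketch (network and both sessions are assumed to exist; session is assumed to be passed through kwargs to the layers):

    # layer_name -> param_name -> numpy array (auxiliary params excluded)
    values = network.get_param_values_dict(session)
    # ... e.g. later, into another session with the same graph:
    network.set_param_values_by_dict(values, session=other_session)
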

get_auxiliary_params()[source]
get_params_serialized(session)[source]
Parameters:session (tf.Session) –
Return type:TFNetworkParamsSerialized
set_params_by_serialized(serialized, session, **kwargs)[source]
Parameters:
  • serialized (TFNetworkParamsSerialized) –
  • session (tf.Session) –
  • kwargs – passed to set_param_values_by_dict()
set_global_train_step(step, session)[source]
Parameters:
  • step (int) –
  • session (tf.Session) –
get_global_train_step(session)[source]
Parameters:session (tf.Session) –
Return type:int
get_epoch_step()[source]
Returns:int64
Return type:tf.Tensor
reset_saver()[source]

Resets the tf.train.Saver object which will be used for load_params_from_file() and save_params_to_file(). Warning: Don’t repeat that too often as it will always create new ops in the computation graph.

save_params_to_file(filename, session)[source]

Will save the model parameters to the given filename. Note that the model parameters live inside the current TF session.

Parameters:
  • filename (str) –
  • session (tf.Session) –

load_params_from_file(filename, session)[source]

Will load the model parameters from the given filename. Note that the model parameters live inside the current TF session.

Parameters:
  • filename (str) –
  • session (tf.Session) –
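
A hedged checkpoint round trip (the filename is made up):

    network.save_params_to_file("/tmp/model.ckpt", session)
    # ... later, in the same graph/session:
    network.load_params_from_file("/tmp/model.ckpt", session)
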

print_network_info(name='Network')[source]
cond_on_train(fn_train, fn_eval)[source]

Uses fn_train() or fn_eval() based on self.train_flag. It will be a branched evaluation.

Parameters:
  • fn_train (()->tf.Tensor) –
  • fn_eval (()->tf.Tensor) –
Returns:fn_train() if self.train_flag else fn_eval()
Return type:tf.Tensor
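
A hedged usage sketch (x is an assumed existing tf.Tensor; dropout is applied only during training):

    import tensorflow as tf

    y = network.cond_on_train(
        fn_train=lambda: tf.nn.dropout(x, keep_prob=0.9),
        fn_eval=lambda: x)
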

get_search_choices(sources=None, src=None, base_search_choice=None, _visited=None)[source]

Recursively searches through all sources, and if there is a ChoiceLayer / any layer with search_choices, returns it. Could also go to the parent network. If there are multiple, it assumes they are on the same search-sequence in the search-tree and it will return the last one.

Parameters:
  • src (LayerBase|None) –
  • base_search_choice (LayerBase|None) –
  • sources (list[LayerBase]|None) –
  • _visited (set[LayerBase]|None) – keep track about visited layers in case there are circular deps
Returns:(direct or indirect) source LayerBase which has search_choices, or None
Return type:LayerBase|None

debug_search_choices(base_search_choice)[source]
Parameters:base_search_choice (LayerBase) –
get_data_batch_dim()[source]

Get the batch-dim size, i.e. the number of sequences in the current batch. Given that the data tensor is usually of shape [batch, time, dim], this would return shape(data)[0].

The code currently assumes that the batch-dim can be taken from the extern data. If that is not available for some reason (e.g. in some subnetwork), it will try some alternative sources and assume that they have the correct batch-dim.

Note that the batch-dim usually stays the same across the whole network, and every individual batch sequence stays related. One notable exception is the choice layer, where the batch-dim will get expanded by the beam search if search is used, as well as in all following layers, until there is a decide layer.

Returns:int scalar tensor which states the batch-dim
Return type:int|tf.Tensor
set_rec_step_info(i, end_flag=None, seq_lens=None)[source]

Used by _SubnetworkRecCell.

Parameters:
  • i (tf.Tensor) – scalar, int32, current step (time)
  • end_flag (tf.Tensor|None) – (batch,), bool, says that the current sequence has ended
  • seq_lens (tf.Tensor|None) – (batch,), int32, seq lens

is_inside_rec_layer()[source]
Returns:whether we are inside a RecLayer. see get_rec_parent_layer()
Return type:bool
get_rec_parent_layer()[source]
Returns:if we are a subnet of a RecLayer, will return the RecLayer instance
Return type:TFNetworkRecLayer.RecLayer|None
have_rec_step_info()[source]
get_rec_step_info(must_exist=True)[source]
Parameters:must_exist (bool) – if True, will throw exception if not available
Return type:TFNetworkRecLayer.RecStepInfoLayer|None
get_rec_step_index()[source]

Assumes that have_rec_step_info is True.

Return type:tf.Tensor
Returns:scalar, int32
get_config(consider_global_config=True, fallback_dummy_config=True)[source]
Parameters:
  • consider_global_config (bool) – if no config is set, check for global config
  • fallback_dummy_config (bool) – if no config, return a new empty Config, otherwise return None
Return type:Config.Config|None

static register_post_control_dependencies(deps)[source]

Will register the control dependencies globally for a session run on this network. This can e.g. be called inside self.post_init. We use UPDATE_OPS, as that is also e.g. used by batchnorm.

Parameters:deps (list[tf.Tensor|tf.Operation]) –
Returns:nothing
static get_post_control_dependencies()[source]
classmethod get_network_stack()[source]
Return type:list[TFNetwork]
classmethod get_current_network(must_exist=True)[source]
Parameters:must_exist (bool) –
Return type:TFNetwork|None
register_network_scope()[source]
class TFNetwork.TFNetworkParamsSerialized(values_dict, global_train_step)[source]

Holds all the params as numpy arrays, including auxiliary params.

Parameters:
  • values_dict (dict[str,dict[str,numpy.ndarray]]) – dict: layer_name -> param_name -> variable numpy array
  • global_train_step (int) –
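
A hedged serialization round trip via get_params_serialized() / set_params_by_serialized() (sessions are assumed to exist; unlike the values-dict variant, this includes auxiliary params and the global train step):

    serialized = network.get_params_serialized(session)  # TFNetworkParamsSerialized
    # ... later, possibly in a fresh session:
    network.set_params_by_serialized(serialized, session=new_session)
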
class TFNetwork.LossHolder(name, loss, layer_output, reduce_func=None, layer=None, loss_value=None, error_value=None, norm_factor=None, only_on_eval=None, network=None)[source]

This object just keeps a reference to the loss/error value, and does the necessary logic to collect it, and also the normalization logic. Every new computation (nodes in the computation graph) must be constructed on demand, to allow first to collect all possible losses without calculating them, and then calculating them in the right context (e.g. inside a while_loop, or so).

After construction, you should call init() before usage, in case you do not provide layer here.

Parameters:
  • name (str) – The name uniquely identifies the loss. Earlier, this was the same as the layer name. This is still true for simple cases, but for losses coming from a subnetwork or other extended losses, it can be something else. It could look like “output”, or “output/sublayer”.
  • layer (LayerBase) – We can always point to a layer where this comes from (either in the subnet, or the parent layer).
  • layer_output (Data) – template describing the layer output
  • network (TFNetwork) – for which network to create this LossHolder. might be different from layer.network
  • loss (TFNetworkLayer.Loss) –
  • reduce_func (((tf.Tensor)->tf.Tensor)|None) – if given, will overwrite the reduce func for the loss. By default, every loss_value and error_value is a scalar (sum or average over the batches, and over the frames for frame-wise losses). However, if you provide reduce_func = TFUtil.identity, you can get the unreduced tensor.
  • loss_value (tf.Tensor|None) –
  • error_value (tf.Tensor|None) –
  • norm_factor (tf.Tensor) –
  • only_on_eval (bool) –
init(layer)[source]
Parameters:layer (LayerBase) –
Returns:self
Return type:LossHolder
set_layer_loss_error_value(layer, loss_value, error_value)[source]
Parameters:
  • layer (LayerBase) –
  • loss_value (tf.Tensor|None) –
  • error_value (tf.Tensor|None) –
get_layer()[source]
Returns:layer. assumes that it is set
Return type:LayerBase
get_only_on_eval()[source]
Returns:only_on_eval flag. assumes that it is set
Return type:bool
get_tf_name()[source]
Returns:name which can be used for a TF op, thus contains no “/” or other special chars
Return type:str
get_loss_value()[source]
Returns:loss value
Return type:tf.Tensor|None
get_loss_value_for_fetch()[source]
Returns:loss value for fetch
Return type:tf.Tensor|None
get_loss_value_for_objective()[source]
Returns:loss value for objective
Return type:tf.Tensor|None
get_error_value()[source]
Returns:error value for fetch
Return type:tf.Tensor|None
get_norm_factor()[source]
Returns:norm factor for loss and error
Return type:tf.Tensor
copy_new_base(name=None, layer=None, network=None, reduce_func=None)[source]
Parameters:
  • layer (LayerBase) –
  • network (TFNetwork) –
  • name (str) –
  • reduce_func (((tf.Tensor)->tf.Tensor)|None) –
Returns:new copy of LossHolder
Return type:LossHolder

exception TFNetwork.NetworkConstructionDependencyLoopException(network, layer_name, constructing_layers, net_dict)[source]

This is raised when there is a dependency loop in the network construction.

Parameters:
  • network (TFNetwork) –
  • layer_name (str) –
  • constructing_layers (list[str]) –
  • net_dict (dict[str,dict[str]]) –
exception TFNetwork.LayerNotFound[source]

Via TFNetwork.get_layer().

TFNetwork.help_on_tf_exception(exception, feed_dict, meta_step_info, extern_data, file=<_io.TextIOWrapper name='<stdout>' mode='w' encoding='UTF-8'>)[source]
Parameters:
  • exception (tf.errors.OpError|BaseException) –
  • feed_dict (dict[tf.Tensor,numpy.ndarray]) –
  • meta_step_info (dict[str]) –
  • extern_data (ExternData) –
  • file (typing.IO[str]) –
class TFNetwork.CustomCheckpointLoader(filename, saveable_params, params_prefix='', load_if_prefix='', network=None)[source]

This uses tf.train.NewCheckpointReader. It will do automatic conversions if needed, e.g. between different LSTM implementations, and it tries to automatically resolve variable renames.

Parameters:
  • filename (str) – filepattern for NewCheckpointReader
  • saveable_params (list[tf.Variable|tensorflow.python.training.saver.BaseSaverBuilder.SaveableObject]) –
  • load_if_prefix (str) – if given, only load variables with a name containing this string. the variables in the file are expected to have the same name but without this string.
  • network (TFNetwork) –
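
A hedged usage sketch (the checkpoint filename is made up):

    from TFNetwork import CustomCheckpointLoader

    loader = CustomCheckpointLoader(
        filename="/path/to/old-model.042",
        saveable_params=network.get_saveable_params_list(),
        network=network)
    loader.load_now(session)  # assigns the variables in the given session
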
class CustomParamImporter(layer, checkpoint_loader)[source]
Parameters:
  • layer (LayerBase) –
  • checkpoint_loader (CustomCheckpointLoader) –
assign_var(var, session)[source]
Parameters:
  • var (tf.Variable) –
  • session (tf.Session) –
class VariableValue(value=None, custom_param_importer=None)[source]
Parameters:
  • value (numpy.ndarray|None) –
  • custom_param_importer (CustomParamImporter|None) –
assign_var(var, session)[source]
Parameters:
  • var (tf.Variable) –
  • session (tf.Session) –
get_variable_value_map()[source]
Returns:var -> numpy array
Return type:dict[tf.Variable,CustomCheckpointLoader.VariableValue]
load_now(session)[source]
Parameters:session (tf.Session) –
Returns:nothing, will assign the variables in the session
set_as_custom_init()[source]
TFNetwork.set_custom_post_init(var, func)[source]

It registers the provided func such that it gets called for this variable in TFNetwork.initialize_params().

Parameters:
  • var (tf.Variable) –
  • func ((tf.Session)->None) –
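
A hedged registration sketch (var is an assumed existing tf.Variable; the loading logic inside the func is made up):

    from TFNetwork import set_custom_post_init

    def my_post_init(session):
        # e.g. assign externally computed values to var here
        session.run(var.initializer)

    set_custom_post_init(var=var, func=my_post_init)
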