TFNetwork

class TFNetwork.ExternData(data=None, default_input='data', default_target='classes')[source]

This holds a Data instance for every data-key of external data from the dataset, i.e. the description of the data, such as its shape and sparsity.

Parameters:data (None|dict[str,dict[str]]) – optional init kwargs for Data
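
For illustration, a minimal sketch of describing the external data (the keys, dims and sparsity here are made-up example values, not prescribed by this class):

    from TFNetwork import ExternData

    extern_data = ExternData(data={
        "data": {"dim": 40},                       # dense input features
        "classes": {"dim": 1000, "sparse": True},  # sparse target labels
    })
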
init_from_config(config)[source]
Parameters:config (Config.Config) –
init_from_dataset(dataset)[source]
Parameters:dataset (Dataset.Dataset) –
check_matched_dataset(dataset, used_data_keys=None)[source]
Parameters:
  • dataset (Dataset.Dataset) –
  • used_data_keys (set[str]|None) –
Returns:nothing, will assert the check

register_data_from_dict(data)[source]
Parameters:data (dict[str,dict[str]]) – init kwargs for Data
register_data(data)[source]
Parameters:data (Data) – will use data.name as the key
has_data(name)[source]
get_data(name)[source]
get_default_input_data()[source]
get_default_target_data()[source]
get_data_description()[source]
get_queue_args(with_batch_dim, fixed_batch_dim=None)[source]
Parameters:
  • with_batch_dim (bool) –
  • fixed_batch_dim (int|None) –
Returns:kwargs for tf.Queue.__init__
Return type:dict[str,list]

get_sorted_data_items()[source]
Return type:list[(str,Data)]
class TFNetwork.TFNetwork(config=None, extern_data=None, rnd_seed=42, train_flag=False, eval_flag=False, search_flag=False, parent_layer=None, parent_net=None, extra_parent_net=None, name=None)[source]
Parameters:
  • config (Config.Config) – only needed to init extern_data if not specified explicitly
  • extern_data (ExternData|None) –
  • rnd_seed (int) –
  • train_flag (bool|tf.Tensor) – True if we want to use this model in training, False if in eval, or dynamic
  • eval_flag (bool) – whether to calculate losses. If train_flag is not False, this will be set to True
  • search_flag (bool) – whether we perform a beam-search. see usage
  • parent_layer (TFNetworkLayer.LayerBase|None) –
  • parent_net (TFNetwork|None) –
  • extra_parent_net (TFNetwork|None) –
  • name (str) – only for debugging
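
A minimal construction sketch (the flags and name are example values, and extern_data is assumed from the ExternData example above):

    from TFNetwork import TFNetwork

    net = TFNetwork(extern_data=extern_data, train_flag=True, name="root")
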
get_absolute_name_scope_prefix()[source]
Returns:scope, always with “/” at the end, or “”
Return type:str
construct_from(list_or_dict)[source]
Parameters:list_or_dict (list[dict[str]] | dict[str,dict[str]]) –
construct_from_list(net_list)[source]
Parameters:net_list (list[dict[str]]) – list of layer descriptions
construct_from_dict(net_dict)[source]
Parameters:net_dict (dict[str,dict[str]]) –
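
For example, a small two-layer net dict (the layer classes and options are illustrative; see the layer documentation for the full set):

    net.construct_from_dict({
        "hidden": {"class": "linear", "activation": "relu", "n_out": 128,
                   "from": ["data"]},
        "output": {"class": "softmax", "loss": "ce", "from": ["hidden"]},
    })
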
construct_extra_net(net_dict, layer_list, search_flag=None)[source]

The purpose is to create another net like self but with different flags, e.g. with search_flag = True. That extra_net can have different losses, which will be added.

Parameters:
  • net_dict (dict[str,dict[str]]) –
  • layer_list (list[str]) –
  • search_flag (bool|None) –
construct_layer(net_dict, name, get_layer=None, add_layer=None)[source]
Parameters:
  • net_dict (dict[str,dict[str]]) –
  • name (str) – layer name
  • get_layer (((str) -> LayerBase)|None) – optional, for source layers, for transform_config_dict. By default, this wraps self.construct_layer().
  • add_layer (((str, LayerBase, dict) -> LayerBase)|None) – by default self.add_layer
Return type:LayerBase

add_layer(name, layer_class, **layer_desc)[source]

This will construct the layer given the layer_desc arguments, and add it to the network.

Parameters:
  • name (str) –
  • layer_class ((()->LayerBase)|LayerBase) –
  • layer_desc – contains the kwargs for the layer class. The args should have been transformed via layer_class.transform_config_dict before (see construct_layer). Must not contain “name” and “network”, which will be added automatically here. Should not contain “output”, which will be initialized via layer_class.get_out_data_from_opts. The layer_class will usually then define the layer.output and its placeholder. There is one notable exception: the InternalLayer, where you predefine the output.
get_extern_data(key, mark_data_key_as_used=True)[source]

Returns the Data instance and adds the key to self.used_data_keys if mark_data_key_as_used.

Parameters:
  • key (str) – e.g. “data” or “classes”
  • mark_data_key_as_used (bool) –
Return type:Data
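
Assumed usage sketch (Data.placeholder is the underlying tf.Tensor):

    data = net.get_extern_data("data")        # marks "data" as used
    targets = net.get_extern_data("classes")  # marks "classes" as used
    x = data.placeholder                      # e.g. shape [batch, time, dim]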

get_seq_tags(mark_data_key_as_used=True)[source]
Parameters:mark_data_key_as_used (bool) – for extern_data
Returns:tensor of shape (batch,) of dtype string, via extern_data
Return type:tf.Tensor
construct_objective()[source]
maybe_construct_objective()[source]
get_all_losses()[source]
get_all_errors()[source]
Return type:dict[str,tf.Tensor]
Returns:layer-name -> error dict. Contains only the layers which have some error value.
get_objective()[source]
get_total_loss()[source]
Return type:int|tf.Tensor
Returns:0 if no loss, or tf.Tensor
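
A fetching sketch (assumes losses were constructed and a feed_dict is prepared elsewhere):

    total_loss = net.get_total_loss()  # 0 if no loss was constructed
    # loss_value = session.run(total_loss, feed_dict=feed_dict)
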
get_total_constraints()[source]
get_used_targets()[source]
Returns:sorted list of targets
Return type:list[str]
get_default_target()[source]
Returns:e.g. “classes”
Return type:str
get_output_layers()[source]
Return type:list[LayerBase]
get_default_output_layer_name()[source]
Return type:str|None
Returns:default output layer name if there is one, or None
get_default_output_layer(must_exist=True)[source]
Parameters:must_exist (bool) – if it does not exist, will raise an exception
Return type:LayerBase|None
Returns:the default output layer
get_layer(layer_name)[source]

Normally just self.layers[layer_name] but with some extra logic added, such as resolving “base:” prefix to the parent network. Raises LayerNotFound if the layer is not found.

Parameters:layer_name (str) –
Return type:LayerBase
get_params_list()[source]
Returns:list of model variables, i.e. from all the layers, excluding auxiliary vars like global_step
Return type:list[tf.Variable]
get_saveable_params_list()[source]
Returns:list of model variables or SaveableObject, to save/restore
Return type:list[tf.Variable|tensorflow.python.training.saver.BaseSaverBuilder.SaveableObject]
get_params_nested_dict()[source]
Returns:dict: layer_name -> param_name -> variable
Return type:dict[str,dict[str,tf.Variable]]
get_trainable_params()[source]
Returns:list of variables
Return type:list[tf.Variable]
declare_train_params(hidden_layer_selection=None, with_output=None)[source]
get_num_params()[source]
Returns:number of model parameters, i.e. total dimension
Return type:int
initialize_params(session)[source]
Parameters:session (tf.Session) –

Note: This will add a new node to the graph for each call! It will also overwrite the already initialized variables. So you should call this only once after network construction, and before you maybe load some of the params from external sources. If you know that you will load all params explicitly, you do not need to call this function.
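
A typical pattern following the note above (a sketch, assuming TF1-style sessions as used throughout this module):

    import tensorflow as tf

    with tf.Session() as session:
        net.initialize_params(session)  # once, right after construction
        # ... then optionally load (some of) the params from external sources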

get_var_assigner(var)[source]
Parameters:var (tf.Variable) –
get_param_values_dict(session)[source]
Parameters:session (tf.Session) –
Returns:dict: layer_name -> param_name -> variable numpy array
Return type:dict[str,dict[str,numpy.ndarray]]

Note that this excludes auxiliary params.

set_param_values_by_dict(values_dict, **kwargs)[source]
Parameters:
  • values_dict (dict[str,dict[str,numpy.ndarray]]) –
  • kwargs – passed to LayerBase.set_param_values_by_dict()

Note that this excludes auxiliary params.

get_auxiliary_params()[source]
get_params_serialized(session)[source]
Parameters:session (tf.Session) –
Return type:TFNetworkParamsSerialized
set_params_by_serialized(serialized, session, **kwargs)[source]
Parameters:
  • serialized (TFNetworkParamsSerialized) –
  • session (tf.Session) –
  • kwargs – passed to set_param_values_by_dict()
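
A round-trip sketch (assumed usage), snapshotting all params including the auxiliary ones:

    snapshot = net.get_params_serialized(session)  # TFNetworkParamsSerialized
    # ... later, restore the exact same state:
    net.set_params_by_serialized(snapshot, session=session)
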
set_global_train_step(step, session)[source]
Parameters:
  • step (int) –
  • session (tf.Session) –
get_global_train_step(session)[source]
Parameters:session (tf.Session) –
Return type:int
reset_saver()[source]

Resets the tf.train.Saver object which will be used for load_params_from_file() and save_params_to_file(). Warning: Don’t repeat that too often as it will always create new ops in the computation graph.

save_params_to_file(filename, session)[source]

Will save the model parameters to the given filename. Note that the model parameters live inside the current TF session.

Parameters:
  • filename (str) –
  • session (tf.Session) –

load_params_from_file(filename, session)[source]

Will load the model parameters from the given filename. Note that the model parameters live inside the current TF session.

Parameters:
  • filename (str) –
  • session (tf.Session) –
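
A save/load sketch (the filename is an example):

    net.save_params_to_file("net-model/network.001", session=session)
    # ... later, e.g. after reconstructing the same network:
    net.load_params_from_file("net-model/network.001", session=session)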

print_network_info(name='Network')[source]
cond_on_train(fn_train, fn_eval)[source]

Uses fn_train() or fn_eval() based on self.train_flag. It will be a branched evaluation.

Parameters:
  • fn_train (()->tf.Tensor) –
  • fn_eval (()->tf.Tensor) –
Returns:

fn_train() if self.train_flag else fn_eval()

Return type:

tf.Tensor
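
For example, applying dropout only during training (a sketch; x and the keep_prob value are assumptions):

    y = net.cond_on_train(
        fn_train=lambda: tf.nn.dropout(x, keep_prob=0.9),
        fn_eval=lambda: x)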

get_search_choices(sources=None, src=None, base_search_choice=None, _visited=None)[source]

Recursively searches through all sources, and if there is a ChoiceLayer / any layer with search_choices, returns it. Could also go to the parent network. If there are multiple, it assumes they are on the same search-sequence in the search-tree and it will return the last one.

Parameters:
  • src (LayerBase|None) –
  • base_search_choice (LayerBase|None) –
  • sources (list[LayerBase]|None) –
  • _visited (set[LayerBase]|None) – keep track about visited layers in case there are circular deps
Returns:

(direct or indirect) source LayerBase which has search_choices, or None

Return type:

LayerBase|None

debug_search_choices(base_search_choice)[source]
Parameters:base_search_choice (LayerBase) –
get_data_batch_dim()[source]

Get the batch-dim size, i.e. the number of sequences in the current batch. Given that the data tensor is usually of shape [batch, time, dim], this would return shape(data)[0].

The code currently assumes that the batch-dim can be taken from the extern data. If it does not have that available for some reason (e.g. some subnetwork), it will try some alternative sources and assumes that they have the correct batch-dim.

Note that the batch-dim usually stays the same across the whole network, and every individual batch sequence stays related. One notable exception is the choice layer, where the batch-dim will get expanded by the beam search if search is used, as well as in all following layers, until there is a decide layer.

Returns:int scalar tensor which states the batch-dim
Return type:int|tf.Tensor
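
For example (assumed usage):

    batch_dim = net.get_data_batch_dim()
    # e.g. to build a tensor with a leading batch dimension:
    # zeros = tf.zeros([batch_dim, feature_dim])
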
set_rec_step_info(i, end_flag=None, seq_lens=None)[source]

Used by _SubnetworkRecCell.

Parameters:
  • i (tf.Tensor) – scalar, int32, current step (time)
  • end_flag (tf.Tensor|None) – (batch,), bool, says that the current sequence has ended
  • seq_lens (tf.Tensor|None) – (batch,), int32, seq lens

have_rec_step_info()[source]
get_rec_step_info()[source]

Assumes that have_rec_step_info is True.

Return type:TFNetworkRecLayer.RecStepInfoLayer

get_rec_step_index()[source]

Assumes that have_rec_step_info is True.

Return type:tf.Tensor
Returns:scalar, int32
get_config(consider_global_config=True, fallback_dummy_config=True)[source]
Parameters:
  • consider_global_config (bool) – if no config is set, check for global config
  • fallback_dummy_config (bool) – if no config, return a new empty Config, otherwise return None
Return type:Config.Config|None

class TFNetwork.TFNetworkParamsSerialized(values_dict, global_train_step)[source]

Holds all the params as numpy arrays, including auxiliary params.

Parameters:
  • values_dict (dict[str,dict[str,numpy.ndarray]]) –
  • global_train_step (int) –
exception TFNetwork.NetworkConstructionDependencyLoopException(network, layer_name, constructing_layers, net_dict)[source]

This is raised when there is a dependency loop in the network construction.

Parameters:
  • network (TFNetwork) –
  • layer_name (str) –
  • constructing_layers (list[str]) –
  • net_dict (dict[str,dict[str]]) –
exception TFNetwork.LayerNotFound[source]

Via TFNetwork.get_layer().

TFNetwork.help_on_tf_exception(exception, feed_dict, meta_step_info, extern_data, file=sys.stdout)[source]
Parameters:
  • exception (tf.errors.OpError) –
  • feed_dict (dict[tf.Tensor,numpy.ndarray]) –
  • meta_step_info (dict[str]) –
  • extern_data (ExternData) –
  • file (typing.IO[str]) –