Device

class Device.Device(device, config, blocking=False, num_batches=1, update_specs=None)[source]
Parameters:
  • device (str) – name, “gpu*” or “cpu*”
  • config (Config.Config) – config
  • blocking (bool) – False -> multiprocessing; otherwise it is blocking
  • num_batches (int) – num batches to train on this device
  • update_specs (dict)
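
For orientation, a minimal construction sketch following the signature above; how the Config gets populated, and the chosen device string, are assumptions for illustration:

    from Config import Config
    from Device import Device

    config = Config()  # assumed to be loaded/populated elsewhere in the setup

    # blocking=True keeps computation in this process;
    # blocking=False (the default) runs the device in a separate process.
    dev = Device("gpu0", config=config, blocking=True, num_batches=1)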

alloc_data(shapes, max_ctc_length=0)[source]
Parameters: shapes (dict[str,list[int]]) – by data-key; the format is usually (time, batch, features)
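
As an illustration of the shapes argument described above (the data keys and sizes are made up):

    # keyed by data-key; each shape uses the usual (time, batch, features) layout
    shapes = {
        "data": [512, 27, 40],   # e.g. 512 time frames, batch of 27, 40-dim features
        "classes": [512, 27],    # a sparse target stream may omit the feature axis
    }
    dev.alloc_data(shapes)
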
clear_memory(network)[source]
compute_run(task)[source]
detect_nan(i, node, fn)[source]
dump_model_broken_info(info)[source]
fast_check_model_is_broken_from_result(output, outputs_format)[source]
finish_epoch_stats()[source]
forward(use_trainnet=False)[source]
get_compute_func(task)[source]
get_device_clock()[source]
get_device_memory()[source]
get_device_shaders()[source]
get_memory_info()[source]
get_net_train_params(network)[source]
get_num_updates()[source]
get_task_network()[source]
Return type: LayerNetwork
initialize(config, update_specs=None, json_content=None, train_param_args=None)[source]
is_device_proc()[source]
make_ce_ctc_givens(network)[source]
make_ctc_givens(network)[source]
make_givens(network)[source]
make_input_givens(network)[source]
static make_result_dict(output, outputs_format)[source]
make_sprint_givens(network)[source]
need_reinit(json_content, train_param_args=None)[source]
prepare(network, updater=None, train_param_args=None, epoch=None)[source]

Call this from the main proc before we do anything else. This is called before we start any training, e.g. at the beginning of an epoch.
Parameters:
  • network (LayerNetwork)
  • updater (Updater | None)
  • train_param_args (dict | None)
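
A hedged sketch of the call order this implies, driven from the main proc; the network, updater and epoch objects as well as the task name are assumptions here:

    # once per epoch, before any work is dispatched to the device
    dev.prepare(network, updater=updater, train_param_args=None, epoch=epoch)
    dev.run("train")  # dispatch a task afterwards; the task name is illustrative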

process(asyncTask)[source]
process_inner(device, config, update_specs, asyncTask)[source]
reinit(json_content, train_param_args=None)[source]

Reinits for a new network topology. This can take a while because the gradients have to be recomputed.
Returns: len of train_params
Return type: int

result()[source]
Returns: the outputs, and maybe a format describing the output list. See self.make_result_dict() for how to interpret this list, and self.initialize() where the list is defined.
Return type: (list[numpy.ndarray], list[str] | None)
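
A sketch of consuming a result, using make_result_dict() documented above (assuming a task was dispatched via run() earlier):

    outputs, outputs_format = dev.result()
    if outputs_format:
        # map the positional output list to named entries; the key names depend on the task
        results = dev.make_result_dict(outputs, outputs_format)
    else:
        results = outputs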

run(task)[source]
set_learning_rate(learning_rate)[source]
set_net_encoded_params(network_params)[source]

This updates all params, not just the train params.

set_net_params(network)[source]

This updates all params, not just the train params.

startProc(*args, **kwargs)[source]
start_epoch_stats()[source]
sync_net_train_params()[source]
sync_used_targets()[source]

Updates self.used_targets for the host.

terminate()[source]
update_data()[source]
update_memory()[source]
Device.getDevicesInitArgs(config)[source]
Return type: list[dict[str]]
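
A hedged usage sketch, assuming the returned dicts are keyword arguments for constructing Device instances (suggested by the name and return type, not stated above):

    from Device import Device, getDevicesInitArgs

    devices = [Device(**args) for args in getDevicesInitArgs(config)]
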
Device.get_current_seq_index_mask(target)[source]
Device.get_current_seq_tags()[source]
Returns: current seq tags (seq names) of the current batch. Assumes is_device_host_proc().
Return type: list[str]
Device.get_device_attributes()[source]
Device.get_gpu_names()[source]
Device.get_num_devices()[source]
Device.have_gpu()[source]
Device.is_device_host_proc()[source]
Device.is_using_gpu()[source]
Device.sort_strint(txt)[source]
Device.str2int(txt)[source]