returnn.engine.base

Provides EngineBase.
- class returnn.engine.base.EngineBase(config: Config | None = None)[source]
Base class for a backend engine, such as TFEngine.Engine.
- Parameters:
config
- init_network_from_config(config: Config | None = None)[source]
Initialize network/model
- Parameters:
config
- init_train_from_config(config: Config | None = None)[source]
Initialize all engine parts needed for training
- Parameters:
config
- classmethod config_get_final_epoch(config)[source]
- Parameters:
config (returnn.config.Config)
- Return type:
int
- classmethod get_existing_models(config: Config, *, for_training: bool | None = None)[source]
- Parameters:
config
for_training – if True, will only return models which are suitable for resuming training. E.g. in case of PyTorch, it means that the optimizer state should be present. By default, will be True if the task is “train”.
- Returns:
dict epoch -> model filename (without extension)
- Return type:
dict[int,str]
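To make the epoch-to-filename mapping concrete, here is a small self-contained sketch of what such a scan could look like. The checkpoint naming scheme (`<prefix>.<epoch>.<ext>`, e.g. `model.003.pt`) and the helper name `scan_existing_models` are illustrative assumptions, not RETURNN's exact implementation.

```python
import os
import re
import tempfile


def scan_existing_models(model_dir: str, prefix: str = "model") -> dict[int, str]:
    """Sketch of get_existing_models: map epoch -> model filename (without extension).

    Assumes checkpoints are named "<prefix>.<epoch>.<ext>"; this naming scheme
    is an assumption for illustration, not RETURNN's guaranteed format.
    """
    pattern = re.compile(r"^%s\.(\d+)\.\w+$" % re.escape(prefix))
    models = {}
    for name in os.listdir(model_dir):
        m = pattern.match(name)
        if m:
            epoch = int(m.group(1))
            # store the filename without its extension, as the docstring describes
            models[epoch] = os.path.join(model_dir, name.rsplit(".", 1)[0])
    return dict(sorted(models.items()))


# usage: create a few fake checkpoint files and scan them
with tempfile.TemporaryDirectory() as d:
    for ep in (1, 2, 3):
        open(os.path.join(d, "model.%03d.pt" % ep), "w").close()
    models = scan_existing_models(d)
    print(sorted(models))
```

The dict is keyed by epoch so callers like `get_train_start_epoch` can simply take the maximum key to find the newest checkpoint.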
- classmethod get_start_epoch_no_existing_model(config: Config) -> int [source]
- Returns:
start epoch if no model exists
- classmethod get_epoch_model(config: Config)[source]
- Returns:
(epoch, model_filename). epoch is the epoch of the model filename.
- Return type:
(int|None, str|None)
- classmethod get_train_start_epoch(config: Config) -> int [source]
We will always automatically determine the best start (epoch,batch) tuple based on existing model files. This ensures that the files are present and enforces that there are no old outdated files which should be ignored. Note that epochs start at idx 1 and batches at idx 0.
- Parameters:
config
- Returns:
epoch
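The resume rule described above can be sketched in a few lines. The exact decision logic is an assumption here; the sketch only captures the documented behavior that the start epoch is derived from existing model files and that epochs are 1-based.

```python
def get_train_start_epoch(existing_models: dict[int, str]) -> int:
    """Sketch of the start-epoch rule: resume one epoch after the newest
    existing checkpoint, or start at epoch 1 if none exists (epochs are
    1-based). The exact rule is an assumption for illustration."""
    if not existing_models:
        return 1
    return max(existing_models) + 1


print(get_train_start_epoch({}))                                 # fresh start
print(get_train_start_epoch({1: "model.001", 2: "model.002"}))   # resume
```

Taking the maximum existing epoch also implicitly enforces the documented invariant that stale files from later epochs cannot be silently ignored: whatever is on disk determines where training continues.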
- classmethod epoch_model_filename(model_filename: str, epoch: int, *, is_pretrain: bool = False) -> str [source]
- Parameters:
model_filename
epoch
is_pretrain
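A plausible sketch of this filename construction is below. The `.pretrain` infix and the zero-padded three-digit epoch suffix are assumptions about the naming scheme for illustration only; consult the actual source for the guaranteed format.

```python
def epoch_model_filename(model_filename: str, epoch: int, *, is_pretrain: bool = False) -> str:
    """Illustrative sketch only: the ".pretrain" infix and the zero-padded
    epoch suffix are assumed, not RETURNN's guaranteed naming scheme."""
    return "%s%s.%03d" % (model_filename, ".pretrain" if is_pretrain else "", epoch)


print(epoch_model_filename("net-model/network", 5))
print(epoch_model_filename("net-model/network", 5, is_pretrain=True))
```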
- get_epoch_model_filename(epoch=None)[source]
- Parameters:
epoch (int|None)
- Returns:
filename, excluding TF specific postfix
- Return type:
str
- is_pretrain_epoch(epoch=None)[source]
- Parameters:
epoch (int|None)
- Returns:
whether this epoch is covered by the pretrain logic
- Return type:
bool
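As a simplified sketch, an epoch could count as a pretrain epoch if it falls within a configured number of pretrain epochs. This is an assumption about the pretrain logic for illustration; the real check depends on the configured pretrain construction.

```python
def is_pretrain_epoch(epoch: int, num_pretrain_epochs: int) -> bool:
    """Sketch: an epoch is a pretrain epoch if pretraining is enabled and the
    (1-based) epoch falls within the pretrain range. The exact condition is
    an assumption for illustration."""
    return num_pretrain_epochs > 0 and epoch <= num_pretrain_epochs


print(is_pretrain_epoch(2, num_pretrain_epochs=3))  # within pretrain range
print(is_pretrain_epoch(4, num_pretrain_epochs=3))  # first epoch after it
```

Under this sketch, `is_first_epoch_after_pretrain` would simply check `epoch == num_pretrain_epochs + 1`.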
- is_first_epoch_after_pretrain()[source]
- Returns:
whether the current epoch is the first epoch right after pretraining
- Return type:
bool
- forward_with_callback(*, dataset: Dataset, callback: ForwardCallbackIface, dataset_init_epoch: bool = True)[source]
Iterate through the dataset, calling forward_step from user config, collecting outputs in rf.get_run_ctx() via mark_as_output calls, and then calling callback for each entry.
- Parameters:
dataset
callback – see ForwardCallbackIface
dataset_init_epoch – whether the engine will call dataset.init_seq_order at the beginning, using the current epoch. If False, it assumes that dataset.init_seq_order was already called.
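The callback pattern can be illustrated with a self-contained sketch. The `init` / `process_seq` / `finish` method names follow the style of ForwardCallbackIface, but their exact signatures here are an assumption; `run_forward` is a hypothetical stand-in driver for `forward_with_callback`, not the engine's real loop.

```python
class CollectOutputsCallback:
    """Callback in the style of ForwardCallbackIface (init / process_seq /
    finish); the exact method names and signatures are assumed here."""

    def __init__(self):
        self.results = {}

    def init(self, *, model=None):
        # called once before iteration starts
        self.results.clear()

    def process_seq(self, *, seq_tag: str, outputs):
        # outputs stands in for whatever forward_step marked via mark_as_output
        self.results[seq_tag] = outputs

    def finish(self):
        # called once after the whole dataset was processed
        print("collected %d sequences" % len(self.results))


def run_forward(dataset, callback):
    """Hypothetical stand-in for forward_with_callback, iterating a mock
    dataset of (seq_tag, outputs) pairs."""
    callback.init(model=None)
    for seq_tag, outputs in dataset:
        callback.process_seq(seq_tag=seq_tag, outputs=outputs)
    callback.finish()


# usage with a mock "dataset"
cb = CollectOutputsCallback()
run_forward([("seq-0", [0.1, 0.2]), ("seq-1", [0.3])], cb)
```

Keeping per-sequence handling in the callback lets the engine own the iteration, batching, and device placement, while user code only sees finished outputs one sequence at a time.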