Diagnostic functions for GPU information, failures, memory usage, etc.

returnn.torch.util.diagnose_gpu.print_available_devices(*, file: TextIO | None = None)[source]#

Print available devices, GPU (CUDA or other), etc.


Parameters: file – where to print to; stdout by default.
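Since the function accepts an optional `file` argument, its output can be captured instead of going to stdout. The sketch below is a hypothetical stand-in for the real function (which enumerates actual CUDA or other backends); it only demonstrates the `file`-parameter pattern with `io.StringIO`.

```python
import io
import sys
from typing import Optional, TextIO


def print_available_devices(*, file: Optional[TextIO] = None) -> None:
    # Hypothetical sketch: the real RETURNN function queries the
    # available backends; here we emit a placeholder device list.
    if file is None:
        file = sys.stdout
    devices = ["cpu"]  # real code would enumerate GPUs as well
    for idx, name in enumerate(devices):
        print(f"Device {idx}: {name}", file=file)


# Capture the report in a buffer instead of printing to stdout.
buf = io.StringIO()
print_available_devices(file=buf)
```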

returnn.torch.util.diagnose_gpu.print_using_cuda_device_report(dev: str | device, *, file: TextIO | None = None)[source]#

Theano and TensorFlow print something like: Using gpu device 2: GeForce GTX 980 (…). Print in a similar format so that scripts which grep our stdout keep working as before.
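The docstring only specifies the target line format, not the implementation. A minimal sketch of such a formatter, using a hypothetical helper name and example device values:

```python
def format_using_device_report(dev_index: int, dev_name: str) -> str:
    # Hypothetical helper: build the Theano/TensorFlow-style line
    # that downstream grep-based scripts expect.
    return f"Using gpu device {dev_index}: {dev_name}"


line = format_using_device_report(2, "GeForce GTX 980")
```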

returnn.torch.util.diagnose_gpu.diagnose_no_gpu() List[str][source]#

Diagnose why we have no GPU. Print to stdout, but also prepare summary strings.


Returns: summary strings.
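The documented contract is: print diagnostic details to stdout and also return short summary strings. A hedged sketch of that shape (the actual checks in RETURNN are more extensive; the messages here are illustrative assumptions):

```python
from typing import List


def diagnose_no_gpu() -> List[str]:
    # Hypothetical sketch: print findings and collect short summaries.
    summaries: List[str] = []
    try:
        import torch
    except ImportError:
        msg = "PyTorch is not installed"
        print(msg)
        summaries.append(msg)
        return summaries
    if not torch.cuda.is_available():
        msg = "CUDA is not available to PyTorch"
    else:
        msg = f"{torch.cuda.device_count()} CUDA device(s) found"
    print(msg)
    summaries.append(msg)
    return summaries


result = diagnose_no_gpu()
```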


Perform garbage collection, including any special logic for GPU.
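The special GPU logic is not spelled out here; a common pattern for PyTorch is to run the Python garbage collector and then release cached GPU memory back to the driver. This sketch assumes that pattern and is not necessarily what RETURNN does:

```python
import gc


def garbage_collect() -> None:
    # Run the regular Python garbage collector first.
    gc.collect()
    try:
        import torch
        if torch.cuda.is_available():
            # Assumed GPU-specific step: return cached allocator
            # blocks to the CUDA driver.
            torch.cuda.empty_cache()
    except ImportError:
        # No PyTorch available: plain gc.collect() is all we can do.
        pass


garbage_collect()
```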
