Welcome to RETURNN

See the RETURNN paper.

RETURNN (RWTH extensible training framework for universal recurrent neural networks) is a Theano/TensorFlow-based implementation of modern recurrent neural network architectures. It is optimized for fast and reliable training of recurrent neural networks in a multi-GPU environment.

Features include:

  • Mini-batch training of feed-forward neural networks
  • Sequence-chunking based batch training for recurrent neural networks (see the config sketch after this list)
  • Long short-term memory recurrent neural networks, including our own fast CUDA kernel
  • Multidimensional LSTM (GPU only, there is no CPU version)
  • Memory management for large data sets
  • Work distribution across multiple devices
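
Concretely, training is driven by a config file, which in RETURNN is a plain Python file. The sketch below is a minimal, hypothetical example of how mini-batch size, sequence chunking, and an LSTM layer might be configured; the dataset class, layer options, and all specific values are illustrative assumptions, not taken from this page.

    #!/usr/bin/env python3
    # Minimal hypothetical RETURNN config sketch (a RETURNN config is a plain Python file).
    # Assumed invocation: python3 rnn.py demo-lstm.config
    # All names and values below are illustrative, not prescriptions.

    use_tensorflow = True
    task = "train"

    # Artificially generated dataset, as used by the demos (assumed options).
    train = {"class": "Task12AXDataset", "num_seqs": 1000}
    dev = {"class": "Task12AXDataset", "num_seqs": 100}
    num_inputs = 9
    num_outputs = 2

    # Mini-batch construction with sequence chunking:
    # cut sequences into chunks of 50 frames, advancing by 25 frames per chunk.
    batch_size = 5000  # maximum total number of frames per mini-batch
    max_seqs = 40      # maximum number of sequences per mini-batch
    chunking = "50:25"

    # Network: one LSTM layer feeding a softmax output trained with cross-entropy.
    network = {
        "lstm": {"class": "rec", "unit": "nativelstm2", "n_out": 100, "from": "data"},
        "output": {"class": "softmax", "loss": "ce", "from": "lstm"},
    }

    # Training hyperparameters and model checkpoint location.
    learning_rate = 0.001
    num_epochs = 20
    model = "/tmp/returnn-demo/model"
    log_verbosity = 3

Under these assumed settings, long sequences are cut into overlapping 50-frame chunks (step 25), which keeps mini-batches densely packed while still exposing the recurrent layer to surrounding context.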

See Basic usage and Technological overview.

There are many example demos, which run on artificially generated data, i.e. they should work as-is.

There are some real-world examples, such as setups for speech recognition on the Switchboard corpus.

Some benchmark setups against other frameworks can be found here. The results are in the RETURNN paper.

There is also a wiki.