Welcome to RETURNN
RETURNN, the RWTH extensible training framework for universal recurrent neural networks, is a Theano/TensorFlow-based implementation of modern recurrent neural network architectures. It is optimized for fast and reliable training of recurrent neural networks in a multi-GPU environment.
The high-level features and goals of RETURNN are:
- Writing config / code is simple and straightforward (setting up an experiment, defining a model)
- Debugging in case of problems is simple
- Reading config / code is simple (the defined model, training, and decoding all become clear)
- Allow for many different kinds of experiments / models
- Training speed
- Decoding speed
All of these items are important for research; decoding speed is especially important for production.
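To give a feel for the first two points: RETURNN configs are plain Python files that define the model and training setup as ordinary variables. Below is a minimal sketch of such a config; the layer names, layer options, and hyperparameter values are illustrative assumptions, not a tested setup.

```python
# Hypothetical minimal RETURNN config sketch.
# Configs are ordinary Python files; RETURNN reads the global variables.
# All names and values here are chosen for illustration only.
use_tensorflow = True
task = "train"

# Network definition: a bidirectional LSTM encoder
# feeding a softmax output layer trained with cross-entropy.
network = {
    "lstm_fwd": {"class": "rec", "unit": "lstm", "n_out": 512, "direction": 1},
    "lstm_bwd": {"class": "rec", "unit": "lstm", "n_out": 512, "direction": -1},
    "output": {"class": "softmax", "loss": "ce", "from": ["lstm_fwd", "lstm_bwd"]},
}

# Training hyperparameters (illustrative values).
batch_size = 5000
learning_rate = 0.001
num_epochs = 80
```

Because the config is Python, the whole experiment (model, training, decoding) is visible in one readable file, which is what the readability and debuggability goals above refer to.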
More specific features include:
- Mini-batch training of feed-forward neural networks
- Sequence-chunking based batch training for recurrent neural networks
- Long short-term memory (LSTM) recurrent neural networks, including our own fast CUDA kernel
- Multidimensional LSTM (GPU only, there is no CPU version)
- Memory management for large data sets
- Work distribution across multiple devices
- Flexible and fast architecture which allows all kinds of encoder-attention-decoder models
Here is the video recording of a RETURNN overview talk (slides, exercise sheet; hosted by eBay).
There are many example demos which work on artificially generated data, i.e. they should run as-is.
There are some real-world examples, such as setups for speech recognition on the Switchboard or LibriSpeech corpora.
Some benchmark setups against other frameworks can be found here. The results are in the RETURNN paper 2016. Performance benchmarks of our LSTM kernel vs CuDNN and other TensorFlow kernels are in TensorFlow LSTM Benchmark.
A changelog of recent development can be seen here.