What is the difference between CuDNNLSTM and LSTM in Keras?

Tags: Tensorflow, Keras, Lstm

Tensorflow Problem Overview


In Keras, the high-level deep learning library, there are multiple types of recurrent layers; these include LSTM (Long Short-Term Memory) and CuDNNLSTM. According to the Keras documentation, CuDNNLSTM is a:

> Fast LSTM implementation backed by CuDNN. Can only be run on GPU, with the TensorFlow backend.

It is my belief that Keras automatically uses the GPU wherever possible. According to the TensorFlow build instructions, to have a working TensorFlow GPU backend, you will need CuDNN:

> The following NVIDIA software must be installed on your system:

> - NVIDIA's Cuda Toolkit (>= 7.0). We recommend version 9.0. For details, see NVIDIA's documentation. Ensure that you append the relevant Cuda pathnames to the LD_LIBRARY_PATH environment variable as described in the NVIDIA documentation.
> - The NVIDIA drivers associated with NVIDIA's Cuda Toolkit.
> - cuDNN (>= v3). We recommend version 6.0. For details, see NVIDIA's documentation, particularly the description of appending the appropriate pathname to your LD_LIBRARY_PATH environment variable.

Therefore, how would a CuDNNLSTM differ in any way from a normal LSTM using a TensorFlow GPU backend? Will CuDNNLSTM be automatically selected and replace the normal LSTM when an available TensorFlow GPU backend is found?

Tensorflow Solutions


Solution 1 - Tensorflow

Why don't you try it out for yourself and see? In my case, training a model with LSTM took 10 minutes 30 seconds. Simply switching the call from LSTM() to CuDNNLSTM() cut that to less than a minute.

I also noticed that switching to CuDNNLSTM() speeds up model.evaluate() and model.predict() substantially as well.
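For context, a minimal sketch of the swap being described. The shapes and layer sizes are hypothetical, and CuDNNLSTM requires TF 1.x-era Keras plus a CUDA-capable GPU, so the imports are deferred into the function:

```python
def build_model(use_cudnn=False):
    """Build the same toy model with either recurrent layer.

    Hypothetical shapes: sequences of length 100 with 32 features,
    a 10-class output. Requires TF 1.x-era Keras; CuDNNLSTM also
    needs a CUDA-capable GPU, hence the deferred imports.
    """
    from keras.models import Sequential
    from keras.layers import LSTM, CuDNNLSTM, Dense

    rnn = CuDNNLSTM if use_cudnn else LSTM  # the one-line change
    model = Sequential([
        rnn(128, input_shape=(100, 32)),
        Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy")
    return model
```

Everything else (data pipeline, training loop, evaluation) stays identical; only the layer class changes.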

Solution 2 - Tensorflow

In TensorFlow 2.0, the built-in LSTM and GRU layers have been updated to leverage CuDNN kernels by default when a GPU is available. With this change, the prior keras.layers.CuDNNLSTM/CuDNNGRU layers have been deprecated, and you can build your model without worrying about the hardware it will run on.

Since the CuDNN kernel is built with certain assumptions, the layer will not be able to use it if you change the defaults of the built-in LSTM or GRU layers, for example by using a non-default activation or setting recurrent_dropout above zero.
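As a rough illustration of those assumptions (the constraints listed in the TF RNN guide), here is a small pure-Python checker; `uses_cudnn_kernel` is a hypothetical helper for illustration, not part of the Keras API:

```python
# Argument values TF 2.x's keras.layers.LSTM needs in order to dispatch
# to the fused cuDNN kernel (per the TF RNN guide); changing any of them
# silently falls back to the slower generic implementation.
CUDNN_DEFAULTS = {
    "activation": "tanh",
    "recurrent_activation": "sigmoid",
    "recurrent_dropout": 0,
    "unroll": False,
    "use_bias": True,
}

def uses_cudnn_kernel(**lstm_kwargs):
    """Return True if these LSTM kwargs keep the cuDNN fast path."""
    return all(lstm_kwargs.get(k, d) == d for k, d in CUDNN_DEFAULTS.items())

print(uses_cudnn_kernel())                       # True: all defaults
print(uses_cudnn_kernel(recurrent_dropout=0.2))  # False: breaks the fast path
```

The same fallback also happens when inputs are masked in a way the fused kernel cannot handle, so it is worth checking the guide when performance suddenly drops.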

Check the TensorFlow RNN documentation: https://www.tensorflow.org/guide/keras/rnn

Solution 3 - Tensorflow

TL;DR: The difference is a 15x speed-up in model training time!

Performance Benchmark: comparison of the standard test machines.
1 iteration of training on 612235 samples.

keras.layers.LSTM

Intel i5-4690 (CPU only):
612235/612235 [==============================] - 3755s 6ms/step - loss: 2.7339 - acc: 0.5067 - val_loss: 2.1149 - val_acc: 0.6175

GTX 950 & Intel i5-4690:
612235/612235 [==============================] - 1417s 2ms/step - loss: 2.7007 - acc: 0.5137 - val_loss: 2.0983 - val_acc: 0.6199

2.5x gain with GPU.

GTX 970 & Intel i5-4690:
612235/612235 [==============================] - 1322s 2ms/step - loss: 1.9214 - acc: 0.6442 - val_loss: 1.8808 - val_acc: 0.6461

Negligible further gain with a more powerful GPU.

RTX 2070 & Intel i7-9700K:
612235/612235 [==============================] - 1012s 2ms/step - loss: 2.7268 - acc: 0.5111 - val_loss: 2.1162 - val_acc: 0.6234

Very minimal gain even with substantial hardware upgrades!

keras.layers.CuDNNLSTM

RTX 2070 & Intel i7-9700K:
612235/612235 [==============================] - 69s 112us/step - loss: 1.9139 - acc: 0.6437 - val_loss: 1.8668 - val_acc: 0.6469

54x gain over CPU!
15x gain over the regular (non-cuDNN) LSTM implementation!

Solution 4 - Tensorflow

GPUs are good for massively parallel computation. Most linear algebra operations can be parallelized to improve performance: operations such as matrix multiplication, and the gradient computations behind gradient descent, can be executed in parallel over large matrices with GPU support. CUDA (Compute Unified Device Architecture) provides an interface that allows these operations to take advantage of GPU parallelism, and cuDNN implements kernels for large matrix operations on the GPU using CUDA.
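To make that concrete, the core of an LSTM step is a handful of dense matrix products like the NumPy sketch below (shapes are illustrative); it is exactly this kind of large matmul that CUDA/cuDNN executes in parallel on the GPU:

```python
import numpy as np

rng = np.random.default_rng(0)

batch, hidden, features = 4, 8, 16
x = rng.standard_normal((batch, features))       # current inputs
h = rng.standard_normal((batch, hidden))         # previous hidden state
W = rng.standard_normal((features, 4 * hidden))  # input weights, 4 gates
U = rng.standard_normal((hidden, 4 * hidden))    # recurrent weights, 4 gates

# One fused pre-activation for all four LSTM gates (input, forget,
# cell, output): a single large matrix product that a GPU can spread
# across thousands of cores, instead of four small sequential ones.
gates = x @ W + h @ U
print(gates.shape)  # (4, 32)
```

Fusing the four gate computations into one big matmul like this is part of why the cuDNN kernel is so much faster than a step-by-step CPU implementation.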

Here, CuDNNLSTM is designed for CUDA parallel processing and cannot run without a GPU, while LSTM is designed for ordinary CPUs. The faster execution time comes from this parallelism.

Solution 5 - Tensorflow

lbcommer's comment hits the nail on the head. Switching from an LSTM layer to a CuDNNLSTM layer is much, much faster (approximately 10-20x), but you lose some options, making it less versatile. Important options you lose include masking, custom activations, and dropout.

However, some of these properties can arguably be reintroduced into the model with additional layers elsewhere.

So if any of that matters, or you don't have a GPU, or deployment is a concern, stick with LSTM. Otherwise, CuDNNLSTM makes sense.

Also consider GRU for smaller datasets, as it is faster and more memory-efficient; it only starts to suffer in accuracy as the dataset grows.

Also, look at transformers, whose attention mechanism is available in Keras through

tf.keras.layers.Attention()

These are also faster, because all the inputs are ingested at once rather than sequentially.

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

| Content Type | Original Author | Original Content on Stackoverflow |
|---|---|---|
| Question | krismath | View Question on Stackoverflow |
| Solution 1 - Tensorflow | cryanbhu | View Answer on Stackoverflow |
| Solution 2 - Tensorflow | Sillians | View Answer on Stackoverflow |
| Solution 3 - Tensorflow | Shiwakant Bharti | View Answer on Stackoverflow |
| Solution 4 - Tensorflow | Narasimha Prasanna HN | View Answer on Stackoverflow |
| Solution 5 - Tensorflow | DataMonkey | View Answer on Stackoverflow |