Tensorflow dense gradient explanation?


Tensorflow Problem Overview


I recently implemented a model and when I ran it I received this warning:

UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. 
This may consume a large amount of memory.
"Converting sparse IndexedSlices to a dense Tensor of unknown shape. "

With some similar parameter settings (embedding dimensionalities) suddenly the model is ridiculously slow.

  1. What does this warning imply? It appears that something I've done has caused all of the gradients to be dense, so backprop is doing dense matrix computations.
  2. If there is an issue with the model that's causing this, how can I identify and fix it?

Tensorflow Solutions


Solution 1 - Tensorflow

This warning is printed when a sparse tf.IndexedSlices object is implicitly converted to a dense tf.Tensor. This typically happens when one op (usually tf.gather()) backpropagates a sparse gradient, but the op that receives it does not have a specialized gradient function that can handle sparse gradients. As a result, TensorFlow automatically densifies the tf.IndexedSlices, which can have a devastating effect on performance if the tensor is large.

To fix this problem, you should try to ensure that the params input to tf.gather() (or the params inputs to tf.nn.embedding_lookup()) is a tf.Variable. Variables can receive the sparse updates directly, so no conversion is needed. Although tf.gather() (and tf.nn.embedding_lookup()) accept arbitrary tensors as inputs, passing anything other than a tf.Variable can lead to a more complicated backpropagation graph, resulting in the implicit conversion.
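To see why the tf.Variable requirement matters, here is a minimal, dependency-free sketch of the two gradient paths (plain Python stand-ins, not TensorFlow APIs): a scatter update that touches only the gathered rows, versus the densified form the warning is about.

```python
# An IndexedSlices-style gradient is just (index -> row) pairs for the
# rows tf.gather() actually touched. A variable can scatter that update
# into itself row by row; a plain tensor forces densification first.

dim = 4
table = {i: [1.0] * dim for i in range(1000)}    # toy "embedding table"
sparse_grad = {0: [0.1] * dim, 42: [0.2] * dim}  # gradient for two rows only

def apply_sparse_update(rows, grad, lr=0.5):
    """Scatter-subtract: only the rows present in the gradient are touched."""
    for idx, g in grad.items():
        rows[idx] = [v - lr * gi for v, gi in zip(rows[idx], g)]

def densify(grad, num_rows, dim):
    """What the implicit conversion does: materialize a row for every index,
    zero-filled where the sparse gradient had no entry."""
    return [grad.get(i, [0.0] * dim) for i in range(num_rows)]

apply_sparse_update(table, sparse_grad)       # touches 2 of 1000 rows
dense_grad = densify(sparse_grad, 1000, dim)  # allocates all 1000 rows first
rows_touched = len(sparse_grad)
```

The sparse path does work proportional to the number of gathered rows; the dense path allocates the full parameter-sized gradient before doing the same arithmetic, which is the memory cost the warning describes.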

Solution 2 - Tensorflow

A dense Tensor can be thought of as a standard Python list, and a sparse one as a collection of (index, value) pairs, e.g.

# dense
array = ['a', None, None, 'c']

# sparse
array = [(0, 'a'), (3, 'c')]

So, as you can see, if you have a lot of empty entries a sparse array will be much more efficient than a dense one, but if all entries are filled in, dense is far more efficient. In your case, somewhere in the TensorFlow graph a sparse array is being converted to a dense one of indeterminate size. The warning just says that you could waste a lot of memory this way; it might not be a problem at all if the sparse array is not too big or is already quite dense.
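The analogy above can be made runnable. The to_sparse/to_dense helpers here are illustrative, not TensorFlow functions:

```python
# Round-trip between the two representations from the example above.
dense = ['a', None, None, 'c']
sparse = [(0, 'a'), (3, 'c')]

def to_sparse(dense_list):
    """Keep only (index, value) pairs for the filled-in entries."""
    return [(i, v) for i, v in enumerate(dense_list) if v is not None]

def to_dense(sparse_pairs, length, fill=None):
    """Materialize every slot, filling the gaps; this allocation of the
    full-length array is what the warning is about."""
    out = [fill] * length
    for i, v in sparse_pairs:
        out[i] = v
    return out

# With mostly-empty data the sparse form stores far fewer entries:
mostly_empty = ['x'] + [None] * 999
stored_sparse = len(to_sparse(mostly_empty))                 # 1 pair
stored_dense = len(to_dense(to_sparse(mostly_empty), 1000))  # 1000 slots
```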

If you want to diagnose it, I would advise naming your various tensor objects; the warning will then print exactly which ones are involved in the conversion, and you can work out what you might be able to adjust to remove it.

Solution 3 - Tensorflow

I totally agree with mrry's answer.

Actually, I will post another solution for this problem: you could use tf.dynamic_partition() instead of tf.gather() to eliminate the warning.

The example code is below:

# Create the cells for the RNN network
lstm = tf.nn.rnn_cell.BasicLSTMCell(128)

# Get the output and state from dynamic rnn
output, state = tf.nn.dynamic_rnn(lstm, sequence, dtype=tf.float32, sequence_length = seqlen)

# Convert the output to a tensor and reshape it
# (note: tf.pack was renamed to tf.stack in TensorFlow 1.0)
outputs = tf.reshape(tf.pack(output), [-1, lstm.output_size])

# Set the number of partitions to 2
num_partitions = 2

# The partitions argument is a 1-D tensor of length
# batch_size * max_sequence_length, fed in through a placeholder.
# Set the entry for the last output of each sequence to 1 and leave the
# others 0, so the result is separated into two parts: the last outputs
# and the non-last outputs.
res_out = tf.dynamic_partition(outputs, partitions, num_partitions)

# prediction
preds = tf.matmul(res_out[1], weights) + bias
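The comments above gloss over how the partitions vector is actually built. A dependency-free sketch (the names seq_lengths and max_len are assumptions, matching the flattened [-1, lstm.output_size] layout used above):

```python
def build_partitions(seq_lengths, max_len):
    """Build the 1-D partitions vector described above: 0 for every
    timestep except a 1 at the last valid output of each sequence.
    Assumes outputs were flattened row-major to batch_size * max_len."""
    partitions = [0] * (len(seq_lengths) * max_len)
    for b, length in enumerate(seq_lengths):
        partitions[b * max_len + (length - 1)] = 1
    return partitions

# Two sequences of lengths 2 and 3, padded to max_len = 4:
parts = build_partitions([2, 3], 4)
# tf.dynamic_partition(outputs, parts, 2)[1] would then select exactly
# the last valid output of each sequence.
```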

Hope this could help you.

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Content Type | Original Author | Original Content on Stackoverflow
Question | Taaam | View Question on Stackoverflow
Solution 1 - Tensorflow | mrry | View Answer on Stackoverflow
Solution 2 - Tensorflow | Daniel Slater | View Answer on Stackoverflow
Solution 3 - Tensorflow | AI_ROBOT | View Answer on Stackoverflow