What's the difference between tf.placeholder and tf.Variable?

Tensorflow

Tensorflow Problem Overview


I'm a newbie to TensorFlow. I'm confused about the difference between tf.placeholder and tf.Variable. In my view, tf.placeholder is used for input data, and tf.Variable is used to store the state of data. This is all I know.

Could someone explain to me more in detail about their differences? In particular, when to use tf.Variable and when to use tf.placeholder?

Tensorflow Solutions


Solution 1 - Tensorflow

In short, you use tf.Variable for trainable variables such as weights (W) and biases (B) for your model.

import math
import tensorflow as tf

# IMAGE_PIXELS and hidden1_units are defined in the tutorial this is taken from.
weights = tf.Variable(
    tf.truncated_normal([IMAGE_PIXELS, hidden1_units],
                        stddev=1.0 / math.sqrt(float(IMAGE_PIXELS))),
    name='weights')

biases = tf.Variable(tf.zeros([hidden1_units]), name='biases')

tf.placeholder is used to feed actual training examples.

images_placeholder = tf.placeholder(tf.float32, shape=(batch_size, IMAGE_PIXELS))
labels_placeholder = tf.placeholder(tf.int32, shape=(batch_size,))

This is how you feed the training examples during the training:

for step in range(FLAGS.max_steps):  # the original (Python 2) tutorial used xrange
    feed_dict = {
        images_placeholder: images_feed,
        labels_placeholder: labels_feed,
    }
    _, loss_value = sess.run([train_op, loss], feed_dict=feed_dict)

Your tf.Variables will be trained (modified) as a result of this training.

See more at https://www.tensorflow.org/versions/r0.7/tutorials/mnist/tf/index.html. (Examples are taken from the web page.)

Solution 2 - Tensorflow

The difference is that with tf.Variable you have to provide an initial value when you declare it. With tf.placeholder you don't provide an initial value; instead, you specify the value at run time with the feed_dict argument of Session.run.
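A minimal sketch of that contrast (TF 1.x graph mode; the names here are illustrative):

import tensorflow as tf

v = tf.Variable(2.0)            # initial value is required at declaration
p = tf.placeholder(tf.float32)  # no initial value, just a dtype (and optionally a shape)
out = v * p

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())  # variables must be initialized
    print(sess.run(out, feed_dict={p: 3.0}))     # 6.0; the placeholder is fed only now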

Solution 3 - Tensorflow

Since tensor computations in TensorFlow compose into graphs, it's better to interpret the two in terms of graphs.

Take for example the simple linear regression

WX + B = Y

where W and B stand for the weights and bias, X for the observations' inputs, and Y for the observations' outputs.

Obviously X and Y are of the same nature (manifest variables), which differs from that of W and B (latent variables). X and Y are values of the samples (observations) and hence need a place to be filled, while W and B are the weights and bias, Variables (whose previous values affect the later ones) in the graph, which should be trained using different X and Y pairs. We feed different samples to the Placeholders to train the Variables.
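A minimal sketch of this graph, assuming scalar weights and one-dimensional toy data (the names follow the equation above):

import tensorflow as tf

# Latent variables: trained by the optimizer, initial values required.
W = tf.Variable(0.0)
B = tf.Variable(0.0)

# Manifest variables: filled with observed samples at run time.
X = tf.placeholder(tf.float32)
Y = tf.placeholder(tf.float32)

prediction = W * X + B
loss = tf.reduce_mean(tf.square(prediction - Y))
train = tf.train.GradientDescentOptimizer(0.01).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(200):
        sess.run(train, feed_dict={X: [1., 2., 3.], Y: [2., 4., 6.]})
    print(sess.run([W, B]))  # W approaches 2, B approaches 0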

We only need to save or restore the Variables (at checkpoints); together with the code, that is enough to rebuild the graph.
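A minimal save/restore sketch using tf.train.Saver (the checkpoint path is an arbitrary example):

import tensorflow as tf

W = tf.Variable(tf.zeros([2, 2]), name='W')
saver = tf.train.Saver()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    saver.save(sess, '/tmp/model.ckpt')     # only the Variables go into the checkpoint

with tf.Session() as sess:
    saver.restore(sess, '/tmp/model.ckpt')  # no initializer needed after a restore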

Placeholders are mostly holders for the different datasets (for example, training data or test data). Variables, however, are trained during the training process for a specific task, i.e., to predict the outcome of the input or to map the inputs to the desired labels. They remain the same until you retrain or fine-tune the model, using different (or the same) samples to fill the Placeholders, often through the feed_dict. For instance:

session.run(a_graph, feed_dict={a_placeholder_name: sample_values})

Placeholders can also be used to pass parameters into a model at run time (for example, a dropout rate).

If you change the placeholders of a model (add, delete, or reshape them, etc.) in the middle of training, you can still reload the checkpoint without any other modifications. But if the variables of a saved model are changed, you should adjust the checkpoint accordingly before reloading it to continue training (all variables defined in the graph should be available in the checkpoint).

To sum up: if the values come from the samples (observations you already have), you can safely make a placeholder to hold them, while if you need a parameter to be trained, use a Variable (simply put, set Variables for the values you want TF to obtain automatically).

In some interesting models, like a style-transfer model, the input pixels are the thing being optimized and the normally-called model variables are held fixed; in that case we should make the input (usually initialized randomly) a variable, as implemented in that link.

For more information, please refer to this simple and illustrative doc.

Solution 4 - Tensorflow

TL;DR

Variables

  • For parameters to learn
  • Values can be derived from training
  • Initial values are required (often random)

Placeholders

  • Allocated storage for data (such as for image pixel data during a feed)
  • Initial values are not required (but can be set, see tf.placeholder_with_default)
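A small sketch of tf.placeholder_with_default, which behaves like a placeholder but falls back to a default value when nothing is fed:

import tensorflow as tf

x = tf.placeholder_with_default(tf.constant(1.0), shape=[])
y = x * 10.0

with tf.Session() as sess:
    print(sess.run(y))            # 10.0: the default value is used
    print(sess.run(y, {x: 5.0}))  # 50.0: the fed value overrides the default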

Solution 5 - Tensorflow

The most obvious difference between tf.Variable and tf.placeholder is that


> you use variables to hold and update parameters. Variables are in-memory buffers containing tensors. They must be explicitly initialized and can be saved to disk during and after training. You can later restore saved values to exercise or analyze the model.

Initialization of the variables is done with sess.run(tf.global_variables_initializer()). Also, while creating a variable, you need to pass a Tensor as its initial value to the Variable() constructor, so when you create a variable you always know its shape.
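For example (a minimal sketch):

import tensorflow as tf

W = tf.Variable(tf.random_normal([3, 2]))  # the initial tensor fixes the shape (3, 2)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())  # required before reading W
    print(sess.run(W).shape)                     # (3, 2)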


On the other hand, you can't update a placeholder. Placeholders also should not be initialized; rather, because they are a promise to have a tensor, you need to feed a value into them: sess.run(<op>, {a: <some_val>}). And lastly, in comparison to a variable, a placeholder might not know its shape: you can either provide parts of the dimensions or provide nothing at all.


There are other differences:

The interesting part is that placeholders are not the only things that can be fed: you can feed a value to a Variable and even to a constant.
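A small demonstration of that (a sketch; feed_dict can override any feedable tensor for the duration of a single run call):

import tensorflow as tf

p = tf.placeholder(tf.float32)
v = tf.Variable(2.0)
c = tf.constant(3.0)
total = p + v + c

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(total, {p: 1.0}))                    # 6.0: variable and constant keep their values
    print(sess.run(total, {p: 1.0, v: 10.0, c: 20.0}))  # 31.0: both are overridden for this run only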

Solution 6 - Tensorflow

Adding to the other answers, it is also explained very well in this MNIST tutorial on the TensorFlow website:

> We describe these interacting operations by manipulating symbolic variables. Let's create one:

> x = tf.placeholder(tf.float32, [None, 784])

> x isn't a specific value. It's a placeholder, a value that we'll input when we ask TensorFlow to run a computation. We want to be able to input any number of MNIST images, each flattened into a 784-dimensional vector. We represent this as a 2-D tensor of floating-point numbers, with a shape [None, 784]. (Here None means that a dimension can be of any length.)
>
> We also need the weights and biases for our model. We could imagine treating these like additional inputs, but TensorFlow has an even better way to handle it: Variable. A Variable is a modifiable tensor that lives in TensorFlow's graph of interacting operations. It can be used and even modified by the computation. For machine learning applications, one generally has the model parameters be Variables.

> W = tf.Variable(tf.zeros([784, 10]))

> b = tf.Variable(tf.zeros([10]))

> We create these Variables by giving tf.Variable the initial value of the Variable: in this case, we initialize both W and b as tensors full of zeros. Since we are going to learn W and b, it doesn't matter very much what they initially are.

Solution 7 - Tensorflow

TensorFlow uses three types of containers to store/execute the process (a minimal example of all three follows this list):

  1. Constants: constants hold fixed data.

  2. Variables: data values that will be changed, e.g. by functions such as a cost function.

  3. Placeholders: training/testing data that will be passed into the graph.
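A minimal sketch showing the three container types side by side:

import tensorflow as tf

c = tf.constant(5.0)            # constant: fixed data
v = tf.Variable(0.0)            # variable: updated during computation
p = tf.placeholder(tf.float32)  # placeholder: data passed in at run time

update = v.assign(v + p)        # an op that changes the variable

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(update, feed_dict={p: 2.0})
    print(sess.run(c + v))      # 7.0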

Solution 8 - Tensorflow

Example snippet:

import numpy as np
import tensorflow as tf

### Model parameters ###
W = tf.Variable([.3], dtype=tf.float32)
b = tf.Variable([-.3], dtype=tf.float32)

### Model input and output ###
x = tf.placeholder(tf.float32)
linear_model = W * x + b
y = tf.placeholder(tf.float32)

### loss ###
loss = tf.reduce_sum(tf.square(linear_model - y)) # sum of the squares

### optimizer ###
optimizer = tf.train.GradientDescentOptimizer(0.01)
train = optimizer.minimize(loss)

### training data ###
x_train = [1,2,3,4]
y_train = [0,-1,-2,-3]

### training loop ###
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)  # initialize W and b to the (deliberately wrong) starting values above
for i in range(1000):
    sess.run(train, {x: x_train, y: y_train})

As the name says, a placeholder is a promise to provide a value later.

Variables are simply the training parameters (e.g. W (matrix) and b (bias)), the same as the normal variables you use in your day-to-day programming, which the trainer updates/modifies on each run/step.

A placeholder doesn't require any initial value: when you created x and y above, TF didn't allocate any memory for them. Later, when you feed the placeholders in sess.run() using feed_dict, TensorFlow allocates appropriately sized memory for them (x and y); this unconstrained-ness allows us to feed data of any size and shape.


In a nutshell:

Variable - a parameter you want the trainer (e.g. GradientDescentOptimizer) to update after each step.

Placeholder demo:

a = tf.placeholder(tf.float32)
b = tf.placeholder(tf.float32)
adder_node = a + b  # + provides a shortcut for tf.add(a, b)

Execution:

print(sess.run(adder_node, {a: 3, b:4.5}))
print(sess.run(adder_node, {a: [1,3], b: [2, 4]}))

resulting in the output

7.5
[ 3.  7.]

In the first case, 3 and 4.5 are passed to a and b respectively, and then to adder_node, which outputs 7.5. In the second case, lists are fed and added element-wise: 1 + 2 gives 3, and 3 + 4 gives 7.



Solution 9 - Tensorflow

Variables

A TensorFlow variable is the best way to represent shared, persistent state manipulated by your program. Variables are manipulated via the tf.Variable class. Internally, a tf.Variable stores a persistent tensor. Specific operations allow you to read and modify the values of this tensor. These modifications are visible across multiple tf.Sessions, so multiple workers can see the same values for a tf.Variable. Variables must be initialized before use.

Example:

x = tf.Variable(3, name="x")
y = tf.Variable(4, name="y")
f = x*x*y + y + 2

This creates a computation graph. The variables (x and y) can be initialized and the function (f) evaluated in a tensorflow session as follows:

with tf.Session() as sess:
    x.initializer.run()
    y.initializer.run()
    result = f.eval()

print(result)  # 42

Placeholders

A placeholder is a node (same as a variable) whose value can be initialized in the future. These nodes basically output the value assigned to them at runtime. A placeholder node can be created using tf.placeholder(), to which you can provide arguments such as the type of the variable and/or its shape. Placeholders are extensively used for representing the training dataset in a machine learning model, as the training dataset keeps changing.

Example:

A = tf.placeholder(tf.float32, shape=(None, 3))
B = A + 5

Note: 'None' for a dimension means 'any size'.

with tf.Session() as sess:
    B_val_1 = B.eval(feed_dict={A: [[1, 2, 3]]})
    B_val_2 = B.eval(feed_dict={A: [[4, 5, 6], [7, 8, 9]]})

print(B_val_1)  # [[6. 7. 8.]]
print(B_val_2)  # [[ 9. 10. 11.]
                #  [12. 13. 14.]]

References:

  1. https://www.tensorflow.org/guide/variables
  2. https://www.tensorflow.org/api_docs/python/tf/placeholder
  3. O'Reilly: Hands-On Machine Learning with Scikit-Learn & Tensorflow

Solution 10 - Tensorflow

Think of a Variable in TensorFlow as the normal variables we use in programming languages: we initialize a variable, and we can modify it later as well. A placeholder, on the other hand, doesn't require an initial value. A placeholder simply allocates a block of memory for future use; later, we can use feed_dict to feed data into the placeholder. By default, a placeholder has an unconstrained shape, which allows you to feed tensors of different shapes in a session. You can constrain the shape by passing the optional argument shape, as I have done below.

import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, (3, 4))
y = x + 2

sess = tf.Session()
print(sess.run(y))  # will cause an error: x was never fed

s = np.random.rand(3, 4)
print(sess.run(y, feed_dict={x: s}))

While doing a machine-learning task, most of the time we are unaware of the number of rows but (let's assume) we do know the number of features or columns. In that case, we can use None.

x = tf.placeholder(tf.float32, shape=(None,4))

Now, at run time we can feed any matrix with 4 columns and any number of rows.

Also, placeholders are used for input data (they are the kind of variables we use to feed our model), whereas Variables are parameters, such as weights, that we train over time.

Solution 11 - Tensorflow

Placeholder:

  1. A placeholder is simply a variable that we will assign data to at a later date. It allows us to create our operations and build our computation graph, without needing the data. In TensorFlow terminology, we then feed data into the graph through these placeholders.

  2. Initial values are not required, but defaults can be provided with tf.placeholder_with_default.

  3. We have to provide a value at runtime, like:

    a = tf.placeholder(tf.int16)  # define placeholders
    b = tf.placeholder(tf.int16)
    add = tf.add(a, b)            # the op that will consume them

    # then use it in a session:

    sess.run(add, feed_dict={a: 2, b: 3})  # the values are assigned at runtime
    

Variable:

  1. A TensorFlow variable is the best way to represent shared, persistent state manipulated by your program.
  2. Variables are manipulated via the tf.Variable class. A tf.Variable represents a tensor whose value can be changed by running ops on it.

Example: tf.Variable("Welcome to tensorflow!!!")

Solution 12 - Tensorflow

Tensorflow 2.0 Compatible Answer: The concept of Placeholders, tf.placeholder, is not available in Tensorflow 2.x (>= 2.0) by default, as the default execution mode is Eager Execution.

However, we can still use them in Graph Mode (by disabling Eager Execution).

The equivalent command for a TF Placeholder in version 2.x is tf.compat.v1.placeholder.

The equivalent command for a TF Variable in version 2.x is tf.Variable; if you want to migrate code from 1.x to 2.x, the equivalent command is tf.compat.v2.Variable.
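A minimal sketch of using a placeholder under TF 2.x by falling back to graph mode (tf.compat.v1.disable_eager_execution is the switch for this):

import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # placeholders only work in graph mode

x = tf.compat.v1.placeholder(tf.float32, shape=[None])
y = x * 2.0

with tf.compat.v1.Session() as sess:
    print(sess.run(y, feed_dict={x: [1.0, 2.0]}))  # [2. 4.]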

Please refer to this Tensorflow Page for more information about Tensorflow Version 2.0.

Please refer to the Migration Guide for more information about migrating from versions 1.x to 2.x.

Solution 13 - Tensorflow

Think of a computation graph. In such a graph, we need an input node to pass our data to the graph; those nodes should be defined as Placeholders in TensorFlow.

Do not think of it as a general program in Python. You can write a Python program and do all the things that others explained in their answers using Variables alone, but for computation graphs in TensorFlow, to feed your data into the graph, you need to define those nodes as Placeholders.

Solution 14 - Tensorflow

For TF V1:

  1. A Constant has an initial value and won't change during the computation.

  2. A Variable has an initial value and can change during the computation (so it is good for parameters).

  3. A Placeholder has no initial value and won't change during the computation (so it is good for inputs, like prediction instances).

For TF V2, it's the same, but they try to hide Placeholder (graph mode is not preferred). A sketch of the distinction follows.
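A small sketch (TF V1 style); the Variable is the only one whose value changes across runs:

import tensorflow as tf

c = tf.constant(1.0)    # never changes
v = tf.Variable(1.0)    # changes via ops such as assign_add
step = v.assign_add(c)  # each run adds the constant to the variable

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(step))  # 2.0
    print(sess.run(step))  # 3.0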

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Content Type             | Original Author    | Original Content on Stackoverflow
Question                 | J.Doe              | View Question on Stackoverflow
Solution 1 - Tensorflow  | Sung Kim           | View Answer on Stackoverflow
Solution 2 - Tensorflow  | fabrizioM          | View Answer on Stackoverflow
Solution 3 - Tensorflow  | Lerner Zhang       | View Answer on Stackoverflow
Solution 4 - Tensorflow  | James              | View Answer on Stackoverflow
Solution 5 - Tensorflow  | Salvador Dali      | View Answer on Stackoverflow
Solution 6 - Tensorflow  | tagoma             | View Answer on Stackoverflow
Solution 7 - Tensorflow  | Karnakar Reddy     | View Answer on Stackoverflow
Solution 8 - Tensorflow  | Nabeel Ahmed       | View Answer on Stackoverflow
Solution 9 - Tensorflow  | Ankita Mishra      | View Answer on Stackoverflow
Solution 10 - Tensorflow | Muhammad Usman     | View Answer on Stackoverflow
Solution 11 - Tensorflow | Jitesh Mohite      | View Answer on Stackoverflow
Solution 12 - Tensorflow | Tensorflow Support | View Answer on Stackoverflow
Solution 13 - Tensorflow | Ali Salehi         | View Answer on Stackoverflow
Solution 14 - Tensorflow | Z.Wei              | View Answer on Stackoverflow