What's the difference between a name scope and a variable scope in TensorFlow?

Tensorflow

Tensorflow Problem Overview


What's the differences between these functions?

> tf.variable_op_scope(values, name, default_name, initializer=None)

> Returns a context manager for defining an op that creates variables. This context manager validates that the given values are from the same graph, ensures that that graph is the default graph, and pushes a name scope and a variable scope.


> tf.op_scope(values, name, default_name=None)

> Returns a context manager for use when defining a Python op. This context manager validates that the given values are from the same graph, ensures that that graph is the default graph, and pushes a name scope.


> tf.name_scope(name)

> Wrapper for Graph.name_scope() using the default graph. See Graph.name_scope() for more details.


> tf.variable_scope(name_or_scope, reuse=None, initializer=None)

> Returns a context for variable scope. Variable scope allows to create new variables and to share already created ones while providing checks to not create or share by accident. For details, see the Variable Scope How To, here we present only a few basic examples.

Tensorflow Solutions


Solution 1 - Tensorflow

Let's begin with a short introduction to variable sharing. It is a mechanism in TensorFlow that allows sharing variables accessed in different parts of the code without passing references to the variable around.

The method tf.get_variable can be used with the name of the variable as the argument to either create a new variable with that name or retrieve the one that was created before. This is different from using the tf.Variable constructor, which creates a new variable every time it is called (potentially adding a suffix to the variable name if a variable with that name already exists).
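A minimal sketch of that difference (assuming TensorFlow is installed; it uses the tf.compat.v1 API so it also runs in graph mode under TF 2.x):

```python
import tensorflow as tf

tf1 = tf.compat.v1
tf1.disable_eager_execution()

with tf1.Graph().as_default():
    # tf.Variable always creates a new variable, silently uniquifying the name
    a = tf1.Variable(0, name="w")
    b = tf1.Variable(0, name="w")      # name clash: becomes "w_1"
    print(a.name, b.name)              # w:0 w_1:0

    # tf.get_variable refuses to create a second variable with the same name
    # unless reuse was explicitly requested
    v = tf1.get_variable("v", [1])
    duplicate_rejected = False
    try:
        tf1.get_variable("v", [1])
    except ValueError:
        duplicate_rejected = True
    print("duplicate rejected:", duplicate_rejected)
```

The ValueError is the safety check behind variable sharing: you must opt in to reuse rather than get a second variable by accident.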

It is for the purpose of the variable sharing mechanism that a separate type of scope (variable scope) was introduced.

As a result, we end up having two different types of scopes:

  • name scope, created using tf.name_scope

  • variable scope, created using tf.variable_scope

Both scopes have the same effect on all operations as well as on variables created using tf.Variable, i.e., the scope is added as a prefix to the operation or variable name.

However, name scope is ignored by tf.get_variable. We can see that in the following example:

with tf.name_scope("my_scope"):
    v1 = tf.get_variable("var1", [1], dtype=tf.float32)
    v2 = tf.Variable(1, name="var2", dtype=tf.float32)
    a = tf.add(v1, v2)

print(v1.name)  # var1:0
print(v2.name)  # my_scope/var2:0
print(a.name)   # my_scope/Add:0

The only way to place a variable accessed using tf.get_variable in a scope is to use a variable scope, as in the following example:

with tf.variable_scope("my_scope"):
    v1 = tf.get_variable("var1", [1], dtype=tf.float32)
    v2 = tf.Variable(1, name="var2", dtype=tf.float32)
    a = tf.add(v1, v2)

print(v1.name)  # my_scope/var1:0
print(v2.name)  # my_scope/var2:0
print(a.name)   # my_scope/Add:0

This allows us to easily share variables across different parts of the program, even within different name scopes:

with tf.name_scope("foo"):
    with tf.variable_scope("var_scope"):
        v = tf.get_variable("var", [1])
with tf.name_scope("bar"):
    with tf.variable_scope("var_scope", reuse=True):
        v1 = tf.get_variable("var", [1])
assert v1 == v
print(v.name)   # var_scope/var:0
print(v1.name)  # var_scope/var:0
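A common functional variant of the sharing pattern above uses reuse=tf.AUTO_REUSE, which creates the variable on the first call and reuses it afterwards (a sketch, assuming the tf.compat.v1 API):

```python
import tensorflow as tf

tf1 = tf.compat.v1
tf1.disable_eager_execution()

def shared_weight():
    # AUTO_REUSE: create "shared/w" on the first call, reuse it on later calls
    with tf1.variable_scope("shared", reuse=tf1.AUTO_REUSE):
        return tf1.get_variable("w", [1])

with tf1.Graph().as_default():
    w1 = shared_weight()
    w2 = shared_weight()
    print(w1.name, w2.name)  # shared/w:0 shared/w:0
```

This avoids having to call the scope once with reuse=False and once with reuse=True.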

UPDATE

As of version r0.11, op_scope and variable_op_scope are both deprecated and replaced by name_scope and variable_scope.

Solution 2 - Tensorflow

Both variable_op_scope and op_scope are now deprecated and should not be used at all.

Regarding the other two, I also had problems understanding the difference between variable_scope and name_scope (they looked almost the same) before I tried to visualize everything by creating a simple example:

import tensorflow as tf


def scoping(fn, scope1, scope2, vals):
    with fn(scope1):
        a = tf.Variable(vals[0], name='a')
        b = tf.get_variable('b', initializer=vals[1])
        c = tf.constant(vals[2], name='c')

        with fn(scope2):
            d = tf.add(a * b, c, name='res')
            
        print('\n  '.join([scope1, a.name, b.name, c.name, d.name]), '\n')
    return d

d1 = scoping(tf.variable_scope, 'scope_vars', 'res', [1, 2, 3])
d2 = scoping(tf.name_scope,     'scope_name', 'res', [1, 2, 3])

with tf.Session() as sess:
    writer = tf.summary.FileWriter('logs', sess.graph)
    sess.run(tf.global_variables_initializer())
    print(sess.run([d1, d2]))
    writer.close()

Here I create a function that creates some variables and constants and groups them in scopes (depending on the scope type provided). In this function, I also print the names of all the variables. After that, I execute the graph to get the resulting values and save the event files so they can be investigated in TensorBoard. If you run this, you will get the following:

scope_vars
  scope_vars/a:0
  scope_vars/b:0
  scope_vars/c:0
  scope_vars/res/res:0 

scope_name
  scope_name/a:0
  b:0
  scope_name/c:0
  scope_name/res/res:0 

You see a similar pattern if you open TensorBoard (note that b is outside the scope_name rectangle):

(TensorBoard graph screenshot: https://i.stack.imgur.com/MN3S3.png)


This gives you the answer:

Now you see that tf.variable_scope() adds a prefix to the names of all variables (no matter how you create them), ops and constants. On the other hand, tf.name_scope() ignores variables created with tf.get_variable() because it assumes that you know which variable, and in which scope, you want to use.

The documentation on Sharing variables tells you that

> tf.variable_scope(): Manages namespaces for names passed to tf.get_variable().

The same documentation provides more details on how variable scope works and when it is useful.

Solution 3 - Tensorflow

Namespaces are a way to organize names for variables and operators in a hierarchical manner (e.g. "scopeA/scopeB/scopeC/op1").

  • tf.name_scope creates namespace for operators in the default graph.

  • tf.variable_scope creates namespace for both variables and operators in the default graph.

  • tf.op_scope: same as tf.name_scope, but for the graph in which the specified variables were created.

  • tf.variable_op_scope: same as tf.variable_scope, but for the graph in which the specified variables were created.

Links to the sources above help to disambiguate this documentation issue.

This example shows that all types of scopes define namespaces for both variables and operators, with the following differences:

  1. scopes defined by tf.variable_op_scope or tf.variable_scope are compatible with tf.get_variable (the two other scopes are ignored by it)
  2. tf.op_scope and tf.variable_op_scope just select a graph from a list of specified variables to create a scope for; otherwise their behavior is equal to tf.name_scope and tf.variable_scope respectively
  3. tf.variable_scope and tf.variable_op_scope apply the specified or default initializer.
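Point 3 means that an initializer passed to the scope becomes the default for every tf.get_variable call inside it. A small sketch (assuming the tf.compat.v1 API in graph mode):

```python
import tensorflow as tf

tf1 = tf.compat.v1
tf1.disable_eager_execution()

with tf1.Graph().as_default():
    # the scope's initializer becomes the default for get_variable inside it
    with tf1.variable_scope("init_demo",
                            initializer=tf1.constant_initializer(0.5)):
        v = tf1.get_variable("v", [2])   # no explicit initializer needed

    with tf1.Session() as sess:
        sess.run(tf1.global_variables_initializer())
        vals = sess.run(v)
        print(vals)  # [0.5 0.5]
```

An explicit initializer argument on tf.get_variable would still override the scope default.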

Solution 4 - Tensorflow

Let's make it simple: just use tf.variable_scope. Quoting a TF developer:

> Currently, we recommend everyone to use variable_scope and not use name_scope except for internal code and libraries.

Besides the fact that variable_scope's functionality essentially extends that of name_scope, together they behave in a way that may surprise you:

with tf.name_scope('foo'):
  with tf.variable_scope('bar'):
    x = tf.get_variable('x', shape=())
    x2 = tf.square(x, name='x2')
print(x.name)
# bar/x:0
print(x2.name)
# foo/bar/x2:0

This behavior has its uses and justifies the coexistence of both scopes, but unless you know what you are doing, sticking to variable_scope only will save you some headaches.

Solution 5 - Tensorflow

As of API r0.11, op_scope and variable_op_scope are both deprecated. name_scope and variable_scope can be nested:

with tf.name_scope('ns'):
    with tf.variable_scope('vs'):  # scope creation
        v1 = tf.get_variable("v1", [1])     # v1.name = 'vs/v1:0'
        v2 = tf.Variable([2.0], name='v2')  # v2.name = 'ns/vs/v2:0'
        v3 = v1 + v2                        # v3.name = 'ns/vs/add:0'

Solution 6 - Tensorflow

You can think of them as two groups: variable_op_scope and op_scope take a set of variables as input and are designed to create operations. The difference is in how they affect the creation of variables with tf.get_variable:

def mysum(a,b,name=None):
	with tf.op_scope([a,b],name,"mysum") as scope:
		v = tf.get_variable("v", 1)
		v2 = tf.Variable([0], name="v2")
		assert v.name == "v:0", v.name
		assert v2.name == "mysum/v2:0", v2.name
		return tf.add(a,b)

def mysum2(a,b,name=None):
	with tf.variable_op_scope([a,b],name,"mysum2") as scope:
		v = tf.get_variable("v", 1)
		v2 = tf.Variable([0], name="v2")
		assert v.name == "mysum2/v:0", v.name
		assert v2.name == "mysum2/v2:0", v2.name
		return tf.add(a,b)

with tf.Graph().as_default():
	op = mysum(tf.Variable(1), tf.Variable(2))
	op2 = mysum2(tf.Variable(1), tf.Variable(2))
	assert op.name == 'mysum/Add:0', op.name
	assert op2.name == 'mysum2/Add:0', op2.name

Notice the name of the variable v in the two examples.

The same holds for tf.name_scope and tf.variable_scope:

with tf.Graph().as_default():
	with tf.name_scope("name_scope") as scope:
		v = tf.get_variable("v", [1])
		op = tf.add(v, v)
		v2 = tf.Variable([0], name="v2")
		assert v.name == "v:0", v.name
		assert op.name == "name_scope/Add:0", op.name
		assert v2.name == "name_scope/v2:0", v2.name

with tf.Graph().as_default():
	with tf.variable_scope("name_scope") as scope:
		v = tf.get_variable("v", [1])
		op = tf.add(v, v)
		v2 = tf.Variable([0], name="v2")
		assert v.name == "name_scope/v:0", v.name
		assert op.name == "name_scope/Add:0", op.name
		assert v2.name == "name_scope/v2:0", v2.name

You can read more about variable scope in the tutorial. A similar question was asked before on Stack Overflow.

Solution 7 - Tensorflow

From the last section of this page of the tensorflow documentation: [Names of ops in tf.variable_scope()][1]

> [...] when we do with tf.variable_scope("name"), this implicitly opens a tf.name_scope("name"). For example:

with tf.variable_scope("foo"):
  x = 1.0 + tf.get_variable("v", [1])
assert x.op.name == "foo/add"

>Name scopes can be opened in addition to a variable scope, and then they will only affect the names of the ops, but not of variables.

with tf.variable_scope("foo"):
    with tf.name_scope("bar"):
        v = tf.get_variable("v", [1])
        x = 1.0 + v
assert v.name == "foo/v:0"
assert x.op.name == "foo/bar/add"

> When opening a variable scope using a captured object instead of a string, we do not alter the current name scope for ops.

[1]: https://www.tensorflow.org/programmers_guide/variable_scope

Solution 8 - Tensorflow

Tensorflow 2.0 Compatible Answer: The explanations of Andrzej Pronobis and Salvador Dali are very detailed about the Functions related to Scope.

Of the scope functions discussed above, the ones still active as of now (17th Feb 2020) are variable_scope and name_scope.

The 2.0-compatible calls for the functions discussed above are specified below, for the benefit of the community.

Function in 1.x:

tf.variable_scope

tf.name_scope

Respective Function in 2.x:

tf.compat.v1.variable_scope

tf.name_scope (tf.compat.v2.name_scope if migrated from 1.x to 2.x)

For more information about migration from 1.x to 2.x, please refer to this Migration Guide.
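Putting the 2.x-compatible calls together in one sketch (assumes TF 2.x is installed; variable_scope only works in graph mode, hence the eager-execution switch):

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # variable_scope requires graph mode

with tf.Graph().as_default():
    # 1.x-style variable scope via the compat layer
    with tf.compat.v1.variable_scope("vs"):
        v = tf.compat.v1.get_variable("v", [1])

    # tf.name_scope exists natively in 2.x
    with tf.name_scope("ns"):
        c = tf.constant(1.0, name="c")

    print(v.name, c.name)  # vs/v:0 ns/c:0
```

The naming behavior matches the 1.x examples earlier on this page: variable scope prefixes get_variable names, name scope prefixes op and constant names.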

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

| Content Type | Original Author | Original Content on Stackoverflow |
| --- | --- | --- |
| Question | Xiuyi Yang | View Question on Stackoverflow |
| Solution 1 - Tensorflow | Andrzej Pronobis | View Answer on Stackoverflow |
| Solution 2 - Tensorflow | Salvador Dali | View Answer on Stackoverflow |
| Solution 3 - Tensorflow | Alexander Gorban | View Answer on Stackoverflow |
| Solution 4 - Tensorflow | P-Gn | View Answer on Stackoverflow |
| Solution 5 - Tensorflow | sgu | View Answer on Stackoverflow |
| Solution 6 - Tensorflow | fabrizioM | View Answer on Stackoverflow |
| Solution 7 - Tensorflow | Guillermo González de Garibay | View Answer on Stackoverflow |
| Solution 8 - Tensorflow | Tensorflow Support | View Answer on Stackoverflow |