How to print the value of a Tensor object in TensorFlow?

I have been using the introductory example of matrix multiplication in TensorFlow.

matrix1 = tf.constant([[3., 3.]])
matrix2 = tf.constant([[2.],[2.]])
product = tf.matmul(matrix1, matrix2)

When I print the product, it is displaying it as a Tensor object:

<tensorflow.python.framework.ops.Tensor object at 0x10470fcd0>

But how do I know the value of product?

The following doesn't help:

print product
Tensor("MatMul:0", shape=TensorShape([Dimension(1), Dimension(1)]), dtype=float32)

I know that graphs run on Sessions, but isn't there any way I can check the output of a Tensor object without running the graph in a session?


The easiest[A] way to evaluate the actual value of a Tensor object is to pass it to the Session.run() method, or call Tensor.eval() when you have a default session (i.e. in a with tf.Session(): block, or see below). In general[B], you cannot print the value of a tensor without running some code in a session.
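For example, a minimal sketch of both options, reusing the snippet from the question (TensorFlow 1.x graph mode assumed):

import tensorflow as tf

matrix1 = tf.constant([[3., 3.]])
matrix2 = tf.constant([[2.], [2.]])
product = tf.matmul(matrix1, matrix2)

with tf.Session() as sess:
    # Option 1: pass the tensor to Session.run()
    print(sess.run(product))   # [[ 12.]]
    # Option 2: Tensor.eval() uses the default session installed by the with-block
    print(product.eval())      # [[ 12.]]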

If you are experimenting with the programming model, and want an easy way to evaluate tensors, the tf.InteractiveSession lets you open a session at the start of your program, and then use that session for all Tensor.eval() (and Operation.run()) calls. This can be easier in an interactive setting, such as the shell or an IPython notebook, when it's tedious to pass around a Session object everywhere. For example, the following works in a Jupyter notebook:

with tf.Session() as sess:
    print(product.eval())
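If you prefer the tf.InteractiveSession pattern described above to a with-block, a minimal sketch (reusing the product tensor from the question) looks like this:

sess = tf.InteractiveSession()   # installs itself as the default session
print(product.eval())            # [[ 12.]] -- no explicit session argument needed
sess.close()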

This might seem silly for such a small expression, but one of the key ideas in TensorFlow 1.x is deferred execution: it's very cheap to build a large and complex expression, and when you want to evaluate it, the back-end (to which you connect with a Session) is able to schedule its execution more efficiently (e.g. executing independent parts in parallel and using GPUs).


[A]: To print the value of a tensor without returning it to your Python program, you can use the tf.print() operator, as Andrzej suggests in another answer. According to the official documentation:

To make sure the operator runs, users need to pass the produced op to tf.compat.v1.Session's run method, or to use the op as a control dependency for executed ops by specifying with tf.compat.v1.control_dependencies([print_op]), which is printed to standard output.

Also note that:

In Jupyter notebooks and colabs, tf.print prints to the notebook cell outputs. It will not write to the notebook kernel's console logs.
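A minimal, hedged sketch of the control-dependency pattern the documentation describes (assuming the tf.compat.v1 API, i.e. TF 1.14+ or TF 2.x; the constant a below is just an illustrative example):

a = tf.constant([[3., 3.]])
print_op = tf.print("value of a:", a)

# Any op created inside this block depends on print_op,
# so evaluating it also runs the print as a side effect.
with tf.control_dependencies([print_op]):
    b = a * 2.0

with tf.compat.v1.Session() as sess:
    sess.run(b)   # also prints "value of a: ..." to standard output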

[B]: You might be able to use the tf.get_static_value() function to get the constant value of the given tensor if its value is efficiently calculable.
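For example, a short sketch using the tensors from the question (tf.get_static_value() returns None whenever the value cannot be computed without running the graph):

print(tf.get_static_value(matrix1))   # [[3. 3.]] as a NumPy array, no session needed
print(tf.get_static_value(product))   # likely None: MatMul is not constant-folded by this helper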


While other answers are correct that you cannot print the value until you evaluate the graph, they do not talk about one easy way of actually printing a value inside the graph, once you evaluate it.

The easiest way to see a value of a tensor whenever the graph is evaluated (using run or eval) is to use the Print operation as in this example:

# Initialize session
import tensorflow as tf
sess = tf.InteractiveSession()

# Some tensor we want to print the value of
a = tf.constant([1.0, 3.0])

# Add print operation
a = tf.Print(a, [a], message="This is a: ")

# Add more elements of the graph using a
b = tf.add(a, a)

Now, whenever we evaluate the whole graph, e.g. using b.eval(), we get:

I tensorflow/core/kernels/logging_ops.cc:79] This is a: [1 3]
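For instance, evaluating b in the interactive session above triggers the message as a side effect (note that tf.Print writes to standard error, so in a notebook the message typically shows up in the server console rather than in the cell output):

print(b.eval())
# I tensorflow/core/kernels/logging_ops.cc:79] This is a: [1 3]
# [ 2.  6.]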

Reiterating what others said, it's not possible to check the values without running the graph.

A simple snippet for anyone looking for an easy example of printing values is below. The code can be executed without any modification in an IPython notebook.

import tensorflow as tf

#define a variable to hold normal random values 
normal_rv = tf.Variable(tf.truncated_normal([2, 3], stddev=0.1))

#initialize the variable
init_op = tf.initialize_all_variables()

#run the graph
with tf.Session() as sess:
    sess.run(init_op) #execute init_op
    #print the random values that we sample
    print(sess.run(normal_rv))

Output:

[[-0.16702934  0.07173464 -0.04512421]
 [-0.02265321  0.06509651 -0.01419079]]

No, you cannot see the content of the tensor without running the graph (doing session.run()). The only things you can see are the following (a short sketch demonstrating them comes after the list):

  • the dimensionality of the tensor (though I assume it is not hard to calculate it for the operations that TF supports)
  • the name of the operation that will be used to generate the tensor (e.g. transpose_1:0, random_uniform:0)
  • the type of the elements in the tensor (e.g. float32)
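For instance, with the product tensor from the question, all of these are available before any session is created (the exact names depend on how the graph was built):

print(product.shape)     # (1, 1) -- the statically inferred shape
print(product.dtype)     # <dtype: 'float32'>
print(product.name)      # MatMul:0 -- the op name plus the output index
print(product.op.type)   # MatMul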

I have not found this in the documentation, but I believe that the values of the variables (and of some of the constants) are not calculated at the time of assignment.


Take a look at this example:

import tensorflow as tf
from datetime import datetime
dim = 7000

The first example, where I just define a Tensor of random numbers, runs in approximately the same time irrespective of dim (0:00:00.003261):

startTime = datetime.now()
m1 = tf.truncated_normal([dim, dim], mean=0.0, stddev=0.02, dtype=tf.float32, seed=1)
print(datetime.now() - startTime)

In the second case, where the constant actually gets evaluated and the values are assigned, the time clearly depends on dim (0:00:01.244642):

startTime = datetime.now()
m1 = tf.truncated_normal([dim, dim], mean=0.0, stddev=0.02, dtype=tf.float32, seed=1)
sess = tf.Session()
sess.run(m1)
print(datetime.now() - startTime)

And you can make it even clearer by calculating something with the tensor (e.g. d = tf.matrix_determinant(m1)), keeping in mind that the time will scale roughly as O(dim^2.8).
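A hedged sketch of that extension (tf.matrix_determinant is the TF 1.x name; the timing will of course depend on your machine):

startTime = datetime.now()
m1 = tf.truncated_normal([dim, dim], mean=0.0, stddev=0.02, dtype=tf.float32, seed=1)
d = tf.matrix_determinant(m1)   # real numerical work on top of generating m1
sess = tf.Session()
sess.run(d)
print(datetime.now() - startTime)   # grows much faster with dim than the previous runs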

P.S. I found where it is explained in the documentation:

A Tensor object is a symbolic handle to the result of an operation, but does not actually hold the values of the operation's output.