The TensorFlow library represents the computation to perform by linking operations into a computation graph. Once this computation graph is created, you can open a TensorFlow session and execute the computation graph to get the results. This procedure can be seen in the tensorflow_basic_op.py script, which performs a multiplication operation defined inside a computation graph as follows:
import tensorflow as tf

# Path to the folder where the TensorBoard logs will be saved:
logs_path = "./logs"
# Define placeholders:
X_1 = tf.placeholder(tf.int16, name="X_1")
X_2 = tf.placeholder(tf.int16, name="X_2")
# Define a multiplication operation:
multiply = tf.multiply(X_1, X_2, name="my_multiplication")
The values for placeholders are provided when the graph is run in a session, as demonstrated in the following code snippet:
# Start the session and run the operation with different inputs:
with tf.Session() as sess:
    summary_writer = tf.summary.FileWriter(logs_path, sess.graph)
    print("2 x 3 = {}".format(sess.run(multiply, feed_dict={X_1: 2, X_2: 3})))
    print("[2, 3] x [3, 4] = {}".format(sess.run(multiply, feed_dict={X_1: [2, 3], X_2: [3, 4]})))
As you can see, the computational graph is parametrized to accept external inputs, known as placeholders. In the same session, we perform two multiplications with different inputs. As the computational graph is a key concept in TensorFlow, visualizing it can help you both understand and debug it using TensorBoard, a visualization tool that comes with any standard TensorFlow installation. To visualize the computation graph with TensorBoard, you need to write the program's log files using tf.summary.FileWriter(), as shown previously. If you execute this script, the logs directory will be created in the same location as the script. To run TensorBoard, you should execute the following command:
$ tensorboard --logdir="./logs"
This will print a link (http://localhost:6006/) for you to open in your browser, where you will see the TensorBoard page, which can be seen in the following screenshot:
You can see the computation graph of the previous script. Additionally, as TensorFlow graphs can have many thousands of nodes, scopes can be created to simplify the visualization, and TensorBoard uses this information to define a hierarchy on the nodes in the graph. This idea is shown in the tensorflow_basic_ops_scope.py script, where we define two operations (addition and multiplication) inside the Operations scope as follows:
with tf.name_scope('Operations'):
    addition = tf.add(X_1, X_2, name="my_addition")
    multiply = tf.multiply(X_1, X_2, name="my_multiplication")
If you execute the script and repeat the previous steps, the computational graph shown in TensorBoard can be seen in the following screenshot:
Note that you can also use constants (tf.constant) and variables (tf.Variable) in your scripts. The difference between tf.Variable and tf.placeholder lies in when the values are supplied. As you have seen in the previous examples, with tf.placeholder, you do not have to provide an initial value; the values are specified at runtime with the feed_dict argument inside a session. On the other hand, if you use a tf.Variable, you have to provide an initial value when you declare it, and the variable must be initialized inside the session before it is used.
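The contrast between the three node types can be sketched as follows. This is a minimal illustration, not part of the chapter's scripts; the names (a, counter, x) are chosen for the example, and the tf.compat.v1 import is an assumption made so the snippet also runs under TensorFlow 2 (with TensorFlow 1.x, a plain import tensorflow as tf works the same way):

```python
# Illustrative sketch: constant vs. variable vs. placeholder.
# Assumes tf.compat.v1 so the TF1-style session API works on TF2 as well.
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

a = tf.constant(2, dtype=tf.int16, name="a")              # value fixed at graph-definition time
counter = tf.Variable(0, dtype=tf.int16, name="counter")  # requires an initial value
x = tf.placeholder(tf.int16, name="x")                    # value supplied at runtime via feed_dict

update = tf.assign(counter, counter + a)  # variables can be updated between runs
result = tf.multiply(x, counter, name="scaled")

with tf.Session() as sess:
    # Variables must be explicitly initialized before use:
    sess.run(tf.global_variables_initializer())
    sess.run(update)                                      # counter: 0 -> 2
    print(sess.run(result, feed_dict={x: 3}))             # prints 6
```

Running the script prints 6: the constant never changes, the variable holds state across sess.run() calls, and the placeholder is fed a fresh value on each run.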
For the sake of brevity, we are not going to show the computational graphs created by the next scripts, but visualizing them with TensorBoard is a recommended practice, because it will help you understand (and also validate) what computations are performed.