Remove nodes from graph or reset entire default graph

When working with the default global graph, is it possible to remove nodes after they've been added, or alternatively to reset the default graph to empty? When using TensorFlow interactively in IPython, I find myself having to restart the kernel repeatedly, and I would like to be able to experiment with graphs more easily.


Update 11/2/2016

tf.reset_default_graph()
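
For example, at the top of a notebook cell (a minimal sketch, assuming the TF 1.x API):

import tensorflow as tf

tf.reset_default_graph()        # wipe all nodes out of the default graph
a = tf.constant(1.0, name="a")  # rebuild from a clean slate; the name "a" is free again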

Old stuff

There's reset_default_graph, but it's not part of the public API (I think it should be; would someone like to file an issue on GitHub?)

My work-around to reset things is this:

import tensorflow as tf
from tensorflow.python.framework import ops

ops.reset_default_graph()
sess = tf.InteractiveSession()

By default, a session is constructed around the default graph, so dead nodes from earlier experiments stay reachable through it. To avoid this, you need to either reset the default graph or use an explicit graph.
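
As a quick illustration of that binding (a minimal sketch, TF 1.x API), a session created without arguments captures whatever graph is the default at construction time:

import tensorflow as tf

a = tf.constant(2.0)  # this op lands in the default graph
sess = tf.Session()   # same as tf.Session(graph=tf.get_default_graph())
assert sess.graph is tf.get_default_graph()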

  • To clear the default graph, you can use the tf.reset_default_graph function.

    tf.reset_default_graph()
    sess = tf.InteractiveSession()
    
  • You can also explicitly construct a graph and avoid using the default one. If you use a normal Session, you will need to build the graph fully before constructing the session. For InteractiveSession, you can just declare the graph and use it as a context to declare further changes:

    g = tf.Graph()
    sess = tf.InteractiveSession(graph=g)
    with g.as_default():
        # Put variable declarations and other TF operations
        # in the graph context; A and x below are example tensors
        A = tf.random_normal([3, 3])
        x = tf.random_normal([3, 1])
        b = tf.matmul(A, x)

    sess.run(b)
    

EDIT: For recent versions of TensorFlow (1.0+), the correct method is g.as_default().


IPython / Jupyter notebook cells keep state between runs, so nodes accumulate in the default graph every time a cell is re-executed.

Create a custom graph:

import tensorflow as tf

def main():
    # Define your model; the placeholder dtype/shape here are just examples
    data = tf.placeholder(tf.float32, shape=[None, 10])
    model = tf.layers.dense(data, 1)

with tf.Graph().as_default():
    main()

Once the block has run, the graph goes out of scope and is cleaned up, so each re-run starts with a fresh graph.
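
To see the scoping at work, here is a small check (assuming TF 1.x semantics):

import tensorflow as tf

with tf.Graph().as_default() as g:
    x = tf.constant(1.0)  # created inside the scoped graph

assert x.graph is g                     # the op belongs to the scoped graph
assert tf.get_default_graph() is not g  # the previous default is restored on exit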


Not sure if I faced the very same problem, but calling

tf.keras.backend.clear_session()

at the beginning of the cell in which the model (Keras, in my case) was constructed and trained helped to "cut the clutter" so only the current graph remains in the TensorBoard visualization after repeated runs of the same cell.
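
In practice the cell would look something like this (a minimal sketch; the model below is a hypothetical stand-in for whatever you are actually training):

import tensorflow as tf

tf.keras.backend.clear_session()  # drop graphs/state left over from previous runs of the cell

# Hypothetical minimal model, for illustration only
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")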

Environment: TensorFlow 2.0 (tensorflow-gpu==2.0.0b1) in Colab with built-in TensorBoard (using the %load_ext tensorboard trick).