Tensorflow batch loss spikes when restoring model for training from saved checkpoint?

I ran into this issue and found that I was re-initializing the graph variables when restoring the graph. That throws away all learned parameters and replaces them with whatever initialization values were originally specified for each tensor in the graph definition.

For example, if your model program uses tf.global_variables_initializer() to initialize variables, then whatever control logic decides that a saved checkpoint will be restored, make sure the restore path omits sess.run(tf.global_variables_initializer()) (see the sketch below).
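A minimal sketch of that branching, assuming a TF1.x-style tf.train.Saver; the checkpoint path and the restore_from_checkpoint flag are hypothetical placeholders for your own control logic:

    import tensorflow as tf

    # Minimal graph: a single trainable variable.
    w = tf.get_variable("w", shape=[1], initializer=tf.zeros_initializer())

    saver = tf.train.Saver()
    checkpoint_path = "/tmp/model.ckpt"    # hypothetical checkpoint location
    restore_from_checkpoint = True         # hypothetical control flag

    with tf.Session() as sess:
        if restore_from_checkpoint:
            # Restore learned parameters; running the initializer here would
            # overwrite them with the original initialization values.
            saver.restore(sess, checkpoint_path)
        else:
            # Fresh run: initialize variables from scratch.
            sess.run(tf.global_variables_initializer())
        # ... resume training from here ...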

This was a simple but costly mistake for me, so I hope it saves someone else a few grey hairs (or hairs, in general).