Holding variables constant during optimization

I have a TensorFlow computational graph for a loss tensor L that depends on 2 tf.Variables, A and B.

I'd like to run gradient ascent on variable A (A += gradient of L w.r.t. A) while holding B fixed, and vice versa: run gradient ascent on B (B += gradient of L w.r.t. B) while holding A fixed. How do I do this?


tf.stop_gradient(tensor) might be what you are looking for. The tensor is treated as a constant for gradient-computation purposes, so you can create two losses, each with a different part of the graph treated as constant.
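For example, a minimal sketch of that idea (loss_fn here is just a stand-in for however L is actually built from A and B):

import tensorflow as tf

A = tf.Variable(1.0)
B = tf.Variable(2.0)

def loss_fn(a, b):
    # placeholder for whatever computation produces L
    return tf.reduce_sum(a * b)

# loss_a: gradients flow into A only; B is treated as a constant
loss_a = loss_fn(A, tf.stop_gradient(B))
# loss_b: gradients flow into B only; A is treated as a constant
loss_b = loss_fn(tf.stop_gradient(A), B)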

The other (and often better) option is to create two optimizers but explicitly optimize only a subset of the variables with each one, e.g.

# train_a updates only A (B is held fixed); train_b updates only B (A is held fixed)
train_a = tf.train.GradientDescentOptimizer(0.1).minimize(loss_a, var_list=[A])
train_b = tf.train.GradientDescentOptimizer(0.1).minimize(loss_b, var_list=[B])

and you can alternate between running the two update ops.
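Since you want gradient ascent rather than descent, define loss_a and loss_b as the negative of L (minimizing -L is the same as ascending L). A rough sketch of the alternating loop in graph mode, using the train_a and train_b ops from above:

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(100):
        sess.run(train_a)  # updates only A; B stays fixed
        sess.run(train_b)  # updates only B; A stays fixed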