The code below is giving an 'UnreadVariable' error in TensorFlow:

# importing libraries
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.losses import MSE

# creating keras model
model = Sequential()
model.add(Dense(8, input_shape=(1,), activation='tanh'))
model.add(Dense(4, activation='relu'))
model.add(Dense(1))

# defining optimization parameters
opt = Adam(learning_rate=1e-3)

# creating input and output variables for training
X = np.arange(1.,10.,1).reshape(-1,1)
Y = np.arange(2.,20.,2).reshape(-1,1)

# defining loss - only one iteration is performed
with tf.GradientTape() as tape:
    pred = model(X)
    loss = MSE(Y, pred)  # signature is MSE(y_true, y_pred)

# calculating gradients for loss w.r.t model parameters 
grads = tape.gradient(loss, model.trainable_variables)

# updating model parameters with above calculated gradients
opt.apply_gradients(zip(grads, model.trainable_weights))

I am getting the following error :

<tf.Variable 'UnreadVariable' shape=() dtype=int64, numpy=1>

I have tried tf.compat.v1.disable_eager_execution() to disable eager execution, but after that I can't extract the values of any tensor. I also don't know whether disabling eager execution actually solves the issue, since I can't print the gradients or losses.


This is not an error. `apply_gradients` returns the optimizer's internal step counter, and `<tf.Variable 'UnreadVariable' shape=() dtype=int64, numpy=1>` is simply how that variable is displayed when the return value is echoed in an interactive session. The update itself succeeds: if you print your trainable weights before and after applying the gradients, you will see that they are updated, even if only by a small push to the right or left:

grads = tape.gradient(loss, model.trainable_variables)
tf.print(model.trainable_variables)
opt.apply_gradients(zip(grads, model.trainable_variables))
tf.print(model.trainable_variables)
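A minimal self-contained sketch (assuming TensorFlow 2.x with eager execution left enabled) that makes the same point programmatically: it snapshots the weights, applies one gradient step, and then checks both that the weights moved and that the "UnreadVariable" is just the optimizer's step counter, which is now 1:

```python
import numpy as np
import tensorflow as tf

# Small model similar to the one in the question
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, input_shape=(1,), activation='tanh'),
    tf.keras.layers.Dense(1),
])
opt = tf.keras.optimizers.Adam(learning_rate=1e-3)

X = np.arange(1., 10., 1.).reshape(-1, 1)
Y = 2. * X

# One forward pass under the tape, then one gradient step
with tf.GradientTape() as tape:
    pred = model(X)
    loss = tf.reduce_mean(tf.keras.losses.MSE(Y, pred))

before = [w.numpy().copy() for w in model.trainable_variables]
grads = tape.gradient(loss, model.trainable_variables)
opt.apply_gradients(zip(grads, model.trainable_variables))
after = [w.numpy() for w in model.trainable_variables]

# The weights really did move...
changed = any(not np.allclose(b, a) for b, a in zip(before, after))
print("weights changed:", changed)

# ...and the variable shown in the "error" is just the step counter
print("optimizer iterations:", int(opt.iterations.numpy()))
```

Since eager execution is on, you can also inspect `grads` and `loss` directly with `print` or `tf.print` at any point, which is why disabling eager execution is unnecessary here.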