TensorFlow model prediction fails when run right after model training

I'm having trouble with my model prediction. Training works fine, but the prediction fails when it runs right after training. When I rerun my code, training is skipped because it's already done, and the prediction then works as it's supposed to. On Google I only find this error in the context of model training, so I guess those solutions don't apply to my case. I suspect the reason for the error is that my video RAM is not entirely freed after training. That's why I tried the following, without success:

tf.keras.backend.clear_session()    # reset the global Keras state
tf.compat.v1.reset_default_graph()  # reset the TF1-style default graph
K.clear_session()                   # same as the first call, via the backend alias
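I also considered enabling memory growth, so TensorFlow allocates GPU memory on demand instead of reserving it all up front. This is just a sketch of the idea (it has to run before the GPU is first used, e.g. right after importing TensorFlow), and I have not verified that it fixes my case:

import tensorflow as tf

# Must run before the GPU is initialized.
for gpu in tf.config.list_physical_devices('GPU'):
    # Allocate GPU memory on demand instead of grabbing it all at startup.
    tf.config.experimental.set_memory_growth(gpu, True)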

Error traceback:

prediction = model.predict(x)[:, 0]#.flatten()  # flatten was needed now
  File "/home/max/PycharmProjects/Masterthesis/venv3-8-12/lib/python3.8/site-packages/keras/utils/traceback_utils.py", line 67, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "/home/max/PycharmProjects/Masterthesis/venv3-8-12/lib/python3.8/site-packages/tensorflow/python/framework/constant_op.py", line 106, in convert_to_eager_tensor
    return ops.EagerTensor(value, ctx.device_name, dtype)
tensorflow.python.framework.errors_impl.InternalError: Failed copying input tensor from /job:localhost/replica:0/task:0/device:CPU:0 to /job:localhost/replica:0/task:0/device:GPU:0 in order to run _EagerConst: Dst tensor is not initialized.

Do you have any ideas on how to solve the problem?

My Setup:

  • Python: 3.8.12
  • Tensorflow-gpu: 2.7.0
  • System: Manjaro Linux
  • Cuda: 11.5
  • GPU: NVIDIA GeForce GTX 980 Ti

My Code:

from tensorflow.keras.models import Model, load_model
from tensorflow.keras.layers import Input, LSTM, Dense, Dropout
from tensorflow.keras import backend as K
import tensorflow as tf
import h5py


def loss_function(y_true, y_pred):
    # Ratio of standard deviations and of sums between prediction and target.
    alpha = K.std(y_pred) / K.std(y_true)
    beta = K.sum(y_pred) / K.sum(y_true)
    # Distance of (alpha, beta) from the ideal point (1, 1).
    error = K.sqrt(K.square(1 - alpha) + K.square(1 - beta))

    return error


i = Input(shape=(171, 11))  # 171 time steps, 11 features per step
x = LSTM(100, return_sequences=True)(i)
x = LSTM(50)(x)
x = Dropout(0.1)(x)
out = Dense(1)(x)  # single regression output

model = Model(i, out)
model.compile(
    loss=loss_function,
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001))

with h5py.File("db.hdf5", 'r') as db_:
    r = model.fit(
        db_["X_train"][...],
        db_["Y_train"][...],
        epochs=1,
        batch_size=64,
        verbose=1,
        shuffle=True)
model.save("model.h5")

model = load_model("model.h5", compile=False)

with h5py.File("db.hdf5", 'r') as db:
    x = db["X_val"][...]
    y = db["Y_val"][...].flatten()
    prediction = model.predict(x)[:, 0].flatten()
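For completeness, one workaround I considered but have not verified (assuming the failure comes from copying the whole validation array to the GPU at once) is predicting in smaller slices; CHUNK is an arbitrary size I picked for illustration:

import numpy as np

CHUNK = 1024  # arbitrary slice size, purely illustrative
parts = [model.predict(x[s:s + CHUNK]) for s in range(0, len(x), CHUNK)]
prediction = np.concatenate(parts)[:, 0].flatten()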

I found the solution to my problem. Since I'm using a custom loss function, I needed to pass that loss function to load_model via the custom_objects argument when loading the model again. I accomplished this by changing this line

model = load_model("model.h5", compile=False)

to this one

model = load_model("model.h5", custom_objects={"loss_function": loss_function})