tensorflow add 'None' dimension to a tensor
If you have something like this:
import tensorflow as tf
tf.random.set_seed(123)
xl = tf.keras.layers.Input((221,))
embed_dim = xl.shape[-1]
w = tf.Variable(tf.random.truncated_normal(shape=(embed_dim,), stddev=0.01))  # shape: (221,)
xl_reshaped = tf.reshape(xl, [-1, 1, embed_dim])  # add a size-1 axis: (batch, 1, 221)
x_lw = tf.tensordot(xl_reshaped, w, axes=1)
model = tf.keras.Model(xl, x_lw)
example = tf.random.normal((2, 221))
print(model(example))
tf.Tensor(
[[-0.0661035 ]
[ 0.15439653]], shape=(2, 1), dtype=float32)
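As an aside, the reshape to [-1, 1, embed_dim] is just one of several equivalent ways to insert a size-1 (the "None"-style) axis. A minimal sketch, assuming the same (2, 221) input shape, showing that tf.reshape, tf.expand_dims, and tf.newaxis indexing all produce the same result:

```python
import tensorflow as tf

x = tf.random.normal((2, 221))

# Three equivalent ways to insert a size-1 axis at position 1:
a = tf.reshape(x, [-1, 1, 221])   # -1 infers the batch dimension
b = tf.expand_dims(x, axis=1)     # explicit new axis at position 1
c = x[:, tf.newaxis, :]           # NumPy-style indexing

print(a.shape, b.shape, c.shape)  # each is (2, 1, 221)
```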
Then the equivalent of that using tf.linalg.matmul would be something like this:
import tensorflow as tf
tf.random.set_seed(123)
xl = tf.keras.layers.Input((221,))
embed_dim = xl.shape[-1]
w = tf.Variable(tf.random.truncated_normal(shape=(embed_dim,), stddev=0.01))  # shape: (221,)
xl_expanded = tf.expand_dims(xl, axis=1)
w = tf.expand_dims(w, axis=1)  # (221,) -> (221, 1), so it is a valid matmul operand
x_lw = tf.squeeze(tf.linalg.matmul(xl_expanded, w), axis=1)
model = tf.keras.Model(xl, x_lw)
example = tf.random.normal((2, 221))
print(model(example))
tf.Tensor(
[[-0.0661035]
[ 0.1543966]], shape=(2, 1), dtype=float32)
Interestingly, there seems to be a small rounding difference between the two methods. Using the @ operator (xl_expanded @ w) also yields the same result as tf.linalg.matmul. In general, you should be able to use either method for your use case:
a = tf.constant([1, 2, 3, 4, 5, 6], shape=[2, 3], dtype=tf.float32)
b = tf.constant([7, 8, 9, 10, 11, 12], shape=[3, 2], dtype=tf.float32)
option1 = tf.tensordot(a, b, axes=1)
option2 = tf.linalg.matmul(a, b)
print(option1)
print(option2)
tf.Tensor(
[[ 58. 64.]
[139. 154.]], shape=(2, 2), dtype=float32)
tf.Tensor(
[[ 58. 64.]
[139. 154.]], shape=(2, 2), dtype=float32)