Iteratively build a tensor in TensorFlow
In general, iterating over a dimension is very likely the wrong approach. In TF (as in MATLAB and NumPy), the goal is vectorization: describing your operations in a way that touches all elements of the batch at the same time.
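For contrast, here is a toy sketch (the doubling op and values are just placeholders, not from your code) of the loop style to avoid, next to its vectorized equivalent:

import tensorflow as tf

x = tf.constant([1., 2., 3., 4.])

# Iterative style: touch one element at a time and rebuild the tensor.
pieces = []
for i in range(4):
    pieces.append(x[i] * 2.0)
looped = tf.stack(pieces)  # shape (4,)

# Vectorized style: one op touches every element at once.
vectorized = x * 2.0       # same values, no Python loop

The vectorized form is a single op instead of a per-element chain, which is both faster and simpler to reason about.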
For example, let's say my dataset is composed of length 2 vectors, and I have a batch of 4 of them.
import tensorflow as tf

data = tf.convert_to_tensor([[1, 2], [3, 4], [5, 6], [7, 8]], tf.float32)
>>> data
<tf.Tensor: shape=(4, 2), dtype=float32, numpy=
array([[1., 2.],
       [3., 4.],
       [5., 6.],
       [7., 8.]], dtype=float32)>
If you wanted to append an element to each vector in a vectorized way, say a statistic such as the variance, you'd do this. Notice how you are constantly thinking about tensor shapes and dimensions and how to concat/append tensors. It's common to document tensor shapes in comments and even to assert them at runtime. Welcome to TF programming.
variances = tf.math.reduce_variance(data, axis=1, keepdims=True)  # shape (4, 1)
tf.debugging.assert_equal(tf.shape(variances), [4, 1])  # fail fast if the shape drifts
tf.concat(values=[data, variances], axis=1)  # (4, 2) + (4, 1) -> (4, 3)
<tf.Tensor: shape=(4, 3), dtype=float32, numpy=
array([[1.  , 2.  , 0.25],
       [3.  , 4.  , 0.25],
       [5.  , 6.  , 0.25],
       [7.  , 8.  , 0.25]], dtype=float32)>
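If a step genuinely can't be vectorized, prefer tf.map_fn over a hand-written loop. As a sketch, this reproduces the result above one row at a time (and is correspondingly slower):

def append_variance(row):
    var = tf.math.reduce_variance(row, keepdims=True)  # shape (1,)
    return tf.concat([row, var], axis=0)               # shape (3,)

tf.map_fn(append_variance, data)  # shape (4, 3), same values as the concat above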