Is it good practice to initialize weights before training a neural network (in order to reduce randomness)?
I know that every time a neural network is trained, it is not possible to obtain exactly the same result. So, I would like to reduce the randomness as much as possible. Is it good practice to initialize weights before training a neural network? Are there any suggestions I should follow when writing the code?
Thanks in advance for any suggestions.
Solution 1:
Weights are always initialized before training; the question is how. There are multiple weight initialization methods, for example:
- Random initialization
- Zero initialization

Zero initialization might seem to remove the randomness, but it is not used in practice: when all weights start at the same value, every unit in a layer computes the same output and receives the same gradient update, so the units never differentiate (the symmetry problem). You have to use random initialization.
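You can see the symmetry problem directly. Here is a minimal sketch (assuming TensorFlow 2.x; the toy regression data is made up for the demonstration) that trains a small zero-initialized network and then prints the hidden kernel:

import numpy as np
import tensorflow as tf

# Toy regression data, made up for the demonstration.
x = np.random.rand(32, 3).astype("float32")
y = np.random.rand(32, 1).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(4, activation="sigmoid", kernel_initializer="zeros"),
    tf.keras.layers.Dense(1, kernel_initializer="zeros"),
])
model.compile(optimizer="sgd", loss="mse")
model.fit(x, y, epochs=5, verbose=0)

# Every column (unit) of the hidden kernel is identical: the four units
# received the same gradient at every step and never differentiated.
print(model.layers[0].kernel.numpy())

All four columns come out identical, so the hidden layer is no more expressive than a single unit, which is why zero initialization is avoided.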
You can, however, initialize weights randomly using a fixed seed. Here is an example using the Keras API:
import tensorflow as tf
from tensorflow.keras import layers

# Seeded initializers produce the same weights every time the model is built.
init_1 = tf.keras.initializers.RandomNormal(seed=1)
layer_1 = layers.Dense(
    units=64,
    kernel_initializer=init_1,
)
init_2 = tf.keras.initializers.RandomNormal(seed=2)
layer_2 = layers.Dense(
    units=2,
    kernel_initializer=init_2,
)
model = tf.keras.Sequential([layer_1, layer_2])
This code creates a two-layer DNN. The weights of each layer are initialized with a different seed (so the two layers do not get identical weights), but each layer receives the same initial weights every time the model is built.
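To check this, here is a minimal sketch (assuming TensorFlow 2.x; the input dimension of 10 is chosen arbitrarily) that builds the model twice and confirms the initial weights match:

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

def build_model():
    # Fresh initializer instances with the same seeds on every call.
    init_1 = tf.keras.initializers.RandomNormal(seed=1)
    init_2 = tf.keras.initializers.RandomNormal(seed=2)
    return tf.keras.Sequential([
        layers.Dense(units=64, kernel_initializer=init_1),
        layers.Dense(units=2, kernel_initializer=init_2),
    ])

model_a = build_model()
model_b = build_model()
model_a.build(input_shape=(None, 10))  # 10 input features, chosen arbitrarily
model_b.build(input_shape=(None, 10))

# Both builds start from exactly the same weights
# (biases default to zeros, which is also deterministic).
for w_a, w_b in zip(model_a.get_weights(), model_b.get_weights()):
    assert np.array_equal(w_a, w_b)
print("Initial weights are identical across builds.")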
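Note that seeding the initializers only pins down the initial weights; training still involves other sources of randomness, such as data shuffling, dropout, and non-deterministic GPU kernels. Assuming a recent TensorFlow (2.7+ for the first call, 2.8+ for the second), you can seed everything at once and optionally force deterministic ops:

import tensorflow as tf

# One call seeds Python's random module, NumPy, and TensorFlow (TF >= 2.7).
tf.keras.utils.set_random_seed(42)

# Optionally force deterministic kernels as well (TF >= 2.8);
# this usually comes with a performance cost.
tf.config.experimental.enable_op_determinism()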