Validation accuracy and training accuracy not improving after applying transfer learning
The idea behind transfer learning is that you concatenate new trainable layers to the end of a pre-trained model, freeze the pre-trained layers, and train only the new layers. When you instead add new layers to the beginning of the pre-trained model and train the whole network, you are essentially overwriting the pre-trained weights and throwing away what the model had learned.
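A minimal sketch of that pattern in Keras (the choice of MobileNetV2, the input shape, and the layer sizes here are illustrative assumptions, not taken from your code):

```python
import tensorflow as tf

# Load a pre-trained convolutional base without its classification head.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3),
    include_top=False,
    weights="imagenet",
)
base.trainable = False  # freeze the pre-trained weights

# Append new trainable layers to the END of the frozen base.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),  # your task's class count
])

model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
# model.fit(train_ds, validation_data=val_ds, epochs=5)
```

During training, gradients only update the pooling/dense head; the frozen base keeps its ImageNet features intact.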
It is possible to add preprocessing layers (or any layer with no trainable parameters, i.e., nothing for back-propagation to update) to the beginning, but in your case you have prepended an entire DNN.
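For contrast, here is a sketch of the acceptable case, continuing from the snippet above (`base` is the frozen MobileNetV2 defined there). `Rescaling` has no trainable parameters, so prepending it cannot overwrite anything the base learned:

```python
inputs = tf.keras.Input(shape=(224, 224, 3))
x = tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1.0)(inputs)  # map pixels to [-1, 1]
x = base(x, training=False)  # frozen pre-trained body
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(10, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)
```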