How do I get the weights of a layer in Keras?
If you want to get the weights and biases of all layers, you can simply use:

for layer in model.layers:
    print(layer.get_config(), layer.get_weights())

This prints each layer's configuration along with the current values of its weights and biases.
If you want the weights directly returned as numpy arrays, you can use:
first_layer_weights = model.layers[0].get_weights()[0]
first_layer_biases = model.layers[0].get_weights()[1]
second_layer_weights = model.layers[1].get_weights()[0]
second_layer_biases = model.layers[1].get_weights()[1]
etc.
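For example, here is a minimal sketch (the layer sizes are arbitrary) showing the shapes you get back; note that get_weights() returns an empty list for layers without weights, such as Dropout:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

model = Sequential([Dense(16, input_shape=(5,)), Dense(1)])
w, b = model.layers[0].get_weights()
print(w.shape, b.shape)  # (5, 16) (16,) -- kernel and bias of the first Dense layer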
If you write:
dense1 = Dense(10, activation='relu')(input_x)
then dense1 is not a layer; it's the output of a layer. The layer is Dense(10, activation='relu').
So it seems you meant:
dense1 = Dense(10, activation='relu')
y = dense1(input_x)
Here is a full snippet (written for tf.keras, since the original tf.contrib.keras module was removed in TensorFlow 2):

import tensorflow as tf
from tensorflow.keras import layers

input_x = tf.keras.Input(shape=(10,), name='input_x')
dense1 = layers.Dense(10, activation='relu')
y = dense1(input_x)

weights = dense1.get_weights()
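Note that a layer only has weights once it has been built, i.e. once it has been called on an input; get_weights() then returns the kernel and bias as a list of numpy arrays:

print([v.shape for v in weights])  # [(10, 10), (10,)]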
If you want to see how the weights and biases of your layer change over time, you can add a callback to record their values at each training epoch.
Using a model like this, for example,

import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
model = Sequential([Dense(16, input_shape=train_inp_s.shape[1:]), Dense(12), Dense(6), Dense(1)])
add the callback via the callbacks kwarg during fitting:
gw = GetWeights()
model.fit(X, y, validation_split=0.15, epochs=10, batch_size=100, callbacks=[gw])
where the callback is defined by:

from tensorflow.keras.callbacks import Callback

class GetWeights(Callback):
    # Keras callback which collects values of weights and biases at each epoch
    def __init__(self):
        super(GetWeights, self).__init__()
        self.weight_dict = {}

    def on_epoch_end(self, epoch, logs=None):
        # this function runs at the end of each epoch
        # loop over each layer and get weights and biases
        for layer_i in range(len(self.model.layers)):
            w = self.model.layers[layer_i].get_weights()[0]
            b = self.model.layers[layer_i].get_weights()[1]
            print('Layer %s has weights of shape %s and biases of shape %s' % (
                layer_i, np.shape(w), np.shape(b)))
            # save all weights and biases inside a dictionary
            if epoch == 0:
                # create arrays to hold the weights and biases
                self.weight_dict['w_' + str(layer_i + 1)] = w
                self.weight_dict['b_' + str(layer_i + 1)] = b
            else:
                # stack new weights onto the previously-saved weights array
                self.weight_dict['w_' + str(layer_i + 1)] = np.dstack(
                    (self.weight_dict['w_' + str(layer_i + 1)], w))
                # stack new biases onto the previously-saved biases array
                self.weight_dict['b_' + str(layer_i + 1)] = np.dstack(
                    (self.weight_dict['b_' + str(layer_i + 1)], b))
This callback will build a dictionary with all the layer weights and biases, labeled by the layer numbers, so you can see how they change over time as your model is being trained. You'll notice the shape of each weight and bias array depends on the shape of the model layer. One weights array and one bias array are saved for each layer in your model. The third axis (depth) shows their evolution over time.
Here we used 10 epochs and a model with layers of 16, 12, 6, and 1 neurons:
for key in gw.weight_dict:
    print(str(key) + ' shape: %s' % str(np.shape(gw.weight_dict[key])))
w_1 shape: (5, 16, 10)
b_1 shape: (1, 16, 10)
w_2 shape: (16, 12, 10)
b_2 shape: (1, 12, 10)
w_3 shape: (12, 6, 10)
b_3 shape: (1, 6, 10)
w_4 shape: (6, 1, 10)
b_4 shape: (1, 1, 10)
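If you then want to see how a given weight evolves, you can slice along that third axis; here is a minimal sketch with matplotlib (the weight index picked here is arbitrary):

import matplotlib.pyplot as plt

w1 = gw.weight_dict['w_1']  # shape (5, 16, 10): kernel of layer 1 across 10 epochs
plt.plot(w1[0, 0, :])       # track a single weight entry over the epochs
plt.xlabel('epoch')
plt.ylabel('w_1[0, 0]')
plt.show()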
You can also use the layer name if the layer index number is confusing:

Weights:

model.get_layer(<<layer_name>>).get_weights()[0]

Biases:

model.get_layer(<<layer_name>>).get_weights()[1]
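For example, here is a minimal sketch where the layer names ('hidden' and 'out') are just illustrative choices:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

model = Sequential([Dense(16, input_shape=(5,), name='hidden'), Dense(1, name='out')])
out_weights = model.get_layer('out').get_weights()[0]  # kernel, shape (16, 1)
out_biases = model.get_layer('out').get_weights()[1]   # bias, shape (1,)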