Negative dimension size caused by subtracting 3 from 1 for 'Conv2D'
Your issue comes from the image_dim_ordering setting in keras.json.
From the Keras Image Processing doc:
dim_ordering: One of {"th", "tf"}. "tf" mode means that the images should have shape (samples, height, width, channels), "th" mode means that the images should have shape (samples, channels, height, width). It defaults to the image_dim_ordering value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be "tf".
Keras maps the convolution operation to the chosen backend (Theano or TensorFlow). However, the two backends made different choices for the ordering of the dimensions. If your batch contains N images of size HxW with C channels, Theano uses the NCHW ordering while TensorFlow uses the NHWC ordering.
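For illustration, here is a small NumPy sketch (not from the original answer; the batch and image sizes are made up) of what the two orderings look like and how to convert between them:
import numpy as np

# A batch of 16 RGB images of size 28x28 (illustrative values).
batch_nhwc = np.zeros((16, 28, 28, 3))  # TensorFlow-style: (samples, height, width, channels)
batch_nchw = np.zeros((16, 3, 28, 28))  # Theano-style: (samples, channels, height, width)

# Converting NHWC -> NCHW (and back) is just a transpose of the axes.
as_nchw = np.transpose(batch_nhwc, (0, 3, 1, 2))
assert as_nchw.shape == batch_nchw.shape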
Keras lets you choose whichever ordering you prefer and does the conversion to the backend's ordering behind the scenes. But if you choose image_dim_ordering="th", it expects Theano-style ordering (NCHW, the one you have in your code), and if image_dim_ordering="tf", it expects TensorFlow-style ordering (NHWC).
Since your image_dim_ordering is set to "tf", if you reshape your data to the TensorFlow style it should work:
X_train = X_train.reshape(X_train.shape[0], img_cols, img_rows, 1)
X_test = X_test.reshape(X_test.shape[0], img_cols, img_rows, 1)
and
input_shape=(img_cols, img_rows, 1)
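For reference, a minimal runnable sketch of the whole fix (the 28x28 size and the random dummy arrays are assumptions standing in for the question's data):
import numpy as np
from keras.models import Sequential
from keras.layers import Conv2D

img_rows, img_cols = 28, 28  # assumed input size

# Dummy grayscale data standing in for the question's X_train / X_test.
X_train = np.random.rand(100, img_rows * img_cols)
X_test = np.random.rand(20, img_rows * img_cols)

# TensorFlow-style (NHWC) reshape: the channel axis goes last.
X_train = X_train.reshape(X_train.shape[0], img_cols, img_rows, 1)
X_test = X_test.reshape(X_test.shape[0], img_cols, img_rows, 1)

model = Sequential()
model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(img_cols, img_rows, 1)))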
FWIW, I got this error repeatedly with some values of strides or kernel_size but not others, even with the backend and image ordering already set to TensorFlow's, and it disappeared once I added padding="same".
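As a rough sketch of why strides and kernel_size matter (the sizes below are illustrative, not from the comment), these are the standard per-dimension output-size formulas a convolution applies:
import math

def conv_output_size(input_size, kernel_size, stride, padding):
    # Standard output-size formulas for a single spatial dimension.
    if padding == 'same':
        return math.ceil(input_size / stride)
    # padding == 'valid' (the Keras default)
    return math.floor((input_size - kernel_size) / stride) + 1

print(conv_output_size(1, 3, 1, 'valid'))  # -1 -> "Negative dimension size caused by subtracting 3 from 1"
print(conv_output_size(1, 3, 1, 'same'))   # 1  -> no error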
Just add this:
from keras import backend as K
# Switch Keras to Theano-style (channels-first, NCHW) ordering so it matches the data.
K.set_image_dim_ordering('th')
I was also having the same problem, but each Conv3D layer I was using reduced the size of its input, so adding the parameter padding='same' when declaring the Conv2D/3D layer solved it. Here is the demo code:
model.add(Conv3D(32, kernel_size=(3, 3, 3), activation='relu', padding='same'))
Reducing the size of the filter can also solve the problem.
Actually, a Conv3D or Conv2D layer reduces the size of the input data. The error occurs when the next layer receives no input, or an input whose size is not appropriate for that layer. By padding we keep the output of Conv3D/Conv2D the same size as its input, so the next layer gets the input it expects.
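A minimal sketch of this effect (the 7x7 grayscale input is an assumed size, not from the original answer): each 3x3 convolution with the default 'valid' padding trims 2 from every spatial dimension, while padding='same' preserves it.
from keras.models import Sequential
from keras.layers import Conv2D

# Without padding: 7x7 -> 5x5 -> 3x3 -> 1x1. Adding a fourth 3x3 conv here would
# raise the exact "Negative dimension size caused by subtracting 3 from 1" error.
valid_model = Sequential()
valid_model.add(Conv2D(16, (3, 3), activation='relu', input_shape=(7, 7, 1)))
valid_model.add(Conv2D(16, (3, 3), activation='relu'))
valid_model.add(Conv2D(16, (3, 3), activation='relu'))

# With padding='same', every layer keeps the 7x7 spatial size, so the stack can go deeper.
same_model = Sequential()
same_model.add(Conv2D(16, (3, 3), activation='relu', padding='same', input_shape=(7, 7, 1)))
same_model.add(Conv2D(16, (3, 3), activation='relu', padding='same'))
same_model.add(Conv2D(16, (3, 3), activation='relu', padding='same'))

valid_model.summary()  # final spatial size: 1x1
same_model.summary()   # final spatial size: 7x7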
I faced the same problem; it was solved by checking K.image_data_format() and reshaping the input accordingly before the Conv2D layer:
from keras import backend as K

if K.image_data_format() == 'channels_first':
    # Theano-style ordering: (samples, channels, height, width)
    x_train = x_train.reshape(x_train.shape[0], 1, img_cols, img_rows)
    x_test = x_test.reshape(x_test.shape[0], 1, img_cols, img_rows)
    input_shape = (1, img_cols, img_rows)
else:
    # TensorFlow-style ordering: (samples, height, width, channels)
    x_train = x_train.reshape(x_train.shape[0], img_cols, img_rows, 1)
    x_test = x_test.reshape(x_test.shape[0], img_cols, img_rows, 1)
    input_shape = (img_cols, img_rows, 1)

model.add(Convolution2D(32, (3, 3), input_shape=input_shape, activation="relu"))