OpenCV getMemoryShapes error when executing a forward pass of the object detection model

I am using this pre-trained TensorFlow model. I convert it to an ONNX file, then load it into OpenCV.

I am pretty sure it has something to do with the channels not lining up properly.

Here is the input shape of the model:

'shape': {'dim': [{'dimParam': 'unk__879'}, {'dimValue': '224'}, {'dimValue': '224'}, {'dimValue': '3'}]}

I have tried reading the image using cv2.IMREAD_GRAYSCALE, but this did not work.



import cv2 as cv

image = cv.imread('input/image_2.jpg')
resized = cv.resize(image, (224, 224))

blob = cv.dnn.blobFromImage(resized, 1, (224, 224), True)
print("First Blob: {}".format(blob.shape))

model.setInput(blob)
output = model.forward()

The error:

error                                     Traceback (most recent call last)
<ipython-input-158-dc5926754ea3> in <module>
     18 model.setInput(blob)
     19 # forward pass through the model to carry out the detection
---> 20 output = model.forward()

error: OpenCV(4.5.5) /io/opencv/modules/dnn/src/layers/convolution_layer.cpp:404: error: (-2:Unspecified error) Number of input channels should be multiple of 3 but got 224 in function 'getMemoryShapes'

Thanks in advance :)


You should check the input shapes. Your ONNX model expects NHWC input, (None, 224, 224, 3), but blobFromImage returns an NCHW blob, (None, 3, 224, 224), so the first convolution sees 224 input channels instead of 3, which is exactly what the error reports.

Try this instead:

import cv2
import numpy as np

img = cv2.imread('input/image_2.jpg')
img = cv2.resize(img, (224, 224))
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # OpenCV reads BGR; the model expects RGB
img = np.expand_dims(img, axis=0)  # add batch dim: (1, 224, 224, 3)
# scale to float32 in [0, 1]
input_data = cv2.normalize(img, None, 0, 1, cv2.NORM_MINMAX, dtype=cv2.CV_32F)
model.setInput(input_data)
output = model.forward()