TensorFlow ValueError: Cannot feed value of shape (64, 64, 3) for Tensor u'Placeholder:0', which has shape '(?, 64, 64, 3)'

I am new to TensorFlow and machine learning. I am trying to classify two objects, a cup and a pendrive (JPEG images). I have trained and exported a model.ckpt successfully. Now I am trying to restore the saved model.ckpt for prediction. Here is the script:

import tensorflow as tf
import math
import numpy as np
from PIL import Image
from numpy import array


# image parameters
IMAGE_SIZE = 64
IMAGE_CHANNELS = 3
NUM_CLASSES = 2

def main():
    image = np.zeros((64, 64, 3))
    img = Image.open('./IMG_0849.JPG')

    img = img.resize((64, 64))
    image = array(img).reshape(64,64,3)

    k = int(math.ceil(IMAGE_SIZE / 2.0 / 2.0 / 2.0 / 2.0)) 
    # Store weights for our convolution and fully-connected layers
    with tf.name_scope('weights'):
        weights = {
            # 5x5 conv, 3 input channels, 32 outputs each
            'wc1': tf.Variable(tf.random_normal([5, 5, 1 * IMAGE_CHANNELS, 32])),
            # 5x5 conv, 32 inputs, 64 outputs
            'wc2': tf.Variable(tf.random_normal([5, 5, 32, 64])),
            # 5x5 conv, 64 inputs, 128 outputs
            'wc3': tf.Variable(tf.random_normal([5, 5, 64, 128])),
            # 5x5 conv, 128 inputs, 256 outputs
            'wc4': tf.Variable(tf.random_normal([5, 5, 128, 256])),
            # fully connected, k * k * 256 inputs, 1024 outputs
            'wd1': tf.Variable(tf.random_normal([k * k * 256, 1024])),
            # 1024 inputs, 2 class labels (prediction)
            'out': tf.Variable(tf.random_normal([1024, NUM_CLASSES]))
        }

    # Store biases for our convolution and fully-connected layers
    with tf.name_scope('biases'):
        biases = {
            'bc1': tf.Variable(tf.random_normal([32])),
            'bc2': tf.Variable(tf.random_normal([64])),
            'bc3': tf.Variable(tf.random_normal([128])),
            'bc4': tf.Variable(tf.random_normal([256])),
            'bd1': tf.Variable(tf.random_normal([1024])),
            'out': tf.Variable(tf.random_normal([NUM_CLASSES]))
        }

    saver = tf.train.Saver()
    with tf.Session() as sess:
        saver.restore(sess, "./model.ckpt")
        print "...Model Loaded..."
        x_ = tf.placeholder(tf.float32, shape=[None, IMAGE_SIZE, IMAGE_SIZE, IMAGE_CHANNELS])
        y_ = tf.placeholder(tf.float32, shape=[None, NUM_CLASSES])
        keep_prob = tf.placeholder(tf.float32)

        init = tf.initialize_all_variables()

        sess.run(init)
        my_classification = sess.run(tf.argmax(y_, 1), feed_dict={x_: image})
        print 'Neural Network predicted', my_classification[0], "for your image"


if __name__ == '__main__':
    main()

When I run the above script for prediction I get the following error:

ValueError: Cannot feed value of shape (64, 64, 3) for Tensor u'Placeholder:0', which has shape '(?, 64, 64, 3)' 

What am I doing wrong? And how do I fix the shape of numpy array?


Solution 1:

image has a shape of (64,64,3).

Your input placeholder x_ has a shape of (?, 64, 64, 3).

The problem is that you're feeding the placeholder with a value of a different shape.

You have to feed it a value of shape (1, 64, 64, 3), i.e. a batch of 1 image.

Just reshape your image value into a batch of size one:

image = array(img).reshape(1,64,64,3)

P.S.: The fact that the input placeholder accepts a batch of images means that you can run predictions for a batch of images in parallel. You can try to read more than one image (N images) and then build a batch of N images, using a tensor with shape (N, 64, 64, 3).
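
For example, here is a minimal sketch of building such a batch with plain NumPy (the file names are hypothetical; each image is resized to the 64x64 input size used in the question):

import numpy as np
from PIL import Image

# Hypothetical list of image paths -- replace with your own files
paths = ['./cup.jpg', './pendrive.jpg']

batch = []
for p in paths:
    img = Image.open(p).resize((64, 64))   # resize to the model's input size
    batch.append(np.array(img))            # each entry has shape (64, 64, 3)
batch = np.stack(batch)                    # resulting shape is (N, 64, 64, 3)

# batch can now be fed to the placeholder, e.g. feed_dict={x_: batch}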

Solution 2:

Powder's comment may go undetected, as I missed it so many times myself. So, in the hope of making it more visible, I will reiterate his point.

Sometimes using image = array(img).reshape(a, b, c, d) will reshape just fine, but from experience my kernel crashes every time I try to use the new dimension in an operation. The safest thing to use is

np.expand_dims(img, axis=0)

It works perfectly every time. I just can't explain why. This link has a great explanation and examples regarding its usage.
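
For instance, a quick shape check (purely illustrative, with a dummy array standing in for a decoded image):

import numpy as np

img = np.zeros((64, 64, 3))             # stand-in for a (64, 64, 3) image
batched = np.expand_dims(img, axis=0)   # insert a new batch axis at position 0
print(batched.shape)                    # (1, 64, 64, 3)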