How to convert a JPEG image into a JSON file in Google Cloud Machine Learning

The first step is to make sure that the graph you export has a placeholder and ops that can accept JPEG data. Note that CloudML assumes you are sending a batch of images, so we have to use tf.map_fn to decode and resize each image in the batch. Depending on the model, extra preprocessing may be required to normalize the data, etc. This is shown below:

import json

import tensorflow as tf

# Number of channels in the input image
CHANNELS = 3

# Dimensions of resized images (input to the neural net)
HEIGHT = 200
WIDTH = 200

# A placeholder for a batch of images
images_placeholder = tf.placeholder(dtype=tf.string, shape=(None,))

# The CloudML Prediction API always "feeds" the TensorFlow graph with
# dynamic batch sizes e.g. (?,).  decode_jpeg only processes scalar
# strings because it cannot guarantee a batch of images would have
# the same output size.  We use tf.map_fn to give decode_jpeg a scalar
# string from the dynamic batch.
def decode_and_resize(image_str_tensor):
  """Decodes jpeg string, resizes it and returns a uint8 tensor."""

  image = tf.image.decode_jpeg(image_str_tensor, channels=CHANNELS)

  # Note resize expects a batch_size, but tf.map_fn suppresses that index,
  # thus we have to expand then squeeze.  Resize returns float32 in the
  # range [0, uint8_max].
  image = tf.expand_dims(image, 0)
  image = tf.image.resize_bilinear(
      image, [HEIGHT, WIDTH], align_corners=False)
  image = tf.squeeze(image, axis=[0])
  image = tf.cast(image, dtype=tf.uint8)
  return image

decoded_images = tf.map_fn(
    decode_and_resize, images_placeholder, back_prop=False, dtype=tf.uint8)

# convert_image_dtype also scales [0, uint8_max] -> [0, 1).
images = tf.image.convert_image_dtype(decoded_images, dtype=tf.float32)

# Then shift images to [-1, 1) (useful for some models such as Inception)
images = tf.subtract(images, 0.5)
images = tf.multiply(images, 2.0)

# ...

Also, we need to be sure to properly mark the inputs. In this case, it's essential that the name of the input (the key in the map) ends in _bytes; when sending base64-encoded data, this lets the CloudML prediction service know it needs to decode the data:

inputs = {"image_bytes": images_placeholder.name}
tf.add_to_collection("inputs", json.dumps(inputs))
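
For completeness, the graph's outputs should be marked with the same collection convention. A minimal sketch, where my_model and the scores tensor are hypothetical stand-ins for your actual inference function and output:

# "my_model" and "scores" are hypothetical; substitute your model's
# real inference op and output tensor.
scores = my_model(images)
outputs = {"scores": scores.name}
tf.add_to_collection("outputs", json.dumps(outputs))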

The data format that the gcloud command expects is of the form:

{"image_bytes": {"b64": "dGVzdAo="}}

(Note: if image_bytes is the only input to your model, you can simplify each instance to just {"b64": "dGVzdAo="}.)

For example, to create this from a file on disk, you could try something like:

echo "{\"image_bytes\": {\"b64\": \"`base64 image.jpg`\"}}" > instances

And then send it to the service like so (each line of the instances file is one JSON instance):

gcloud beta ml predict --instances=instances --model=my_model

Please note that when sending data directly to the service, the body of the request needs to be wrapped in an "instances" list. So the gcloud command above actually sends the following to the service in the body of the HTTP request:

{"instances" : [{"image_bytes": {"b64": "dGVzdAo="}}]}

Just to pile onto the previous answer...

Google published a blog post on the image recognition task, along with some associated code that directly addresses your question and several more you may discover. It includes an images_to_json.py file to help with building the JSON request.