Kubernetes / kubectl - "A container name must be specified" but seems like it is?

I'm debugging log output from kubectl that states:

Error from server (BadRequest): a container name must be specified for pod postgres-operator-49202276-bjtf4, choose one of: [apiserver postgres-operator]

OK, so the error message is self-explanatory, but looking at my JSON template it should just create both of the specified containers, correct? What am I missing? (Please forgive my ignorance.)
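For reference, the error shows up when kubectl logs is pointed at the pod without naming a container; the command producing it is essentially something like this (pod name copied from the error message):

kubectl logs postgres-operator-49202276-bjtf4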

I'm using just a standard kubectl create -f command inside a shell script to create the Deployment from the JSON file (a rough sketch of that step follows the file). The JSON deployment file is as follows:

{
    "apiVersion": "extensions/v1beta1",
    "kind": "Deployment",
    "metadata": {
        "name": "postgres-operator"
    },
    "spec": {
        "replicas": 1,
        "template": {
            "metadata": {
                "labels": {
                    "name": "postgres-operator"
                }
            },
            "spec": {
                "containers": [{
                    "name": "apiserver",
                    "image": "$CCP_IMAGE_PREFIX/apiserver:$CO_IMAGE_TAG",
                    "imagePullPolicy": "IfNotPresent",
                    "env": [{
                        "name": "DEBUG",
                        "value": "true"
                    }],
                    "volumeMounts": [{
                        "mountPath": "/config",
                        "name": "apiserver-conf",
                        "readOnly": true
                    }, {
                        "mountPath": "/operator-conf",
                        "name": "operator-conf",
                        "readOnly": true
                    }]
                }, {
                    "name": "postgres-operator",
                    "image": "$CCP_IMAGE_PREFIX/postgres-operator:$CO_IMAGE_TAG",
                    "imagePullPolicy": "IfNotPresent",
                    "env": [{
                        "name": "DEBUG",
                        "value": "true"
                    }, {
                        "name": "NAMESPACE",
                        "valueFrom": {
                            "fieldRef": {
                                "fieldPath": "metadata.namespace"
                            }
                        }
                    }, {
                        "name": "MY_POD_NAME",
                        "valueFrom": {
                            "fieldRef": {
                                "fieldPath": "metadata.name"
                            }
                        }
                    }],
                    "volumeMounts": [{
                        "mountPath": "/operator-conf",
                        "name": "operator-conf",
                        "readOnly": true
                    }]
                }],
                "volumes": [{
                    "name": "operator-conf",
                    "configMap": {
                        "name": "operator-conf"
                    }
                }, {
                    "name": "apiserver-conf",
                    "configMap": {
                        "name": "apiserver-conf"
                    }
                }]
            }
        }
    }
}
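For completeness, the create step looks roughly like the following. The file name and the variable values are placeholders, and envsubst is just one common way to expand $CCP_IMAGE_PREFIX and $CO_IMAGE_TAG before handing the JSON to kubectl:

# example values only; the real ones come from the script's environment
export CCP_IMAGE_PREFIX=myregistry CO_IMAGE_TAG=latest
# substitute the $VARS in the template and create the Deployment from stdin
envsubst < postgres-operator-deployment.json | kubectl create -f -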

Solution 1:

If a pod has more than one container, then you need to provide the name of the specific container.

In your case, there is a pod (postgres-operator-49202276-bjtf4) which has two containers (apiserver and postgres-operator). The following commands will provide the logs for each specific container:

kubectl logs deployment/postgres-operator -c apiserver


kubectl logs deployment/postgres-operator -c postgres-operator
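If you want the logs from both containers at once, recent versions of kubectl also accept an --all-containers flag instead of -c:

kubectl logs deployment/postgres-operator --all-containers=true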

Solution 2:

A container name must be given if the pod has more than one container (as mentioned in the answer above).

To find out which containers a pod has, we can list their images:

kubectl -n <NAMESPACE> get pods <POD_NAME> -o jsonpath="{..image}"
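If you want the container names themselves (the values to pass to -c) rather than the images, a jsonpath over the pod spec works as well, for example:

kubectl -n <NAMESPACE> get pods <POD_NAME> -o jsonpath="{.spec.containers[*].name}"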