Detecting Kubernetes OOMKilled Events in GKE Logs

Solution 1:

Although the OOMKilled event isn't present in the logs, if you can detect that a pod was killed you can then run kubectl get pod <pod-name> -o go-template=... to determine the reason. As an example, straight from the docs:

[13:59:01] $ ./cluster/kubectl.sh  get pod -o go-template='{{range.status.containerStatuses}}{{"Container Name: "}}{{.name}}{{"\r\nLastState: "}}{{.lastState}}{{end}}'  simmemleak-60xbc
Container Name: simmemleak
LastState: map[terminated:map[exitCode:137 reason:OOM Killed startedAt:2015-07-07T20:58:43Z finishedAt:2015-07-07T20:58:43Z containerID:docker://0e4095bba1feccdfe7ef9fb6ebffe972b4b14285d5acdec6f0d3ae8a22fad8b2]]
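
A more targeted variant of the same idea is to print only the termination reason and exit code per container. This is a minimal sketch, not taken from the docs, and it assumes a pod named my-pod in the current namespace:

# Print each container's name and, if it was terminated, why and with what exit code
kubectl get pod my-pod -o go-template='{{range .status.containerStatuses}}{{.name}}{{": "}}{{if .lastState.terminated}}{{.lastState.terminated.reason}}{{" (exit code "}}{{.lastState.terminated.exitCode}}{{")"}}{{end}}{{"\n"}}{{end}}'

If the container was OOM-killed you should see exit code 137 with a reason of OOMKilled (rendered as "OOM Killed" in the older docs output above); containers that have never been restarted print only their name, because lastState is empty.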

If you're doing this programmatically, a better alternative to parsing kubectl output is to call the Kubernetes REST API directly, e.g. GET /api/v1/pods (or GET /api/v1/namespaces/{namespace}/pods for a single namespace). Ways of accessing the API are also described in the documentation.
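
As a hedged sketch of that route: kubectl proxy exposes the API server locally (on 127.0.0.1:8001 by default), after which a plain HTTP GET returns the same container status data as JSON. The namespace default and pod name my-pod below are placeholders:

# Expose the API server on localhost:8001
kubectl proxy &

# List pods in the default namespace; termination details live under
# .status.containerStatuses[].lastState.terminated (reason, exitCode, finishedAt)
curl -s http://localhost:8001/api/v1/namespaces/default/pods

# Or fetch a single pod directly
curl -s http://localhost:8001/api/v1/namespaces/default/pods/my-pod

From there, a monitoring job can poll or watch pods and flag any container whose lastState.terminated.reason is OOMKilled.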