Prometheus not connected to Alertmanager in GKE
I installed kube-prometheus-stack 15.3.1 into a GKE cluster using Helm (in the "monitoring" namespace). I used the values.yaml to open up ingresses on some of the components and to add SMTP info and receiver details for Alertmanager. For the most part everything seems fine, except that Prometheus is firing a number of alerts and I'm not getting any alert emails.
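For reference, the relevant part of my values.yaml looked roughly like this (the SMTP details, addresses, and receiver name here are placeholders, not my real values):

alertmanager:
  config:
    global:
      smtp_smarthost: 'smtp.example.com:587'    # placeholder SMTP relay
      smtp_from: 'alertmanager@mydomain.com'
      smtp_auth_username: 'alerts'              # placeholder credentials
      smtp_auth_password: 'changeme'
    route:
      receiver: 'email-ops'
    receivers:
      - name: 'email-ops'
        email_configs:
          - to: 'ops@mydomain.com'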
One firing alert is:
PrometheusNotConnectedToAlertmanagers
Prometheus monitoring/prometheus-kube-prometheus-stack-prometheus-0 is not connected to any Alertmanagers
Another one is:
PrometheusOperatorSyncFailed
Controller alertmanager in monitoring namespace fails to reconcile 1 objects.
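That alert comes from the prometheus-operator itself, so its logs are where the reconcile error should show up (deployment name from the listing below):

kubectl logs -n monitoring deployment/kube-prometheus-stack-operator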
I've also tried opening an ingress to Alertmanager and pointing alerts.mydomain.com at it, but when I try any GET request (such as alerts.mydomain.com/v2/status) I always get a 502 server error.
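As far as I understand it, a 502 from the ingress means there is no healthy backend behind the service, which can be checked by listing the service's endpoints:

kubectl get endpoints kube-prometheus-stack-alertmanager -n monitoring

If the ENDPOINTS column is empty, the ingress has no pod to forward to.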
What do I need to do to get my Alertmanager working?
Here is the output of kubectl get pods,svc,daemonset,deployment,statefulset -n monitoring:
NAME                                                             READY   STATUS    RESTARTS   AGE
pod/kube-prometheus-stack-grafana-58f7fcb497-hm72h               2/2     Running   0          30h
pod/kube-prometheus-stack-kube-state-metrics-6d588499f5-d957b   1/1     Running   0          2d3h
pod/kube-prometheus-stack-operator-54f89674c9-k8ml7             1/1     Running   0          2d3h
pod/kube-prometheus-stack-prometheus-node-exporter-22vpd        1/1     Running   0          3h57m
pod/kube-prometheus-stack-prometheus-node-exporter-2qsl9        1/1     Running   0          3h57m
pod/kube-prometheus-stack-prometheus-node-exporter-4d27n        1/1     Running   0          7h36m
pod/kube-prometheus-stack-prometheus-node-exporter-7rlnk        1/1     Running   0          4h47m
pod/kube-prometheus-stack-prometheus-node-exporter-7xlf4        1/1     Running   0          4h51m
pod/kube-prometheus-stack-prometheus-node-exporter-9mfnt        1/1     Running   0          3h57m
pod/kube-prometheus-stack-prometheus-node-exporter-9zblf        1/1     Running   0          2d3h
pod/kube-prometheus-stack-prometheus-node-exporter-bdcjj        1/1     Running   0          2d3h
pod/kube-prometheus-stack-prometheus-node-exporter-bs54w        1/1     Running   0          4h47m
pod/kube-prometheus-stack-prometheus-node-exporter-fp95h        1/1     Running   0          2d3h
pod/kube-prometheus-stack-prometheus-node-exporter-h4zhw        1/1     Running   0          2d3h
pod/kube-prometheus-stack-prometheus-node-exporter-pz8js        1/1     Running   0          3h58m
pod/kube-prometheus-stack-prometheus-node-exporter-rrrhk        1/1     Running   0          27h
pod/kube-prometheus-stack-prometheus-node-exporter-rszlt        1/1     Running   0          2d3h
pod/kube-prometheus-stack-prometheus-node-exporter-s62wq        1/1     Running   0          4h47m
pod/kube-prometheus-stack-prometheus-node-exporter-w9dmb        1/1     Running   0          5h32m
pod/kube-prometheus-stack-prometheus-node-exporter-xqmxk        1/1     Running   0          4h51m
pod/prometheus-kube-prometheus-stack-prometheus-0                2/2     Running   1          30h

NAME                                                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
service/kube-prometheus-stack-alertmanager                NodePort    10.125.4.161    <none>        9093:30903/TCP   2d3h
service/kube-prometheus-stack-grafana                     NodePort    10.125.7.177    <none>        80:32444/TCP     2d3h
service/kube-prometheus-stack-kube-state-metrics          ClusterIP   10.125.2.56     <none>        8080/TCP         2d3h
service/kube-prometheus-stack-operator                    ClusterIP   10.125.4.171    <none>        443/TCP          2d3h
service/kube-prometheus-stack-prometheus                  NodePort    10.125.13.11    <none>        9090:30090/TCP   2d3h
service/kube-prometheus-stack-prometheus-node-exporter    ClusterIP   10.125.10.231   <none>        9100/TCP         2d3h
service/prometheus-operated                               ClusterIP   None            <none>        9090/TCP         2d3h

NAME                                                             DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/kube-prometheus-stack-prometheus-node-exporter   17        17        17      17           17          <none>          2d3h

NAME                                                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/kube-prometheus-stack-grafana               1/1     1            1           2d3h
deployment.apps/kube-prometheus-stack-kube-state-metrics    1/1     1            1           2d3h
deployment.apps/kube-prometheus-stack-operator              1/1     1            1           2d3h

NAME                                                           READY   AGE
statefulset.apps/prometheus-kube-prometheus-stack-prometheus   1/1     42h
I realised that the Alertmanager pod was missing even though its service was there. I found I could get the pod back by uninstalling the kube-prometheus-stack release, reinstalling it with the default values, and then upgrading it with my own values.
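Roughly this sequence, assuming the release is called kube-prometheus-stack and the chart comes from the usual prometheus-community repo:

helm uninstall kube-prometheus-stack -n monitoring
helm install kube-prometheus-stack prometheus-community/kube-prometheus-stack -n monitoring --version 15.3.1
helm upgrade kube-prometheus-stack prometheus-community/kube-prometheus-stack -n monitoring --version 15.3.1 -f values.yaml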
Now the PrometheusNotConnectedToAlertmanagers alert had stopped firing, but I still wasn't getting emails. I could now access Alertmanager through the ingress, and I could see that the config I had put in the Helm values file had not made it through to Alertmanager - it was still running the default config.
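One way to see what Alertmanager is actually running is to dump the config secret the chart manages (the secret name below assumes the operator's default naming for a release called kube-prometheus-stack):

kubectl get secret -n monitoring alertmanager-kube-prometheus-stack-alertmanager \
  -o jsonpath='{.data.alertmanager\.yaml}' | base64 -d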
I found I was having the issue described here, and checking the logs of the kube-prometheus-stack operator pod confirmed it. I needed to have a "null" receiver in my Alertmanager receivers list (which I had removed); without it, the operator fails to reconcile the config because the routing tree still references a receiver named "null".
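So the receivers section in values.yaml has to keep a receiver literally named "null" alongside the real one. Something along these lines (the receiver name and email address are placeholders again):

alertmanager:
  config:
    route:
      receiver: 'email-ops'
      routes:
        # the chart's default config routes the always-firing Watchdog alert here
        - receiver: 'null'
          match:
            alertname: Watchdog
    receivers:
      - name: 'null'      # must exist, or the operator fails to reconcile the config
      - name: 'email-ops'
        email_configs:
          - to: 'ops@mydomain.com'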