ambassador service stays "pending"

I'm currently running a fresh "all-in-one VM" (stacked master/worker) Kubernetes v1.21.1-00 cluster on Ubuntu Server 20 LTS, using

  • CRI-O as the container runtime
  • Calico for networking/security

I also installed the Kubernetes Dashboard (but I guess that's not relevant to my issue 😉). Following this guide for installing Ambassador: https://www.getambassador.io/docs/edge-stack/latest/topics/install/yaml-install/ I ran into the issue that the service is stuck in status "pending".
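
For reference, the install from that guide boils down to applying two manifests. As far as I remember the commands were roughly the following (check the guide itself in case the paths have changed since):

kubectl apply -f https://www.getambassador.io/yaml/aes-crds.yaml
kubectl wait --for condition=established --timeout=90s crd -lproduct=aes
kubectl apply -f https://www.getambassador.io/yaml/aes.yaml
kubectl -n ambassador wait --for condition=available --timeout=90s deploy -lproduct=aes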

kubectl get svc -n ambassador prints the following:

NAME               TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
ambassador         LoadBalancer   10.97.117.249    <pending>     80:30925/TCP,443:32259/TCP   5h
ambassador-admin   ClusterIP      10.101.161.169   <none>        8877/TCP,8005/TCP            5h
ambassador-redis   ClusterIP      10.110.32.231    <none>        6379/TCP                     5h
quote              ClusterIP      10.104.150.137   <none>        80/TCP                       5h

Changing the type from LoadBalancer to NodePort in the service makes it come up correctly, but I'm not sure about the implications that come with that. To be clear, I want to use Ambassador as an ingress component here - with my setup (only one machine), "real" load balancing might not be necessary.
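
For reference, I switched the type in place with a plain patch (editing the Service manifest works just as well):

kubectl -n ambassador patch svc ambassador -p '{"spec": {"type": "NodePort"}}'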

To cover the subdomain handling, I set up a wildcard DNS record pointing to my machine, i.e. a CNAME for *.k8s.my-domain.com which points to this host. I'm not sure whether this approach was the smartest way to set up an ingress.
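
To verify the wildcard resolves, any name under that zone should return the host, e.g. (the "test" label here is arbitrary):

dig +short test.k8s.my-domain.com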


It gets an IP address when you change the type to NodePort because it then uses a node's IP address. It can't do that with LoadBalancer because you are running the Kubernetes cluster on a VM and there is no load-balancer implementation available. The tutorial is written for Minikube or cloud users. You can see it here:

Note: If you are a Minikube user, Minikube does not natively support load balancers. Instead, use minikube service list

I also reproduced this on a cluster that was set up with kubeadm on my VMs.

kubectl get services -n ambassador

NAME               TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)                      AGE
ambassador         LoadBalancer   10.105.124.148   <pending>       80:30000/TCP,443:31375/TCP   1m15s
ambassador-admin   ClusterIP      10.101.160.245   <none>          8877/TCP,8005/TCP            1m15s
ambassador-redis   ClusterIP      10.110.123.244   <none>          6379/TCP                     1m15s

One of the simplest solutions that can help you is MetalLB, which was designed specifically to give bare-metal clusters load-balancing capability.

Here's a link to the MetalLB installation guide: https://metallb.universe.tf/installation/
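
Assuming a v0.9.x release (which uses the ConfigMap-based configuration shown below; newer MetalLB releases are configured via CRDs instead), the installation boiled down to roughly this - follow the linked page for the commands matching your version:

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.6/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.6/manifests/metallb.yaml
# only needed on first install
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"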

Keep in mind that:

The installation manifest does not include a configuration file. MetalLB’s components will still start, but will remain idle until you define and deploy a configmap.

The next part is to set up a ConfigMap with the protocol you want MetalLB to use. I used a simple layer 2 configuration; the address pool should be a range of free IP addresses on the same network as your nodes:

metallb-configmap.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250

And apply it with kubectl apply -f metallb-configmap.yaml.
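
You can also verify that MetalLB's controller and speaker pods are running in the metallb-system namespace:

kubectl get pods -n metallb-system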

Immediately after I did it, the ambassador service received an external IP address:

NAME               TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)                      AGE
ambassador         LoadBalancer   10.105.124.148   192.168.1.241   80:30000/TCP,443:31375/TCP   4m2s
ambassador-admin   ClusterIP      10.101.160.245   <none>          8877/TCP,8005/TCP            4m2s
ambassador-redis   ClusterIP      10.110.123.244   <none>          6379/TCP                     4m2s

A simple test with curl 192.168.1.241 works!
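
If your Mappings route by hostname, include a Host header in the test. For example, for the quote service from the quickstart (the /backend/ prefix and the hostname here are just examples - adjust them to your own Mapping; -k is needed because of the default self-signed certificate):

curl -k -H "Host: quote.k8s.my-domain.com" https://192.168.1.241/backend/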

If you have further questions about the ingress not working, consider asking them in a separate question, as per the community best practices.