Connect AWS Route53 domain name with K8s LoadBalancer Service
What I'm trying to do
Create a Kubernetes environment with a single API gateway service that is mapped to a DNS address.
What I have done:
1) I went to AWS Route53 service and created a subdomain.
2) That subdomain seems to have a static IP. I got this IP by pinging the domain name.
3) I have set up a Kubernetes cluster on AWS with kops.
4) I have a gateway service whose endpoints hit microservices within the k8s infrastructure. This service is of type `LoadBalancer`, where the `loadBalancerIP` is equal to the static IP from above.
The problem:
With the above setup, the service fails to create with `Failed to ensure load balancer for service default/gateway-service: LoadBalancerIP cannot be specified for AWS ELB`.
So then I go reading what look like pretty good resources about K8s Ingress (Also) and an Nginx reverse proxy service. (And this one at the end) (Also this one).
My error has been asked about before too, and again the answer seems to put another layer between my API gateway and the outside world.
Then after reading a lot about Nginx Ingress controllers, I'm really confused.
My questions
a) Is there a bigger reason to have another layer between the gateway and the outside world apart from compatibility?
b) Would what I tried work on Google Cloud Platform, or is this an AWS deployment-specific problem?
c) Nginx ingress controller... What is the difference between an Nginx reverse proxy and the Kubernetes Ingress service? Because to me the words seem to be used interchangeably here.
d) There seem to be so many ways to do this, what is the current best (and easiest) method?
EDIT:
I implemented option-1 of Jonah's answer. Here are the configs in case someone wants something to copy paste.
gateway-service.yaml:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: gateway-service
spec:
  ports:
    - name: "gateway"
      port: 80
      targetPort: 5000
  selector:
    app: "gateway"
  type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: "gateway"
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: "gateway"
    spec:
      containers:
        - image: <account_nr>.dkr.ecr.us-east-1.amazonaws.com/gateway
          imagePullPolicy: Always
          name: "gateway"
          ports:
            - containerPort: 5000
              protocol: "TCP"
```
Then, create the subdomain in AWS Route53:
1) Create the domain
2) New Record Set
3) Type A (IPv4)
4) Alias: yes
5) Select the Alias target which matches the Service's external endpoint (`kubectl describe services gateway-service | grep LoadBalancer`)
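The same alias record can also be created programmatically. Below is a minimal sketch that only builds the change batch you would submit to Route53 (for example via boto3's `change_resource_record_sets`); the hostnames and hosted zone IDs are hypothetical placeholders you would replace with values from your own account:

```python
# Sketch: build a Route53 change batch that aliases a subdomain to an ELB.
# All hostnames and zone IDs below are illustrative placeholders.

def alias_change_batch(subdomain, elb_hostname, elb_zone_id):
    """Return an UPSERT change batch for an A-record alias to an ELB."""
    return {
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": subdomain,
                "Type": "A",
                "AliasTarget": {
                    # Canonical hosted zone ID of the ELB itself,
                    # not the ID of your own hosted zone.
                    "HostedZoneId": elb_zone_id,
                    "DNSName": elb_hostname,
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    }

batch = alias_change_batch(
    "gateway.example.com.",
    "a1b2c3-1234567890.us-east-1.elb.amazonaws.com.",
    "ZEXAMPLEELBZONE",
)
print(batch["Changes"][0]["ResourceRecordSet"]["Type"])  # A
```

The console steps above do this lookup for you; the point of the sketch is just that an Alias target is identified by the ELB's DNS name plus the ELB's canonical hosted zone ID.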
There are five distinct pieces of infrastructure automation potentially at play:
- ip to node assignment
- dns name to ip mapping
- load balancing to member mapping
- kubernetes service ip to pod member mapping and sometimes to load balancer
- kubernetes ingress
Some of them can drive some of the others. They don't necessarily all play well together, and can compete with each other.
I haven't really looked at Amazon's kubernetes runtime, but outside of that, for doing the simple thing you want to do, I'm aware of at least 3 options:
- starting from kubernetes, create a service with type=LoadBalancer to have it create an ELB. This gives you a unique domain name that you can point a CNAME record at in Route53 to map your subdomain. ELB membership is updated using automation similar to what keeps services updated with pod IPs. There are some limitations around layer 4 and layer 7 request balancing.
- start from an ELB, add the k8s EC2 nodes as members of the ELB, and run ingress as a daemonset. There are a lot of variants on this, but it means the responsibility for keeping ELB membership correct is tied to the management of k8s on EC2, whether automated or manual. It does offer other points of control over layer 7 traffic routing.
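For the second option, the "ingress as a daemonset" piece might look roughly like the sketch below. The image and names are illustrative, not a specific supported manifest; `hostNetwork: true` is what lets the proxy bind ports on every node so an externally managed ELB can target the nodes directly:

```yaml
# Sketch only: run an ingress controller on every node so an ELB
# can forward to the nodes directly. Image/names are illustrative.
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
spec:
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      hostNetwork: true        # bind 80/443 on each node for the ELB
      containers:
        - name: nginx-ingress-controller
          image: <your-ingress-controller-image>
          ports:
            - containerPort: 80
            - containerPort: 443
```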
- starting from kubernetes, use a tool called route53-mapper to drive route53 configuration from annotations on the service resources.
https://github.com/kubernetes/kops/tree/master/addons/route53-mapper
This is a simpler version of the first one, including TLS, but it seems slightly insane to use it for TLS because it seems to require keeping certs in service annotations, rather than where they belong, in secrets.
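As a sketch of what option 3 looks like in practice (based on the route53-mapper README; verify the exact label and annotation keys against the version of the addon you deploy), the mapping is driven by a label and an annotation on the Service:

```yaml
# Sketch: a Service picked up by route53-mapper. Check the label and
# annotation keys against the addon's README before relying on this.
apiVersion: v1
kind: Service
metadata:
  name: gateway-service
  labels:
    dns: route53                      # opts this Service in to mapping
  annotations:
    domainName: "gateway.example.com" # record for the mapper to maintain
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 5000
  selector:
    app: gateway
```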
Responses:
Is there a bigger reason to have another layer between the gateway and the outside world apart from compatibility?
There is no requirement; this approach solves for having both the ELB and k8s owning automation. Generally, you don't want competing automation owners.
Would what I tried work on Google Cloud Platform, or is this an AWS deployment-specific problem?
gcloud automation is different, and its load balancers can be given ips, because it has separately managed ip allocation. So to some extent this is an AWS specific problem.
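To illustrate the difference: on GCP you can reserve a static address first (e.g. with `gcloud compute addresses create`) and then hand it to the Service. A hedged sketch, with a placeholder IP standing in for your reserved address:

```yaml
# Sketch: on GKE, loadBalancerIP is honored when the IP was reserved
# beforehand in the same region. The IP below is a placeholder.
apiVersion: v1
kind: Service
metadata:
  name: gateway-service
spec:
  type: LoadBalancer
  loadBalancerIP: 130.211.0.123   # your pre-reserved static IP
  ports:
    - port: 80
      targetPort: 5000
  selector:
    app: gateway
```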
Nginx ingress controller... What is the difference between an Nginx reverse proxy and the Kubernetes Ingress service? Because to me the words seem to be used interchangeably here.
The terms are used interchangeably here, but they name different things: one is an abstraction, the other is concrete.
Kubernetes Ingress is the abstraction that can be implemented in many different ways. Ingress comprises ingress resources, a controller, and a proxy which takes configuration. The controller watches the cluster for ingress resource changes, translates them into proxy-specific configuration, then reloads the proxy.
The nginx ingress controller is an implementation of this machinery using nginx. There are many others, using haproxy, and other proxies.
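To make the abstraction concrete: an Ingress resource is just a declarative set of routing rules like the sketch below (the hostname is illustrative, and it does nothing until some controller is running in the cluster to translate it into proxy configuration):

```yaml
# Sketch: an Ingress resource routing a hostname to the gateway Service.
# A controller (nginx, haproxy, ...) must be installed to act on it.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gateway-ingress
spec:
  rules:
    - host: gateway.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: gateway-service
              servicePort: 80
```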
There seem to be so many ways to do this, what is the current best (and easiest) method?
See above. There are probably other ways as well.