Expose port 80 and 443 on Google Container Engine without load balancer
Yep, through externalIPs on the service. Example service I've used:
apiVersion: v1
kind: Service
metadata:
  name: bind
  labels:
    app: bind
    version: 3.0.0
spec:
  ports:
    - port: 53
      protocol: UDP
  selector:
    app: bind
    version: 3.0.0
  externalIPs:
    - a.b.c.d
    - a.b.c.e
Please be aware that the IPs listed under externalIPs must be the nodes' internal IPs on GCE, not their external ones.
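To find those internal IPs, you can list the nodes with kubectl (a sketch; the INTERNAL-IP column is what goes into externalIPs):

```shell
# List cluster nodes with their internal and external IPs;
# the INTERNAL-IP column holds the values to use in externalIPs.
kubectl get nodes -o wide

# Or extract just the internal IPs:
kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}'
```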
In addition to ConnorJC's great and working solution: The same solution is also described in this question: Kubernetes - can I avoid using the GCE Load Balancer to reduce cost?
The "internalIp" refers to the compute instance's (a.k.a. the node's) internal ip (as seen on Google Cloud Platform -> Google Compute Engine -> VM Instances)
This comment gives a hint at why the internal and not the external ip should be configured.
Furthermore, after having configured the service for ports 80 and 443, I had to create a firewall rule allowing traffic to my instance node:
gcloud compute firewall-rules create your-name-for-this-fw-rule --allow tcp:80,tcp:443 --source-ranges=0.0.0.0/0
After this setup, I could access my service through http(s)://externalIp
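A quick way to verify the setup, assuming the service and firewall rule are in place (replace a.b.c.d with your node's external IP; traffic arrives on the external IP even though the Service lists the internal one):

```shell
# Check that the node answers on ports 80 and 443
curl -I http://a.b.c.d
curl -kI https://a.b.c.d   # -k skips cert verification, useful with self-signed certs
```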
If you only have exactly one pod, you can use hostNetwork: true
to achieve this:
apiVersion: apps/v1    # apps/v1beta1 has been removed; apps/v1 also requires a selector
kind: Deployment
metadata:
  name: caddy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: caddy
  template:
    metadata:
      labels:
        app: caddy
    spec:
      hostNetwork: true # <---------
      containers:
        - name: caddy
          image: your_image
          env:
            - name: STATIC_BACKEND # example env in my custom image
              value: $(STATIC_SERVICE_HOST):80
Note that by doing this your pod will inherit the host's DNS resolver instead of Kubernetes', so you can no longer resolve cluster services by DNS name. For example, in the example above you cannot reach the static service at http://static. (If you need cluster DNS together with hostNetwork, set dnsPolicy: ClusterFirstWithHostNet.) You can still reach services by their cluster IP, which Kubernetes injects into environment variables.
This solution is better than using the service's externalIP because it bypasses kube-proxy, so you receive the correct source IP.
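To see the injected cluster-IP variables for yourself (the pod name below is illustrative; Kubernetes names them <SERVICE>_SERVICE_HOST and <SERVICE>_SERVICE_PORT):

```shell
# Print the service discovery variables inside the caddy pod
# (find your actual pod name with `kubectl get pods`).
kubectl exec caddy-7d4b9c-abcde -- printenv | grep SERVICE_
# e.g. STATIC_SERVICE_HOST=10.3.240.12 (cluster IP, value will differ)
```

Note that these variables are only set for services that existed when the pod started.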