Kubernetes managing many distinct UDP servers on GKE
Posting this community wiki answer to provide more of a baseline approach to this question rather than a definitive solution.
Feel free to edit and expand it.
You can expose your applications with `Services`. There are a few options, each differing from the others in some way:
- `ClusterIP`: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default `ServiceType`.
- `NodePort`: Exposes the Service on each Node's IP at a static port (the `NodePort`). A `ClusterIP` Service, to which the `NodePort` Service routes, is automatically created. You'll be able to contact the `NodePort` Service, from outside the cluster, by requesting `<NodeIP>:<NodePort>`.
- `LoadBalancer`: Exposes the Service externally using a cloud provider's load balancer. `NodePort` and `ClusterIP` Services, to which the external load balancer routes, are automatically created.
- `ExternalName`: Maps the Service to the contents of the `externalName` field (e.g. `foo.bar.example.com`), by returning a `CNAME` record with its value. No proxying of any kind is set up.

-- Kubernetes.io: Docs: Concepts: Services networking: Service: Publishing services service types
The documentation specific to exposing apps on Google Kubernetes Engine
can be found here:
- Cloud.google.com: Kubernetes Engine: Docs: How to: Exposing apps
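As the question is about UDP servers, it's worth noting that a `Service` can carry UDP by setting `protocol: UDP` on its port. A minimal sketch of a `LoadBalancer` Service for one such server (the name, labels and port numbers below are placeholders, not anything from your setup):

```yaml
# Hypothetical example: exposing a single UDP server with a LoadBalancer Service.
# Adjust the name, selector labels and ports to match your workload.
apiVersion: v1
kind: Service
metadata:
  name: udp-server-1
spec:
  type: LoadBalancer
  selector:
    app: udp-server-1   # must match the labels on your Pod/Deployment
  ports:
    - protocol: UDP
      port: 27015        # port the clients connect to on the load balancer
      targetPort: 27015  # port the container listens on
```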
Focusing specifically on some of the points included in the question:
I can use a NodePort service, and lose control over which port the client needs to connect to. That's an issue because the server registers itself with a server listing.
You can specify the `nodePort` port in the `Service` YAML (like `nodePort: 32137` or `nodePort: 30911`).

You could configure your application to listen on the same port as `nodePort`:
- Application is listening on port `30000`
- Service is using a `nodePort` with `port: 30000` (client/user should connect to this port) and `targetPort: 30000`.

In that case there would be no port changes.
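The "same port everywhere" setup above could look like the following sketch (names and the `30000` port are placeholders; `30000` happens to fall inside the default `NodePort` range of `30000-32767`):

```yaml
# Hypothetical example: a NodePort Service where port, targetPort and
# nodePort all match, so the client-facing port equals the container port.
apiVersion: v1
kind: Service
metadata:
  name: udp-server-1
spec:
  type: NodePort
  selector:
    app: udp-server-1   # must match the labels on your Pod/Deployment
  ports:
    - protocol: UDP
      port: 30000       # in-cluster Service port
      targetPort: 30000 # port the container listens on
      nodePort: 30000   # port exposed on every Node (default range 30000-32767)
```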
A side note!

By default the `nodePort` port range is blocked by the `GCP` firewall. You will need to create a rule (or set of rules) that would allow it.
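A sketch of such a rule with `gcloud`, assuming the `default` network and the default `NodePort` range (narrow the port range and add `--source-ranges` as appropriate for your setup; the rule name is a placeholder):

```shell
gcloud compute firewall-rules create allow-udp-nodeports \
    --network default \
    --allow udp:30000-32767
```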
I can use host networking. If my information is correct, that requires privileged containers, which is Definitely Not Good.
I would advise against using privileged containers unless there is a good reason behind it. Citing the official documentation:
The Privileged policy is purposely-open, and entirely unrestricted. This type of policy is typically aimed at system- and infrastructure-level workloads managed by privileged, trusted users.
-- Kubernetes.io: Docs: Concepts: Security: Pod security standard: Privileged
The port can be configured, so long as I know which port the server needs to run on before the pod starts up.
As you will have a multitude of single `Pods` (each with a separate `Deployment`), you could parametrize each of them. What I mean is that you can create a template and modify only the parts of your manifests that differ (like ports, environment variables, etc.).

You can pass environment variables to your `Pod` so that they can be used as parameters in your commands. You can also modify the command that the `Pod` starts with:
- Kubernetes.io: Docs: Tasks: Inject data application: Define environment variable container
- Kubernetes.io: Docs: Tasks: Inject data application: Define command argument container
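Putting both together, a per-server template could look like the sketch below. Everything here (names, image, binary, port) is a placeholder; the one real mechanism shown is that Kubernetes expands `$(VAR_NAME)` references to environment variables inside `command` and `args`:

```yaml
# Hypothetical template: SERVER_PORT is the only value you would change
# per server instance when stamping out copies of this manifest.
apiVersion: v1
kind: Pod
metadata:
  name: udp-server-1
  labels:
    app: udp-server-1
spec:
  containers:
    - name: server
      image: example.com/udp-server:latest  # placeholder image
      env:
        - name: SERVER_PORT
          value: "30000"
      command: ["/server"]                  # placeholder binary
      args: ["--port", "$(SERVER_PORT)"]    # $(SERVER_PORT) is expanded by Kubernetes
      ports:
        - containerPort: 30000
          protocol: UDP
```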