Expose both TCP and UDP from the same ingress
Most cloud providers do not support UDP load balancing or mixed-protocol Services, though some offer cloud-specific workarounds for this limitation.
The DigitalOcean CPI does not support mixed protocols in the Service definition; it accepts TCP only for load-balancer Services. It is possible to request HTTP(S) and HTTP/2 ports with Service annotations.
Summary: the DO CPI, with its TCP-only limitation, is the current bottleneck. As long as it remains, implementing this feature will have no effect on the DO bills.
See more: do-mixed-protocols.
A simple solution that may solve your problem is to run a reverse proxy on a standalone server, using Nginx, and route UDP and TCP traffic directly to your Kubernetes Service.
Follow these two steps:
1. Create a NodePort Service for your application
2. Create a small server instance and run Nginx with LB config on it
Use the NodePort Service type, which exposes your application on your cluster nodes and makes it accessible through a node IP on a static port. This type supports multi-protocol Services. Read more about Services here.
apiVersion: v1
kind: Service
metadata:
  name: test-service
  namespace: default
spec:
  type: NodePort
  ports:
    - name: tcp
      protocol: TCP
      port: 5555
      targetPort: 5555
      nodePort: 30010
    - name: udp
      protocol: UDP
      port: 5556
      targetPort: 5556
      nodePort: 30011
  selector:
    tt: test
For example, this Service exposes the test pods' port 5555 through nodeIP:30010 over TCP, and port 5556 through nodeIP:30011 over UDP. Please adjust the ports to your needs; this is just an example.
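Assuming the manifest above is saved as test-service.yaml (the filename is arbitrary), a minimal sketch of applying and checking it could look like this; the PORT(S) column should list both the TCP and UDP node ports:

```shell
$ kubectl apply -f test-service.yaml
$ kubectl get service test-service
# Expect something like 5555:30010/TCP,5556:30011/UDP in the PORT(S) column
```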
Then create a small server instance and run Nginx with an LB config on it.
For this step, you can get a small server from any cloud provider.
Once you have the server, SSH into it and run the following to install Nginx (shown for a yum-based distribution; use your distribution's package manager otherwise):
$ sudo yum install nginx
In the next step, you will need your node IP addresses, which you can get by running:
$ kubectl get nodes -o wide
Note: if you have a private cluster without external access to your nodes, you will have to set up a point of entry for this (for example, a NAT gateway).
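If you prefer to script the IP collection, one way (assuming your nodes publish an ExternalIP address on the Node objects) is a jsonpath query:

```shell
# Print each node's ExternalIP, one per line
$ kubectl get nodes -o jsonpath='{range .items[*]}{.status.addresses[?(@.type=="ExternalIP")].address}{"\n"}{end}'
```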
Then you have to add the following to your nginx.conf (run $ sudo vi /etc/nginx/nginx.conf):
worker_processes 1;

events {
    worker_connections 1024;
}

stream {
    upstream tcp_backend {
        server <node ip 1>:30010;
        server <node ip 2>:30010;
        server <node ip 3>:30010;
        ...
    }

    upstream udp_backend {
        server <node ip 1>:30011;
        server <node ip 2>:30011;
        server <node ip 3>:30011;
        ...
    }

    server {
        listen 5555;
        proxy_pass tcp_backend;
        proxy_timeout 1s;
    }

    server {
        listen 5556 udp;
        proxy_pass udp_backend;
        proxy_timeout 1s;
    }
}
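Before starting Nginx it is worth validating the configuration; nginx -t checks the syntax without touching a running process. Note that the stream module must be available in your Nginx build (on some distributions it ships as a separate package, e.g. nginx-mod-stream):

```shell
$ sudo nginx -t
```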
Now you can start your Nginx server with:
$ sudo /etc/init.d/nginx start
If your Nginx server was already running before you applied the changes to the config file, you have to restart it. Execute the commands below:
$ sudo netstat -tulpn # Get the PID of nginx.conf program
$ sudo kill -2 <PID of nginx.conf>
$ sudo service nginx restart
And now you have a UDP/TCP load balancer, which you can access through <server IP>:5555 for TCP and <server IP>:5556 for UDP (the ports Nginx listens on).
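To verify the setup end to end, you can probe both ports from your workstation with netcat (substitute your proxy server's IP; exact flags vary slightly between nc variants):

```shell
$ nc -vz <server IP> 5555                     # TCP connectivity check
$ echo "ping" | nc -u -w1 <server IP> 5556    # send a test UDP datagram
```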
See more: tcp-udp-loadbalancer.