Static IP for outbound traffic over multiple servers

I have a cluster of servers (AWS EC2) that need to contact a specific host.

This host currently whitelists each of our servers' IPs.

The problem is that as we add more servers to the cluster, we have to contact many customers to whitelist the new IPs.

So I am wondering if there is a way to route all outbound requests through a specific IP (or a pre-defined set of IPs), so customers don't need to worry about whitelisting new servers as they come online.


Solution 1:

Option 1: set up a proxy server and configure your servers to reach the other host through it. That way all the traffic will appear to come from the proxy server's IP address.

The drawback is that it's not well suited to massive data transfers; the proxy becomes a bottleneck.
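As a minimal sketch, assuming Squid as the forward proxy (the subnet, domain, and port below are placeholders for your environment):

```conf
# /etc/squid/squid.conf (minimal sketch; adjust ACLs to your setup)
http_port 3128
acl cluster src 10.0.0.0/16          # your servers' subnet (assumption)
acl target dstdomain api.example.com # the whitelisted host (assumption)
http_access allow cluster target
http_access deny all
```

Each server then sends its outbound requests through the proxy, e.g. by exporting `https_proxy=http://proxy.internal:3128` (hostname assumed), and the customer only needs to whitelist the proxy's Elastic IP.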

Option 2: move your servers to a Private Subnet in your VPC and route all their outbound traffic through a NAT Gateway in the public subnet. Again, all the traffic will appear to come from the NAT Gateway’s IP address.

The drawback is that your servers won't be publicly accessible; I'm not sure whether you need that. You may be able to work around it with an Application or Network Load Balancer (ALB/NLB) in front of them.
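A rough sketch of the NAT Gateway setup with the AWS CLI (all IDs are placeholders):

```shell
# 1. Allocate an Elastic IP and create the NAT Gateway in a public subnet
aws ec2 allocate-address --domain vpc
aws ec2 create-nat-gateway --subnet-id subnet-PUBLIC --allocation-id eipalloc-XXXX

# 2. Point the private subnet's default route at the NAT Gateway
aws ec2 create-route --route-table-id rtb-PRIVATE \
    --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-XXXX
```

Customers then whitelist only the Elastic IP attached to the NAT Gateway, no matter how many instances sit behind it.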

Option 3: set up a VPN of some sort: either a simple client-to-server OpenVPN tunnel from each instance to the other host, or a more elaborate site-to-site VPN, which again could be OpenVPN- or IPsec-based.
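For the simple client-to-server case, a minimal OpenVPN client config might look like this (server address, certificate filenames, and the target IP are placeholders):

```conf
# client.ovpn (minimal sketch)
client
dev tun
proto udp
remote vpn.example.com 1194   # the other host's VPN endpoint (assumption)
ca ca.crt
cert client.crt
key client.key
# route only the target host through the tunnel (assumed address)
route 203.0.113.10 255.255.255.255
```

With this, traffic to the target arrives from inside the tunnel, so the remote side never has to care about your instances' public IPs at all.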

Hope that helps :)

Solution 2:

Implement IPv6. Reserve a dedicated subnet for these servers, document the /64 assigned to it for allow lists, and put as many hosts in it as you need.

If the objection is that v6 is more complicated: is it, really? One contiguous /64 for the allow list is far more convenient than an ever-growing set of one-off addresses.
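On AWS, enabling this is a couple of CLI calls (IDs are placeholders; `2001:db8::/32` is the documentation prefix, your VPC gets a real Amazon-provided range):

```shell
# Attach an Amazon-provided IPv6 block to the VPC, then carve a /64 for the subnet
aws ec2 associate-vpc-cidr-block --vpc-id vpc-XXXX \
    --amazon-provided-ipv6-cidr-block
aws ec2 associate-subnet-cidr-block --subnet-id subnet-XXXX \
    --ipv6-cidr-block 2001:db8:1234:5600::/64
```

Customers then add a single entry, e.g. `2001:db8:1234:5600::/64`, and every current and future host in that subnet is covered.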