Connecting to multiple AWS VPCs via a single VPN connection

I'm looking at locking down an existing AWS setup. At the moment everything (test, staging, and production) lives in a single default VPC, all within public subnets.

My plan is to separate the 3 environments into 3 different VPCs, utilising public and private subnets.

We also want to restrict access to the servers, as currently they have public IPs and anyone can reach them. Ideally I'd just do IP whitelisting; however, this is a remote team working from all over the place, so I have dynamic IPs to contend with.

My plan is to use an OpenVPN instance and get people to connect via that. From my extremely limited understanding (software engineer background, very limited networking experience), this should work.
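Roughly what I have in mind for the lockdown, sketched with boto3 (the security group IDs are made-up placeholders): the app servers would only accept SSH from the VPN instance's security group, so nobody needs a static IP.

```python
import boto3

ec2 = boto3.client('ec2')

# Placeholder IDs: sg-11111111 protects the app servers,
# sg-22222222 is attached to the OpenVPN instance.
ec2.authorize_security_group_ingress(
    GroupId='sg-11111111',
    IpPermissions=[{
        'IpProtocol': 'tcp',
        'FromPort': 22,
        'ToPort': 22,
        # Allow SSH only from members of the VPN's security group,
        # instead of whitelisting individual (dynamic) IPs.
        'UserIdGroupPairs': [{'GroupId': 'sg-22222222'}],
    }],
)
```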

My question is: we will have 3 VPCs, and the VPN instance will need to live in one of them. How best should I allow access to the other 2 VPCs via the VPN? Is that the best approach (direct access), or should I have a VPN connection per VPC? The latter seems like overkill, but I'm not sure.

Thanks.

Edit: Or another option is to just have a single VPC and isolate test, staging, and production using subnets? I've read that a few people do this, though it's apparently not ideal.


The best option is to utilize the VPN functionality that AWS already includes in its VPC setup; I'm speaking from having set up exactly what you're trying to do. This assumes that having your users connect to a central location, like an office or data center, is an option. If it's not, an expanded version of the setup below would still work, adding another VPN instance for people to connect to.

If you need the VPCs to talk to each other as well, you'd want to set up multiple VPN instances: at least one per VPC, and preferably more than one each for redundancy. To do that, though, you'd need another instance to control the failover and update AWS's route tables with the new path.
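That failover piece boils down to repointing the route when the active VPN instance stops responding. A minimal sketch with boto3, assuming placeholder IDs and that a separate monitoring loop decides when to trigger it:

```python
import boto3

ec2 = boto3.client('ec2')

def fail_over_vpn_route(route_table_id='rtb-11111111',
                        peer_cidr='10.1.0.0/16',
                        standby_instance_id='i-22222222'):
    """Repoint the route for the peer VPC's CIDR at the standby
    VPN instance. A monitoring instance would call this once it
    detects the primary tunnel instance is down."""
    ec2.replace_route(RouteTableId=route_table_id,
                      DestinationCidrBlock=peer_cidr,
                      InstanceId=standby_instance_id)
```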

Option 1:

A central VPN server in AWS for users to connect to, with tunnels created on it to route traffic to your other VPCs. You would need additional instances in the separate VPCs to terminate those tunnels.

Option 2:

A central VPN server in AWS for users to connect to, plus one or more VPN instances per VPC with tunnels set up for connectivity to the other VPCs.

Option 3:

AWS's built-in VPN functionality connecting to a central office or data center where the user-facing VPN lives, plus one or more VPN instances in AWS with tunnels set up for connectivity between VPCs.

Amazon unfortunately doesn't offer a managed VPN between VPCs, so in the cases where I'm suggesting a tunnel you'd of course need at least a pair of instances, one on each end, per tunnel.
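For Option 3, the AWS side of that office/data-center connection can be provisioned roughly like this. A sketch with boto3; the VPC ID, gateway IP, ASN, and office CIDR are all placeholders:

```python
import boto3

ec2 = boto3.client('ec2')

# Virtual private gateway on the VPC side (placeholder VPC ID).
vgw = ec2.create_vpn_gateway(Type='ipsec.1')['VpnGateway']
ec2.attach_vpn_gateway(VpnGatewayId=vgw['VpnGatewayId'], VpcId='vpc-11111111')

# Customer gateway: the public endpoint of your office or data
# center VPN device (placeholder IP and ASN).
cgw = ec2.create_customer_gateway(Type='ipsec.1',
                                  PublicIp='203.0.113.10',
                                  BgpAsn=65000)['CustomerGateway']

# The managed IPsec connection between the two, using static routing.
vpn = ec2.create_vpn_connection(
    Type='ipsec.1',
    CustomerGatewayId=cgw['CustomerGatewayId'],
    VpnGatewayId=vgw['VpnGatewayId'],
    Options={'StaticRoutesOnly': True})['VpnConnection']

# Static route back to the office network (placeholder CIDR).
ec2.create_vpn_connection_route(VpnConnectionId=vpn['VpnConnectionId'],
                                DestinationCidrBlock='192.168.0.0/24')
```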


I actually think the answer is found in this AWS documentation: Configurations with Routes to an Entire CIDR Block

Following this configuration, I was able to run everything through a single VPN instance.

I added a route for the peering connection to the route table associated with the subnet holding my VPN box, and did the same on the peered VPC's subnet where the boxes I want to access reside.
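That routing step, sketched with boto3 (the VPC, peering, and route table IDs are placeholders):

```python
import boto3

ec2 = boto3.client('ec2')

# Peer the VPC holding the VPN box with the VPC holding the target boxes.
pcx = ec2.create_vpc_peering_connection(
    VpcId='vpc-11111111',       # VPC with the VPN instance
    PeerVpcId='vpc-22222222',   # VPC with the boxes to reach
)['VpcPeeringConnection']
ec2.accept_vpc_peering_connection(
    VpcPeeringConnectionId=pcx['VpcPeeringConnectionId'])

# Route from the VPN box's subnet to the peer VPC's CIDR...
ec2.create_route(RouteTableId='rtb-11111111',
                 DestinationCidrBlock='10.2.0.0/16',
                 VpcPeeringConnectionId=pcx['VpcPeeringConnectionId'])
# ...and the return route on the peered VPC's subnet.
ec2.create_route(RouteTableId='rtb-22222222',
                 DestinationCidrBlock='10.1.0.0/16',
                 VpcPeeringConnectionId=pcx['VpcPeeringConnectionId'])
```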

After that, I was able to SSH into those boxes without issue.