Managing a cluster of Linux computers behind firewalls
Pull updates, don't push
As you scale, it's going to become infeasible to push updates to all your products.
- You'll have to track every single customer, who might each have a different firewall configuration.
- You'll have to create incoming connections through each customer's firewall, which would require port forwarding or some similar mechanism. This is a security risk to your customers.
Instead, have your products 'pull' their updates periodically, and then you can add extra capacity server-side as you grow.
How?
This problem has already been solved, as you suggested. Here are several approaches I can think of; a rough sketch of each one follows the list.
- using apt: Use the built-in apt system with a custom PPA and sources list (see: How do I setup a PPA?).
  - Con: Unless you use a public hosting service like Launchpad, setting up your own apt PPA + packaging system is not for the faint of heart.
- using ssh: Generate an SSH key pair for each product, and add that device's key to your update servers. Then just have your software rsync/scp the files required.
  - Con: You have to track (and back up!) all the public keys for each product you send out.
  - Pro: More secure than a raw download, since the only devices that can access the updates are those with the key installed.
- raw download + signature check:
  - Post a signed update file somewhere (Amazon S3, an FTP server, etc.)
  - Your product periodically checks whether the update file has changed, then downloads it and verifies the signature.
  - Con: Depending on how you deploy this, the files may be publicly accessible (which may make your product easier to reverse engineer and hack).
- ansible: Ansible is a great tool for managing system configurations. It's in the same realm as Puppet/Chef, but it is agentless (it works over SSH and only needs Python on the managed machine) and is designed to be idempotent. If deploying your software would require a complicated bash script, I'd use a tool like this to make performing your updates less complicated.
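For the apt route, the client side is just a sources entry plus a periodic upgrade. A minimal sketch, assuming a hypothetical repository at apt.example.com, a package named acme-product, and a signing keyring you ship with the device image:

```
# Point apt at your repository (URL, suite, and keyring path are placeholders).
echo "deb [signed-by=/usr/share/keyrings/acme-archive.gpg] https://apt.example.com/ubuntu focal main" \
  | sudo tee /etc/apt/sources.list.d/acme.list

# Then, periodically (e.g. from cron), pull and apply updates to your package only.
sudo apt-get update
sudo apt-get install -y --only-upgrade acme-product
```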
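For the SSH route, the pull can be a single rsync over a per-device key, run from cron. A sketch, assuming a hypothetical update host updates.example.com, a dedicated updater account, and a key stored at /etc/acme/update_key:

```
#!/bin/sh
# Pull the current release over SSH using this device's own key.
rsync -az --delete \
  -e "ssh -i /etc/acme/update_key -o StrictHostKeyChecking=yes" \
  updater@updates.example.com:/srv/releases/current/ /opt/acme/staging/
# Verify and install from /opt/acme/staging/ afterwards (see the signing section below).
```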
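For the raw-download route, the client-side check could look roughly like this. The URL and paths are made up; it assumes curl and gpg are on the device and that your release public key was imported when the image was built:

```
#!/bin/sh
set -e
URL="https://updates.example.com/latest"    # placeholder

# Fetch the update and its detached signature.
curl -fsSLo /tmp/update.tar.gz     "$URL/update.tar.gz"
curl -fsSLo /tmp/update.tar.gz.sig "$URL/update.tar.gz.sig"

# Verify against the public key baked into the image; abort if it doesn't check out.
gpg --verify /tmp/update.tar.gz.sig /tmp/update.tar.gz || {
  echo "Signature check failed; refusing to install." >&2
  exit 1
}

# Only now unpack and install the update.
tar -xzf /tmp/update.tar.gz -C /opt/acme/
```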
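For the Ansible route, you can even keep the pull model: ansible-pull clones a git repository of playbooks onto the appliance and applies it locally. A one-line sketch suitable for a cron job, with a made-up repository URL and playbook name:

```
# Run from cron on each appliance; fetches your playbooks and applies them locally.
ansible-pull -U https://git.example.com/acme/appliance-playbooks.git local.yml
```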
Of course, there are other ways to do this, but it brings me to an important point.
Sign / validate your updates!
No matter what you do, it's imperative that you have a mechanism to ensure that your update hasn't been tampered with. A malicious user could impersonate your update server in any of the above configurations. If you don't validate your update, your box is much easier to hack and get into.
A good way to do this is to sign your update files. You'll have to maintain a signing key or certificate (or pay someone to do so), but you'll be able to install its fingerprint on each of your devices before you ship them out, so that they can reject updates that have been tampered with.
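As a sketch of what that looks like with GnuPG (the key identity and file names are made up; the matching public key, or its fingerprint, gets baked into the device image):

```
# On the build/release machine: create a detached signature for the update.
gpg --local-user "release@example.com" --detach-sign update.tar.gz
# Produces update.tar.gz.sig; publish it alongside the update itself.

# On each device, at provisioning time: import your release public key.
gpg --import acme-release-pub.asc
```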
Physical Security
Of course, if someone has physical access to a customer's deployment, they could easily take over that server. But at least they can't attack the other deployments! Physical security is likely the responsibility of your customer.
Imagine for a moment what would happen if you used one large OpenVPN network for updates: an attacker could then use a single compromised server to attack every instance on the VPN.
Security
Whatever you do, security needs to be built in from the beginning. Don't cut corners here; you'll regret it in the end if you do.
Fully securing this update system is out of scope of this post, and I strongly recommend hiring a consultant if you or someone on your team isn't knowledgeable in this area. It's worth every penny.
Do you actually need to access them?
Or just update them? Because you can have them update themselves, similar to how apt updates on its own, unattended.
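If your software is packaged as a .deb served from your own repository, something like unattended-upgrades can handle this for you (the origin string below is a placeholder for whatever your repository's Release file declares):

```
# Debian/Ubuntu: install the helper that applies updates automatically.
sudo apt-get install -y unattended-upgrades

# Then allow your repository's origin in /etc/apt/apt.conf.d/50unattended-upgrades:
#   Unattended-Upgrade::Allowed-Origins {
#       "Acme:stable";
#   };
```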
If you need to log in
Why not an OpenSSH daemon reachable via port forwarding? Each customer can have a separate key for security, and you would only connect when needed.
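One way to arrange that without opening an inbound hole in the customer's firewall is a reverse tunnel the appliance opens only when support is needed; a sketch with made-up hostnames, ports, and key paths:

```
# On the appliance (e.g. triggered by the customer or by a support command):
# expose the appliance's sshd as port 2222 on your support host.
ssh -N -R 2222:localhost:22 -i /etc/acme/support_key support@support.example.com

# Then, from the support host:
ssh -p 2222 admin@localhost
```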
Up to your customers
You need to take into consideration what the customer is willing to accept as well. They might not be comfortable with any remote access to their network, or only comfortable with specific technologies/configurations.
I suggest an orchestration tool like Puppet or Salt.
Salt is built around a message bus, and the minion makes a persistent outbound connection from your appliance to a master server. You can use this to run arbitrary commands on the appliances... like an apt-get.
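From the master, that looks roughly like this (the minion ID pattern is a placeholder, and it assumes each appliance's minion key has been accepted on the master):

```
# Run an arbitrary command on every connected appliance.
salt 'appliance-*' cmd.run 'apt-get update && apt-get -y upgrade'

# Or use Salt's package module instead of a raw shell command.
salt 'appliance-*' pkg.upgrade
```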
The other option is Puppet, where you still have a master server and the clients make outbound connections from their locations.
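With Puppet, the agent on each appliance phones home on its own schedule (every 30 minutes by default); pointing it at your master is roughly this, with a made-up server name:

```
# On the appliance: tell the agent where your Puppet master/server lives.
puppet config set server puppet.example.com --section main

# One-off run to test the connection; normally the agent runs as a service.
puppet agent --test
```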
I use both of these tools for a similar purpose where I may not have administrative control of the firewall.