Puppet Security and Network Topologies

Solution 1:

Because I sometimes store passwords in variables in my modules, so that I can deploy applications without having to finish the configuration manually, I can't reasonably put my Puppet repo on a public server. Doing so would mean that an attacker who compromised the puppetmaster could obtain the application and database passwords for all of our applications on all of our servers.

So my puppetmaster sits on our office's private network, and I don't run the puppetd daemon on the servers. When I need to deploy, I SSH from the private network to the servers, creating a reverse tunnel and calling puppetd remotely.
The trick is to point the tunnel and the Puppet client not at the puppetmaster itself, but at a proxy that accepts HTTP CONNECT and can reach the puppetmaster on the private network. Otherwise Puppet refuses to pull because the hostname doesn't match the certificates.

# From a machine inside privatenet.net :
ssh -R 3128:httpconnectproxy.privatenet.net:3128 \
    -t remoteclient.publicnetwork.net \
    sudo /usr/sbin/puppetd --server puppetmaster.privatenet.net \
    --http_proxy_host localhost --http_proxy_port 3128 \
    --waitforcert 60 --test --verbose
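On the proxy side, the important part is just that it accepts CONNECT to the puppetmaster. If the proxy happens to be Squid, a fragment along these lines would do it (the hostnames and ACL names are examples; 8140 is Puppet's default master port):

```
# squid.conf on httpconnectproxy.privatenet.net (sketch):
# only allow CONNECT tunnels that terminate at the puppetmaster.
acl CONNECT method CONNECT
acl puppet_dst dst puppetmaster.privatenet.net
acl puppet_port port 8140
http_access allow CONNECT puppet_dst puppet_port
http_access deny CONNECT
```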

It works for me; I hope it helps you.

Solution 2:

We have two sites, our office and our colo. Each site has its own puppetmaster. We set up an svn repository with the following structure:

root/office
root/office/manifests/site.pp
root/office/modules
root/colo
root/colo/manifests/site.pp
root/colo/modules
root/modules

The modules directory under each site is an svn:externals directory back to the top level modules directory. This means that they share exactly the same modules directory. We then make sure that the vast majority of the classes we write are under the modules directory and used by both sites. This has the nice advantage of forcing us to think generically and not tie a class to a particular site.
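For reference, that svn:externals link can be expressed like this (a sketch, assuming Subversion 1.5+ relative-external syntax, where ^/ means "relative to the repository root"):

```
# svn:externals property, set on root/office and again on root/colo:
# the local "modules" entry maps onto the shared top-level directory.
modules ^/root/modules
```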

As for security, we host our puppetmaster (and the rest of our network) behind our firewall, so we're not that concerned about storing the config centrally. The puppetmaster will only send out config to hosts it trusts. Obviously you need to keep that server secure.

Solution 3:

I can't make a judgment on how necessary your paranoia is; it depends heavily on your environment. However, I can say with confidence that the two major points of your existing configuration can still apply. You can ensure your changes traverse from a secure environment (the repository at your office) to the less secure environment, wherever your puppetmaster is located. You change the process from SFTPing to a bunch of servers and manually putting files into place, to SFTPing to your puppetmaster and letting Puppet distribute the files and put them in the correct place. Your master store is still the repository, and your risks are mitigated.

I don't believe either push or pull are inherently safer than the other model. Puppet does a great job of securing the configurations in transit, as well as authenticating both client and server to ensure there is a two-way trust in place.

As for the multiple networks - we handle it with a central "master" puppetmaster, with satellite puppetmasters at each location acting as clients to the central master.
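A sketch of how that satellite wiring can look in puppet.conf (the hostnames are examples, and the section names assume the puppetd-era config format):

```
# puppet.conf on a satellite puppetmaster
[main]
    # the satellite is itself a client of the central master
    server = puppetmaster.central.example.com

# puppet.conf on a node at that site
[main]
    # local nodes talk only to their site's satellite
    server = puppetmaster.site1.example.com
```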

Solution 4:

One design approach is to have a puppetmaster local to each site and use a deployment tool to push changes to the puppetmasters. (Git with git hooks could work too.)
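The git-hook variant can be sketched end to end like this (all names and paths here are throwaway examples under a temp directory; on a real puppetmaster the deploy target would be something like /etc/puppet):

```shell
# Self-contained demonstration of git-hook deployment to a puppetmaster.
set -e
TOP=$(mktemp -d)
DEPLOY="$TOP/etc-puppet"        # stands in for /etc/puppet on the master
mkdir -p "$DEPLOY"

# 1. A bare repository on the puppetmaster that admins push manifests to.
git init -q --bare "$TOP/puppet.git"

# 2. A post-receive hook: every push checks the manifests out into DEPLOY.
cat > "$TOP/puppet.git/hooks/post-receive" <<EOF
#!/bin/sh
GIT_WORK_TREE="$DEPLOY" git checkout -f master
EOF
chmod +x "$TOP/puppet.git/hooks/post-receive"

# 3. An admin working copy: commit a manifest and push it to the master.
git init -q "$TOP/work" && cd "$TOP/work"
git config user.email admin@example.com
git config user.name admin
mkdir manifests
echo 'node default {}' > manifests/site.pp
git add manifests
git commit -qm 'initial site.pp'
git branch -M master
git push -q "$TOP/puppet.git" master

ls "$DEPLOY/manifests"          # site.pp is now where the master reads it
```

After the push, the hook has checked the manifests out where the local puppetmaster can read them, so no listening service beyond SSH is needed to deliver changes.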

This would address your concern about listening services on a public network, since the Puppet network traffic would stay internal.

It's also possible to push the manifests out to each server and have the puppet client parse the manifests and apply the relevant configs.
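That masterless push-and-apply step might look like this (hostnames and paths are examples; puppet apply is the standalone entry point on 2.6+, while older releases used the plain puppet binary):

```
# From the machine holding the repository:
rsync -az manifests modules admin@web01.example.com:/etc/puppet/
ssh -t admin@web01.example.com \
    sudo puppet apply --modulepath /etc/puppet/modules \
    /etc/puppet/manifests/site.pp
```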