Solution 1:

Cloning and kickstart work... once. They are not a solution for perpetually maintaining your systems, unless you redeploy an entire server every time you need to upgrade a script or apply a patch.

For example, the other day I was having a problem with SSH connections timing out. Adding some options to sshd_config fixed it. I edited my standard sshd_config file in my puppet repo, and that took care of pushing out the updated file and restarting SSH on every server. Additionally, any new server I set up will get the updated config.
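
To illustrate, the whole mechanism boils down to a file resource that notifies the SSH service. Here is a minimal sketch, assuming a hypothetical module named "ssh" and the "sshd" service name (adjust both for your distro); it is not my exact manifest:

class ssh {
  # Ship the edited standard sshd_config from the puppet repo
  file { '/etc/ssh/sshd_config':
    ensure => file,
    owner  => 'root',
    group  => 'root',
    mode   => '0600',
    source => 'puppet:///modules/ssh/sshd_config',
    notify => Service['sshd'],   # restart SSH whenever the file changes
  }

  service { 'sshd':
    ensure => running,
    enable => true,
  }
}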

If I only used whole-system cloning, I could edit the sshd_config on the master image, but I would have no easy way of updating the existing configuration files on all the servers already deployed.

Another big benefit of something like puppet is greater modularity. You may have an "apache" image or a "mysql" image, but what do you do if you need a server with apache AND mysql? This only gets worse as the number of service combinations you have to deploy grows.

With my puppet config this is a simple matter of

include apache
include mysql::server
include ...
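
As a concrete (and purely hypothetical) example, a node definition just pulls in whichever classes that box needs, so mixing services costs nothing extra:

node 'web01.example.com' {
  include apache
  include mysql::server
}

node 'db01.example.com' {
  include mysql::server
}

Building the equivalent set of master images would mean maintaining one image per combination.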

Lastly, another benefit of puppet is that it truly documents how your servers are set up. Want to know what packages are installed or what files are modified? Just read the puppet configuration. If you use master images, you are constantly trying to keep the documentation in sync with the image.

Solution 2:

'Globally unique' identifiers -- used for all kinds of things, by all kinds of platforms (MSWindows COM, *nix, replicated databases) -- often incorporate the NIC's MAC address as one part of ensuring uniqueness across a network. Cloning that ignores this and duplicates those identifiers can result in distributed, networked systems suddenly behaving in strange, non-deterministic ways (aka race conditions: which process gets the message first). Beware!