Problems validating Puppet manifests with "puppet parser validate"

I'm using puppet parser validate in a git pre-commit hook to spot problems before committing files to our Puppet configuration repository. Unfortunately, this command appears to be a very lightweight syntax check that only flags errors such as unbalanced quotes and brackets.
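A minimal version of such a hook looks something like this (the .pp filter and repository layout are assumptions; adjust to taste):

#!/bin/sh
# Validate every staged .pp file before allowing the commit.
for f in $(git diff --cached --name-only --diff-filter=ACM | grep '\.pp$'); do
    puppet parser validate "$f" || exit 1
done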

The validate command does not appear to go beyond parsing to check for things like invalid attributes, undefined resource references, and so forth. For example, the following will not result in a complaint:

file { 'somefile': requires => File['some-other-file'] }

In this example, requires should be require. Similarly, this also generates no errors:

file { 'somefile': require => File['file-that-does-not-exist'] }

There is no resource definition for file-that-does-not-exist.
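Both snippets pass validation cleanly. For example, with the second one saved to a hypothetical somefile.pp:

$ puppet parser validate somefile.pp
$ echo $?
0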

Is there any way to catch these sorts of errors without actually applying the configuration? I was hoping for some sort of flag on the puppet apply command that would completely parse a configuration without making changes, but as far as I can tell no such option exists in Puppet 2.7.1.

UPDATE

puppet apply --noop appears to try too hard in the other direction. It will try to stat() any file referenced in the manifest, which often causes it to fail with permission errors when a referenced file is not readable by the current user.
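For instance, a manifest along these lines (paths purely illustrative) fails under --noop when run as an unprivileged user, because Puppet still stat()s the source file even though it will never copy it:

file { '/tmp/example':
  ensure => file,
  source => '/root/protected-file', # stat() fails here without read access
}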

What are other folks doing?


Solution 1:

In short, this is a non-trivial problem that isn't easily solved by parsing the manifests alone. Compiling the catalog expands the scope of testing, but it's not a panacea: puppet master --compile requires access to the node's facts, and ideally a dummy node that exercises all of your classes. You still have to deal with the limitations of:

  • classes that can't coexist in the same catalog (e.g. apache and apache::disable).
  • cross-class dependencies.
  • different OS platforms.
  • nodes with different parameters.

For example, if node one includes classes a and b, its catalog compiles fine; but if node two includes only b, it's a failure you'll only see when testing node two:

class a {
  notify { 'hi': }
}
class b {
  notify { 'bye':
    # Only resolvable when class a is also in the catalog,
    # since Notify['hi'] is declared there.
    require => Notify['hi'],
  }
}
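With node definitions along those lines, compiling each node's catalog on the master shows the difference (assuming facts for both certnames are available):

$ puppet master --compile node-one    # includes a and b: fine
$ puppet master --compile node-two    # includes only b: the failure shows up here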

If you have the resources, you can compile catalogs for all of your nodes, which provides fairly comprehensive coverage.
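A minimal sketch of that, assuming a nodes.txt with one certname per line and facts already cached on the master:

#!/bin/sh
# Try to compile a catalog for every known node; report the ones that fail.
while read node; do
    puppet master --compile "$node" > /dev/null || echo "FAILED: $node"
done < nodes.txt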

puppet apply --noop has its limitations as well. Off the top of my head: it will fail on an exec whose command is deployed by a package, it will fail on files that depend on a staging location, and it won't test multiple platforms unless you expand testing to a representative sample of your systems. In general it provides sufficient coverage to ensure there are no compilation issues, gives you an idea of which systems are affected and what the changes are, and lets you judge from the reports whether the changes are fine or a real problem.
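A typical noop run for that kind of review might look like this (the site.pp path is hypothetical):

$ puppet apply --noop --detailed-exitcodes manifests/site.pp
$ echo $?    # 0 = no changes, 2 = changes would be made, 4 or 6 = failures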

In most cases noop is sufficient. I've seen varying degrees of automated testing, such as Jenkins jobs where each module's test files are simulated with --noop (the limitations above apply), or Vagrant used to spin up VMs for full-blown testing.

Solution 2:

You may want to consider bootstrapping a test environment with a tool such as cucumber-puppet:

https://github.com/nistude/cucumber-puppet

Solution 3:

To get a bit more validation that the resources and attributes are sensible, you could compile a catalog for a sample node with puppet master --compile. This should catch the first example (the invalid requires attribute).
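For example (the certname is hypothetical, and the master needs cached facts for it):

$ puppet master --compile node.example.com > /dev/null

A bad attribute such as requires should abort the compile with an "Invalid parameter" error rather than passing silently.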

I'm not sure off the top of my head whether resource references (the second example) are verified on the master or on the client, but you could always execute the catalog in no-op mode with puppet catalog apply or puppet apply. The latter would compile the configuration again and then apply it, while the former should be able to take the compiled catalog from the earlier validation step.