Which EC2 instance is best for a Chef server?

I want to set up a Chef server as cheaply as possible, while leaving it enough room to run without crashing. The only article I found on the subject warned that RabbitMQ would crash on a micro instance due to insufficient memory.

The question is: what's the cheapest EC2 instance that can run a Chef server reliably? Note that I don't use CouchDB or RabbitMQ for anything else in my app, so I would probably have to set them up exclusively for the Chef server on that same instance.


Solution 1:

A bigger factor than the number of nodes is the number of convergences (each of which translates into API hits) that your clients make when configuring nodes.

As you found, the Ruby API server is memory intensive, so a micro instance is going to feel cramped pretty quickly. The CouchDB backend can be write intensive (depending on your convergence frequency), so I/O performance is a consideration. The search engine is normally fine, and you can increase the number of expander vnodes to handle the indexing workload.
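For reference, the expander concurrency is set when launching chef-expander. This is a hedged sketch assuming the Chef 0.10-era CLI; the flag names may differ between versions, so verify against `chef-expander --help`:

```shell
# Run two expander workers, together claiming the full 1024 vnodes.
# Flags (-n node count, -i index, -d daemonize) are from the
# Chef 0.10-era chef-expander; verify against your installed version.
chef-expander -n 2 -i 1 -d   # first worker
chef-expander -n 2 -i 2 -d   # second worker
```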

Generally, we have found that the c1.medium is the best bang for the buck instance size for a large variety of workloads, not just for the Chef Server, but for general application use. It does cost twice as much as an m1.small, though.

The Chef Server was designed for horizontal scale. It can start out on one system just fine, but as the size of your infrastructure increases, you may wish to split components out to separate systems. Depending on the economics of it, you might mix and match instance sizes for your workload by running the components on separate instances of their own. See the Chef wiki for more information about the configuration options.
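As an illustration of splitting components out, the server configuration can point the API server at services running on other hosts. This is a hedged sketch; the setting names and hostnames below are assumptions from the Chef 0.10 era, so check them against the wiki for your version:

```ruby
# /etc/chef/server.rb -- hypothetical split-out topology.
# Setting names are from the Chef 0.10 era and may differ in your version;
# the .internal.example hostnames are placeholders.
couchdb_url "http://couchdb.internal.example:5984"    # data store on its own instance
amqp_host   "rabbitmq.internal.example"               # RabbitMQ feeding the indexer
amqp_port   5672
solr_url    "http://solr.internal.example:8983/solr"  # search backend
```

Splitting this way lets you size each instance for its actual bottleneck: I/O-heavy storage for CouchDB, memory for the API server.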

Also, Opscode Hosted Chef might be an economical solution, as you would not have to worry about any of that.

Solution 2:

I have been running it reliably on an m1.small instance for almost 6 months.

My instance runs the RightScale CentOS 5 image with the Chef server installed from the RBEL repo. My Chef server currently manages around 30 nodes and 6 environments.
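For anyone reproducing this setup, the RBEL install on CentOS 5 was roughly as follows. These commands are from memory of the RBEL-era instructions, so verify the repo URL and package name against the project's current documentation:

```shell
# Enable the RBEL repository on CentOS 5, then install the Chef server
# package from it (repo URL and package name per the RBEL-era docs;
# verify before running).
rpm -Uvh http://rbel.frameos.org/rbel5
yum install -y rubygem-chef-server
```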