What is better: three small VPSes or one bigger VPS?

Solution 1:

I respectfully disagree with chaos' suggestion. Having several VPSes won't distribute the load equally, and the VPS that serves static files will probably be vastly underused. It also increases the complexity of your application.

I'd go with a big fat server, increase its capacity as needed, and only consider partitioning when it's no longer feasible to upgrade it.

Solution 2:

I'd go with the three. If they wound up deployed to the same box, you'd lose a little performance relative to a single VPS, but 1) they probably won't be, 2) it'll be easier to tune them for their individual roles than to tune a single VPS for all of them, and 3) it means your application will be designed for distributed roles from day one, so that if you need to get beefier later (say, a dedicated server for each role), you're ready for it.

Solution 3:

I'll buck the trend and say that you should go with two: one for your web content and another (likely bigger) for your DB. Hell, I'm in a situation where I'm running a single VPS for all my needs, including the DB, with appropriate subdomains set up:

static.example.org: handles CSS, JS, images, and other static content. Set with keep-alives on and far-future expires headers. (Content doesn't expire for a year or more, so no repeat requests are made. Keep-alives stay on because most page views load many static assets, so this speeds up those requests.)

www.example.org: handles the dynamic requests. Keeping static requests separate from dynamic ones is important for the future scalability of your system, and it's not that much in the way of premature optimization. Set with keep-alives off and expires off. (Content validation must occur with dynamic content. Having keep-alives off, or really low, lets you save connections for incoming requests, especially since many hits will be single, slower view requests.)
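As a rough sketch, the two policies above map to something like the following nginx server blocks. The paths, port, and timeout values are placeholders for illustration, not my exact setup:

```nginx
# static.example.org: keep-alives on, far-future expires
server {
    listen 80;
    server_name static.example.org;
    root /var/www/static;

    keepalive_timeout 65;       # pages fetch many assets, so reuse connections
    expires 1y;                 # browsers won't re-request this content for a year
    add_header Cache-Control "public";
}

# www.example.org: keep-alives off, no caching of dynamic responses
server {
    listen 80;
    server_name www.example.org;

    keepalive_timeout 0;        # free up connections for new dynamic requests
    expires off;                # dynamic content must be revalidated every time

    # dynamic request handling goes here
}
```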

Having nginx as your front-end proxy, handling the static.example.org requests itself but passing www.example.org requests off to a FastCGI backend (for example), has proven to be a speedy solution for us, and a memory-conservative one, too. Alternatively, you could put all of your static content on Amazon S3 or something similar and point your web pages at that instead (with far-future expires headers on).
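That split looks roughly like this in nginx: static files served straight from disk, dynamic requests handed to FastCGI. The socket address and document roots here are assumptions, adjust for your own backend:

```nginx
# Static host: nginx serves files directly, no backend involved
server {
    server_name static.example.org;
    root /var/www/static;
}

# Dynamic host: everything goes to the FastCGI backend
server {
    server_name www.example.org;

    location / {
        include fastcgi_params;
        fastcgi_pass 127.0.0.1:9000;   # local FastCGI process (e.g. PHP-FPM)
        fastcgi_param SCRIPT_FILENAME /var/www/app/index.php;
    }
}
```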

My first point of expansion will likely be to move my DB to a separate server. I'll be able to scale out the web server FastCGI processes to multiple systems using nginx easily enough - spreading the load should be fairly easy... in theory.
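That scale-out step would mean swapping the single fastcgi_pass target for an upstream block listing the backend machines; nginx round-robins across them by default. The addresses below are hypothetical:

```nginx
# Hypothetical pool of FastCGI backends on separate systems
upstream fastcgi_backends {
    server 10.0.0.2:9000;
    server 10.0.0.3:9000;
    # requests are distributed round-robin by default
}

server {
    server_name www.example.org;

    location / {
        include fastcgi_params;
        fastcgi_pass fastcgi_backends;   # name of the upstream block above
    }
}
```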