Different environments for Terraform (HashiCorp)

I suggest you take a look at the HashiCorp best-practices repo, which has quite a nice setup for dealing with different environments (similar to what James Woolfenden suggested).

We're using a similar setup, and it works quite nicely. However, this best-practices repo assumes you're using Atlas, which we're not. We've created quite an elaborate Rakefile which (again following the best-practices repo) collects all the subfolders of /terraform/providers/aws and exposes each of them as a separate build using namespaces. Our rake -T output would therefore list the following tasks:

us_east_1_prod:init
us_east_1_prod:plan
us_east_1_prod:apply

us_east_1_staging:init
us_east_1_staging:plan
us_east_1_staging:apply

This separation prevents changes that are meant only for dev from accidentally affecting (or worse, destroying) something in prod, as each environment has its own state file. It also allows testing a change in dev/staging before actually applying it to prod.
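To make that isolation concrete: each environment folder pins its own remote state. A minimal sketch using the S3 backend block introduced in Terraform 0.9 (the bucket and key names here are hypothetical):

# terraform/providers/aws/us_east_1_prod/main.tf
terraform {
  backend "s3" {
    bucket = "mycompany-terraform-state"        # hypothetical bucket
    key    = "us_east_1_prod/terraform.tfstate" # prod gets its own state file
    region = "us-east-1"
  }
}

The us_east_1_staging folder would carry the same block with its own key, so a plan or apply in one environment can never touch the other's state.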

Also, I recently stumbled upon this little write-up, which shows what can happen if you keep everything together: https://charity.wtf/2016/03/30/terraform-vpc-and-why-you-want-a-tfstate-file-per-env/


Paul's solution with modules is the right idea. However, I would strongly recommend against defining all of your environments (e.g. QA, staging, production) in the same Terraform file. If you do, then whenever you're making a change to staging, you risk accidentally breaking production too, which partially defeats the point of keeping those environments isolated in the first place! See Terraform, VPC, and why you want a tfstate file per env for a colorful discussion of what can go wrong.

I always recommend storing the Terraform code for each environment in a separate folder. In fact, you may even want to store the Terraform code for each "component" (e.g. a database, a VPC, a single app) in separate folders. Again, the reason is isolation: when making changes to a single app (which you might do 10 times per day), you don't want to put your entire VPC at risk (which you probably never change).

Therefore, my typical file layout looks something like this:

stage
  └ vpc
     └ main.tf
     └ vars.tf
     └ outputs.tf
  └ app
  └ db
prod
  └ vpc
  └ app
  └ db
global
  └ s3
  └ iam

All the Terraform code for the staging environment goes into the stage folder, all the code for the prod environment goes into the prod folder, and all the code that lives outside of an environment (e.g. IAM users, S3 buckets) goes into the global folder.
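Isolated components can still share data through remote state outputs rather than duplicated values. For example, the app component can look up the VPC it should deploy into; here is a sketch using the terraform_remote_state data source in pre-0.12 interpolation syntax (the bucket, key, and output names are hypothetical, and assume stage/vpc exports a subnet_id output):

# stage/app/main.tf
data "terraform_remote_state" "vpc" {
  backend = "s3"

  config {
    bucket = "mycompany-terraform-state"   # hypothetical bucket
    key    = "stage/vpc/terraform.tfstate" # state written by stage/vpc
    region = "us-east-1"
  }
}

resource "aws_instance" "app" {
  ami           = "ami-12345678"           # hypothetical AMI
  instance_type = "t2.micro"
  subnet_id     = "${data.terraform_remote_state.vpc.subnet_id}"
}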

For more info, check out How to manage Terraform state. For a deeper look at Terraform best practices, check out the book Terraform: Up & Running.


Please note that as of version 0.10.0, Terraform supports the concept of Workspaces (called Environments in 0.9.x).

A workspace is a named container for Terraform state. With multiple workspaces, a single directory of Terraform configuration can be used to manage multiple distinct sets of infrastructure resources.

See more info here: https://www.terraform.io/docs/state/workspaces.html
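You create and switch workspaces from the CLI (terraform workspace new staging, terraform workspace select prod), and the current workspace name is available inside the configuration via interpolation. A minimal sketch of one configuration serving several environments (the resource and its values are illustrative, using pre-0.12 syntax):

# terraform.workspace resolves to the active workspace name
# ("default", "staging", "prod", ...)
resource "aws_instance" "web" {
  ami           = "ami-12345678"   # hypothetical AMI
  instance_type = "${terraform.workspace == "prod" ? "m4.large" : "t2.micro"}"

  tags {
    Name = "web-${terraform.workspace}"
  }
}

Each workspace keeps its own state, so a plan or apply in the staging workspace cannot modify resources tracked in prod's state.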


As you scale up your Terraform usage, you will need to share state (between developers, build processes, and different projects) and support multiple environments and regions. For this you need to use remote state. Before you execute your Terraform, you need to set up your state (I'm using PowerShell):

# Name the shared state bucket and object after the environment and region
$environment = "devtestexample"
$region      = "eu-west-1"
$remote_state_bucket = "${environment}-terraform-state"
$bucket_key = "yoursharedobject.$region.tfstate"

# Create the state bucket if the ls probe fails (non-zero exit code)
aws s3 ls "s3://$remote_state_bucket" | out-null
if ($lastexitcode)
{
   aws s3 mb "s3://$remote_state_bucket"
}

# Point Terraform at the remote state in S3
# (see here: https://www.terraform.io/docs/commands/remote-config.html;
# note that in Terraform 0.9+ this command was replaced by backends configured via terraform init)
terraform remote config -backend=s3 -backend-config="bucket=$remote_state_bucket" -backend-config="key=$bucket_key" -backend-config="region=$region"

# Use double quotes so PowerShell expands the variables before Terraform sees them
terraform apply -var="environment=$environment" -var="region=$region"

Your state is now stored in S3, by region and by environment, and you can then access this state in other Terraform projects.
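The configuration being applied would then declare matching input variables; a minimal sketch (only the variable names come from the script above, everything else is illustrative):

# Declarations matching the -var flags passed by the script
variable "environment" {}
variable "region" {}

provider "aws" {
  region = "${var.region}"
}

# Hypothetical resource named per environment
resource "aws_sqs_queue" "jobs" {
  name = "jobs-${var.environment}"
}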