Initial setup of terraform backend using terraform

I'm just getting started with terraform and I'd like to be able to use AWS S3 as my backend for storing the state of my projects.

terraform {
  backend "s3" {
    bucket = "tfstate"
    key    = "app-state"
    region = "us-east-1"
  }
}

I feel like it is sensible to set up my S3 bucket, IAM groups, and policies for the backend storage infrastructure with terraform as well.

If I set up my backend state before I apply my initial terraform infrastructure, it reasonably complains that the backend bucket does not yet exist. So my question becomes: how do I set up my terraform backend with terraform, while keeping the state for that backend tracked by terraform? It seems like a nesting-dolls problem.

I have some thoughts about how to script around this, for example checking whether the bucket exists or some state has been set, then bootstrapping terraform, and finally copying the terraform tfstate up to S3 from the local file system after the first run. But before going down this laborious path, I thought I'd make sure I wasn't missing something obvious.


To set this up using terraform remote state, I usually have a separate folder called remote-state within each of my dev and prod terraform folders.

The following main.tf file will set up your remote state for what you posted:

provider "aws" {
  region = "us-east-1"
}

resource "aws_s3_bucket" "terraform_state" {
  bucket = "tfstate"

  versioning {
    enabled = true
  }

  lifecycle {
    prevent_destroy = true
  }
}

resource "aws_dynamodb_table" "terraform_state_lock" {
  name           = "app-state"
  read_capacity  = 1
  write_capacity = 1
  hash_key       = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}

Then get into this folder using cd remote-state and run terraform init && terraform apply. This should only need to be run once. You might add something to the bucket and DynamoDB table names to separate your different environments.
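
Once those resources exist, you can point your project's backend at them. Here is a sketch based on the backend block you posted; the dynamodb_table setting (matching the app-state table created above) is what enables state locking:

terraform {
  backend "s3" {
    bucket         = "tfstate"
    key            = "app-state"
    region         = "us-east-1"
    dynamodb_table = "app-state"
  }
}

When you then run terraform init in the main project, terraform will detect any existing local state and offer to copy it into the S3 bucket, which takes care of the bootstrap step you were planning to script by hand.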


Building on the great contribution from Austin Davis, here is a variation that I use, which adds a requirement for data encryption:

provider "aws" {
  region = "us-east-1"
}

resource "aws_s3_bucket" "terraform_state" {
  bucket = "tfstate"

  versioning {
    enabled = true
  }

  lifecycle {
    prevent_destroy = true
  }
}

resource "aws_dynamodb_table" "terraform_state_lock" {
  name           = "app-state"
  read_capacity  = 1
  write_capacity = 1
  hash_key       = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}

resource "aws_s3_bucket_policy" "terraform_state" {
  bucket = "${aws_s3_bucket.terraform_state.id}"
  policy =<<EOF
{
  "Version": "2012-10-17",
  "Id": "RequireEncryption",
   "Statement": [
    {
      "Sid": "RequireEncryptedTransport",
      "Effect": "Deny",
      "Action": ["s3:*"],
      "Resource": ["arn:aws:s3:::${aws_s3_bucket.terraform_state.bucket}/*"],
      "Condition": {
        "Bool": {
          "aws:SecureTransport": "false"
        }
      },
      "Principal": "*"
    },
    {
      "Sid": "RequireEncryptedStorage",
      "Effect": "Deny",
      "Action": ["s3:PutObject"],
      "Resource": ["arn:aws:s3:::${aws_s3_bucket.terraform_state.bucket}/*"],
      "Condition": {
        "StringNotEquals": {
          "s3:x-amz-server-side-encryption": "AES256"
        }
      },
      "Principal": "*"
    }
  ]
}
EOF
}
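
The first statement denies any access over unencrypted transport; the second rejects any PutObject that doesn't specify AES256 server-side encryption. To make terraform's own state uploads pass that second check, set encrypt = true in the backend configuration. A sketch based on the backend block from the question:

terraform {
  backend "s3" {
    bucket  = "tfstate"
    key     = "app-state"
    region  = "us-east-1"
    encrypt = true
  }
}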

As you've discovered, you can't use terraform to build the components terraform needs in the first place.

While I understand the inclination to have terraform "track everything", it is very difficult and more headache than it's worth.

I generally handle this situation by creating a simple bootstrap shell script (a sketch follows below). It creates things like:

  1. The S3 bucket for state storage
  2. Versioning on said bucket
  3. A terraform IAM user and group with certain policies I'll need for terraform builds

While you should (technically) only need to run this once, I find that when I'm developing a new system, I spin things up and tear them down repeatedly. So having those steps in one script makes that a lot simpler.

I generally build the script to be idempotent. This way, you can run it multiple times without concern that you're creating duplicate buckets, users, etc.
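
Here is a minimal sketch of what I mean, assuming the bucket name from the earlier answers and a hypothetical terraform IAM user name. The existence checks are what make it safe to re-run:

#!/usr/bin/env bash
set -euo pipefail

BUCKET="tfstate"      # assumed bucket name; S3 names are globally unique
REGION="us-east-1"
TF_USER="terraform"   # hypothetical IAM user name

# Create the state bucket only if it doesn't already exist.
if ! aws s3api head-bucket --bucket "$BUCKET" 2>/dev/null; then
  aws s3api create-bucket --bucket "$BUCKET" --region "$REGION"
fi

# Enabling versioning is naturally idempotent; run it every time.
aws s3api put-bucket-versioning --bucket "$BUCKET" \
  --versioning-configuration Status=Enabled

# Create the IAM user for terraform builds only if it's missing.
if ! aws iam get-user --user-name "$TF_USER" >/dev/null 2>&1; then
  aws iam create-user --user-name "$TF_USER"
fi

# (Group creation and policy attachment would follow the same
# check-then-create pattern.)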