AWS upload folder to S3 as tar.gz without compressing locally

What you're really looking for is a way to avoid saving a local file. You can use pipes to send the data from tar through gzip to S3 without writing anything to disk.

tar c /var/test | gzip | aws s3 cp - "s3://tests/test1.tar.gz"

Breaking this down (where stdin and stdout refer to the standard input/output streams connected by the pipes):

  • tar c /var/test creates a tar archive out of /var/test and outputs it to stdout...
  • ...which is read by gzip from stdin, and the gzipped stream (the .tar.gz content) is output to stdout...
  • ...which is read by aws s3 cp - "s3://tests/test1.tar.gz" from stdin and sent to S3. The - tells the AWS CLI to copy from stdin.

This still performs the gzip operation locally, but does not require the creation of a temporary file, since the entire stream is sent straight over the network.
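The same streaming idea works for reading, since the AWS CLI can also write an object to stdout when you pass - as the destination. A minimal sketch for checking what ended up in the bucket, using the bucket and key from the example above (tar flag syntax may vary slightly between GNU and BSD tar):

# stream the object back from S3 and list the archive contents without saving a local file
aws s3 cp "s3://tests/test1.tar.gz" - | tar tzf -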


tar cvfz - /var/test | aws s3 cp - s3://tests/test1.tar.gz

You don't have to gzip separately; tar does that for you with the z option.

This works in both directions, so you can also stream a download from S3 straight back into tar.
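A minimal sketch of the reverse direction (note that tar typically strips the leading / when creating the archive, so this recreates var/test under the current directory):

# download from S3 and extract on the fly, no temporary file
aws s3 cp s3://tests/test1.tar.gz - | tar xvfz -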