s3cmd fails too many times

It used to be my favorite backup transport agent, but now I frequently get this result from s3cmd on the very same Ubuntu server/network:

root@server:/home/backups# s3cmd put bkup.tgz s3://mybucket/
bkup.tgz -> s3://mybucket/bkup.tgz  [1 of 1]
      36864 of 2711541519     0% in    1s    20.95 kB/s  failed
WARNING: Upload failed: /bkup.tgz ([Errno 32] Broken pipe)
WARNING: Retrying on lower speed (throttle=0.00)
WARNING: Waiting 3 sec...
bkup.tgz -> s3://mybucket/bkup.tgz  [1 of 1]
      36864 of 2711541519     0% in    1s    23.96 kB/s  failed
WARNING: Upload failed: /bkup.tgz ([Errno 32] Broken pipe)
WARNING: Retrying on lower speed (throttle=0.01)
WARNING: Waiting 6 sec...
bkup.tgz -> s3://mybucket/bkup.tgz  [1 of 1]
      28672 of 2711541519     0% in    1s    18.71 kB/s  failed
WARNING: Upload failed: /bkup.tgz ([Errno 32] Broken pipe)
WARNING: Retrying on lower speed (throttle=0.05)
WARNING: Waiting 9 sec...
bkup.tgz -> s3://mybucket/bkup.tgz  [1 of 1]
      28672 of 2711541519     0% in    1s    18.86 kB/s  failed
WARNING: Upload failed: /bkup.tgz ([Errno 32] Broken pipe)
WARNING: Retrying on lower speed (throttle=0.25)
WARNING: Waiting 12 sec...
bkup.tgz -> s3://mybucket/bkup.tgz  [1 of 1]
      28672 of 2711541519     0% in    1s    15.79 kB/s  failed
WARNING: Upload failed: /bkup.tgz ([Errno 32] Broken pipe)
WARNING: Retrying on lower speed (throttle=1.25)
WARNING: Waiting 15 sec...
bkup.tgz -> s3://mybucket/bkup.tgz  [1 of 1]
      12288 of 2711541519     0% in    2s     4.78 kB/s  failed
ERROR: Upload of 'bkup.tgz' failed too many times. Skipping that file.

This happens even for files as small as 100 MB, so I suppose it's not a size issue. It also happens when I use put with the --acl-private flag (s3cmd version 1.0.1).

I'd appreciate it if you could suggest a solution or a lightweight alternative to s3cmd.


Solution 1:

There are a few common problems that result in s3cmd returning the error you mention:

  • A non-existent bucket (e.g. a mistyped bucket name, or a bucket that hasn't yet been provisioned)
  • Trailing spaces on your authentication values (key/id)
  • An inaccurate system clock. It is possible to use Wireshark (over an HTTP connection, not HTTPS) to see how your system clock lines up with S3's clock; they should match to within a few seconds. Consider using NTP to sync your clock if this is an issue (a quick way to check the skew is sketched after this list).
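
If you want to check the skew without Wireshark, the following is a minimal sketch (my own illustration, not part of the original answer) that compares your local clock with the Date header AWS returns over plain HTTP; Python 3 is assumed:

# clock_check.py - rough sketch: compare local UTC time with the Date header
# returned by s3.amazonaws.com; a large skew can cause S3 to reject requests.
import urllib.request, urllib.error
from email.utils import parsedate_to_datetime
from datetime import datetime, timezone

req = urllib.request.Request("http://s3.amazonaws.com/", method="HEAD")
try:
    server_date = urllib.request.urlopen(req).headers["Date"]
except urllib.error.HTTPError as e:
    server_date = e.headers["Date"]   # an error status still carries a Date header

skew = datetime.now(timezone.utc) - parsedate_to_datetime(server_date)
print("Clock skew vs. S3: %.1f seconds" % abs(skew.total_seconds()))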

Alternatives to s3cmd:

  • s3cp - a Java-based script that offers good functionality for transferring files to S3, and more verbose error messages than s3cmd
  • aws - a Perl-based script, written by Tim Kay, that provides easy access to most AWS functions (including S3), and is quite popular.

If you wish to write your own script, you can use the Python Boto library, which has functions for performing most AWS operations and has many examples available online. There is also a project that exposes some of the Boto functions on the command line, although only a small set of functions is currently available.
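
As an illustration only (not part of the original answer), a minimal Boto upload script might look like the sketch below. It assumes boto 2.x is installed, that your credentials are available from the environment or ~/.boto, and it reuses the bucket and file names from the question:

# upload.py - rough sketch of an S3 upload with the boto 2.x library
import boto

conn = boto.connect_s3()                    # picks up AWS credentials from the environment or ~/.boto
bucket = conn.get_bucket('mybucket')        # bucket name taken from the question
key = bucket.new_key('bkup.tgz')
key.set_contents_from_filename('bkup.tgz')  # uploads the local file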

Solution 2:

This helped in my case:

  1. Run s3cmd ls on the bucket.
  2. Note the warning it prints about a redirect.
  3. Replace the host_bucket value in the .s3cfg file with the endpoint from the warning (one way to script this is sketched below).
  4. Repeat s3cmd ls; it should no longer print the warning.
  5. Re-upload the file.

My .s3cfg now contains:

host_bucket = %(bucket)s.s3-external-3.amazonaws.com
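
If you prefer to script step 3, here is a rough sketch (my own addition, not from the original answer) that rewrites host_bucket in ~/.s3cfg; substitute whichever endpoint your own redirect warning reports:

# fix_s3cfg.py - rough sketch: point host_bucket at the endpoint from the warning
# (note: ConfigParser rewrites the whole file and drops any comments in it)
import os
try:
    from configparser import RawConfigParser   # Python 3
except ImportError:
    from ConfigParser import RawConfigParser   # Python 2

path = os.path.expanduser("~/.s3cfg")
cfg = RawConfigParser()                         # Raw* so "%(bucket)s" is not interpolated
cfg.read(path)
cfg.set("default", "host_bucket", "%(bucket)s.s3-external-3.amazonaws.com")
with open(path, "w") as f:
    cfg.write(f)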

Solution 3:

I had the same problem with the s3cmd package that ships with Ubuntu.

Downloading the latest stable version (1.0.1) solved it: http://sourceforge.net/projects/s3tools/files/s3cmd/

Solution 4:

After trying all of the above, I noticed I still had the throttling issue with s3cmd put, but not with s3cmd sync. Hope this is useful to somebody as a quick fix :)

Solution 5:

I had the same problem and found a solution here in the response by samwise.

This problem appeared when I started experimenting with IAM. In my case the problem was the ARN: I had listed arn:aws:s3:::bucketname instead of arn:aws:s3:::bucketname/*

That's why I had no problems with $ s3cmd ls s3://bucketname, but could not upload any files there.
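
For reference, bucket-level actions like listing use the plain bucket ARN, while object-level actions like uploading need the /* form. A minimal policy along these lines (my own sketch, not part of the original answer) allows both:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::bucketname"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject"],
      "Resource": "arn:aws:s3:::bucketname/*"
    }
  ]
}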