Amazon S3 direct file upload from client browser - private key disclosure

I think what you want is Browser-Based Uploads Using POST.

Basically, you do need server-side code, but all it does is generate signed policies. Once the client-side code has the signed policy, it can upload using POST directly to S3 without the data going through your server.

Here are the official doc links:

Diagram: http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingHTTPPOST.html

Example code: http://docs.aws.amazon.com/AmazonS3/latest/dev/HTTPPOSTExamples.html

The signed policy goes into your HTML in a form like this:

<html>
  <head>
    ...
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
    ...
  </head>
  <body>
  ...
  <form action="http://johnsmith.s3.amazonaws.com/" method="post" enctype="multipart/form-data">
    Key to upload: <input type="text" name="key" value="user/eric/" /><br />
    <input type="hidden" name="acl" value="public-read" />
    <input type="hidden" name="success_action_redirect" value="http://johnsmith.s3.amazonaws.com/successful_upload.html" />
    Content-Type: <input type="text" name="Content-Type" value="image/jpeg" /><br />
    <input type="hidden" name="x-amz-meta-uuid" value="14365123651274" />
    Tags for File: <input type="text" name="x-amz-meta-tag" value="" /><br />
    <input type="hidden" name="AWSAccessKeyId" value="AKIAIOSFODNN7EXAMPLE" />
    <input type="hidden" name="Policy" value="POLICY" />
    <input type="hidden" name="Signature" value="SIGNATURE" />
    File: <input type="file" name="file" /> <br />
    <!-- The elements after this will be ignored -->
    <input type="submit" name="submit" value="Upload to Amazon S3" />
  </form>
  ...
</html>

Notice the FORM action is sending the file directly to S3 - not via your server.

Every time one of your users wants to upload a file, you would create the POLICY and SIGNATURE on your server. You return the page to the user's browser. The user can then upload a file directly to S3 without going through your server.

When you sign the policy, you typically make the policy expire after a few minutes. This forces your users to talk to your server before uploading. This lets you monitor and limit uploads if you desire.

The only data going to or from your server is the signed policy and its signature. Your secret keys stay secret on the server.
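For illustration, here is a minimal Node.js sketch of that server-side signing step (Signature Version 2, matching the AWSAccessKeyId/Policy/Signature fields above). The bucket, key prefix, redirect URL, and size cap below are assumptions taken from the example form; adapt them to your setup.

var crypto = require('crypto');

function signPolicy(secretAccessKey) {
    var policy = {
        expiration: new Date(Date.now() + 5 * 60 * 1000).toISOString(), // valid ~5 minutes
        conditions: [
            { bucket: 'johnsmith' },
            ['starts-with', '$key', 'user/eric/'],
            { acl: 'public-read' },
            { success_action_redirect: 'http://johnsmith.s3.amazonaws.com/successful_upload.html' },
            ['starts-with', '$Content-Type', 'image/'],
            ['starts-with', '$x-amz-meta-tag', ''],
            { 'x-amz-meta-uuid': '14365123651274' },
            ['content-length-range', 0, 10485760] // cap uploads at 10 MB
        ]
    };
    var policyB64 = Buffer.from(JSON.stringify(policy)).toString('base64');
    var signature = crypto.createHmac('sha1', secretAccessKey)
        .update(policyB64)
        .digest('base64');
    // policyB64 replaces POLICY in the form; signature replaces SIGNATURE.
    return { policy: policyB64, signature: signature };
}

The short expiration is what forces clients back to your server for each upload, and the content-length-range condition is what stops someone from uploading arbitrarily large files.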


You can do this with AWS S3 and Cognito. Try this link:

http://docs.aws.amazon.com/AWSJavaScriptSDK/guide/browser-examples.html#Amazon_S3

Also try this code

Just change the region, IdentityPoolId, and your bucket name:

<!DOCTYPE html>
<html>

<head>
    <title>AWS S3 File Upload</title>
    <script src="https://sdk.amazonaws.com/js/aws-sdk-2.1.12.min.js"></script>
</head>

<body>
    <input type="file" id="file-chooser" />
    <button id="upload-button">Upload to S3</button>
    <div id="results"></div>
    <script type="text/javascript">
    AWS.config.region = 'your-region'; // 1. Enter your region

    AWS.config.credentials = new AWS.CognitoIdentityCredentials({
        IdentityPoolId: 'your-IdentityPoolId' // 2. Enter your identity pool
    });

    AWS.config.credentials.get(function(err) {
        if (err) alert(err); // surface credential errors
        else console.log(AWS.config.credentials);
    });

    var bucketName = 'your-bucket'; // Enter your bucket name
    var bucket = new AWS.S3({
        params: {
            Bucket: bucketName
        }
    });

    var fileChooser = document.getElementById('file-chooser');
    var button = document.getElementById('upload-button');
    var results = document.getElementById('results');
    button.addEventListener('click', function() {

        var file = fileChooser.files[0];

        if (file) {

            results.innerHTML = '';
            var objKey = 'testing/' + file.name;
            var params = {
                Key: objKey,
                ContentType: file.type,
                Body: file,
                ACL: 'public-read'
            };

            bucket.putObject(params, function(err, data) {
                if (err) {
                    results.innerHTML = 'ERROR: ' + err;
                } else {
                    listObjs();
                }
            });
        } else {
            results.innerHTML = 'Nothing to upload.';
        }
    }, false);
    // List everything under the 'testing/' prefix to confirm the upload.
    function listObjs() {
        var prefix = 'testing';
        bucket.listObjects({
            Prefix: prefix
        }, function(err, data) {
            if (err) {
                results.innerHTML = 'ERROR: ' + err;
            } else {
                var objKeys = "";
                data.Contents.forEach(function(obj) {
                    objKeys += obj.Key + "<br>";
                });
                results.innerHTML = objKeys;
            }
        });
    }
    </script>
</body>

</html>
For more details, please check GitHub.

You're saying you want a "serverless" solution. But that means you have no ability to put any of "your" code in the loop. (Note: once you give your code to a client, it's "their" code now.) Locking down CORS is not going to help: people can easily write a non-web-based tool (or a web-based proxy) that adds the correct CORS header to abuse your system.

The big problem is that you can't differentiate between the different users. You can't allow one user to list/access his files, but prevent others from doing so. If you detect abuse, there is nothing you can do about it except change the key. (Which the attacker can presumably just get again.)

Your best bet is to create an "IAM user" with a key for your JavaScript client, and give it write access to just one bucket (ideally without the ListBucket operation, since being able to list the bucket makes it a more attractive target for attackers).
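As a rough sketch, a write-only policy for that user might look like the following (the bucket name your-upload-bucket is a placeholder, not something from the question):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:PutObject"],
            "Resource": "arn:aws:s3:::your-upload-bucket/*"
        }
    ]
}

Note there is deliberately no s3:GetObject or s3:ListBucket here: the key can put files into the bucket but can't enumerate or read what's already there.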

If you had a server (even a simple micro instance at $20/month), you could sign the keys on your server while monitoring/preventing abuse in realtime. Without a server, the best you can do is periodically monitor for abuse after-the-fact. Here's what I would do:

1) Periodically rotate the keys for that IAM user: every night, generate a new key and replace the oldest one. Since IAM allows two keys per user, each key will be valid for two days. (A rotation sketch follows this list.)

2) Enable S3 logging, and download the logs every hour. Set alerts on "too many uploads" and "too many downloads". You will want to check both total file size and the number of files uploaded, and to monitor both the global totals and the per-IP-address totals (with a lower threshold).

These checks can be done "serverless" because you can run them on your desktop. (That is, S3 does all the work; these processes are just there to alert you to abuse of your S3 bucket so you don't get a giant AWS bill at the end of the month.)
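For step 1, a minimal rotation sketch using the AWS SDK for JavaScript (v2), run from your desktop with admin credentials, might look like this. The IAM user name 'js-uploader' and the idea of publishing the new key to your client are assumptions:

var AWS = require('aws-sdk');
var iam = new AWS.IAM();

function rotateKeys(userName) {
    iam.listAccessKeys({ UserName: userName }, function(err, data) {
        if (err) return console.error(err);
        // Sort the user's keys oldest-first.
        var keys = data.AccessKeyMetadata.sort(function(a, b) {
            return a.CreateDate - b.CreateDate;
        });
        // IAM allows at most two keys per user, so retire the oldest at the limit.
        var maybeDelete = keys.length >= 2
            ? function(cb) { iam.deleteAccessKey({ UserName: userName, AccessKeyId: keys[0].AccessKeyId }, cb); }
            : function(cb) { cb(null); };
        maybeDelete(function(err) {
            if (err) return console.error(err);
            iam.createAccessKey({ UserName: userName }, function(err, data) {
                if (err) return console.error(err);
                // Publish data.AccessKey to wherever your client fetches it.
                console.log('New key id:', data.AccessKey.AccessKeyId);
            });
        });
    });
}

rotateKeys('js-uploader');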


Adding more info to the accepted answer: you can refer to my blog to see a running version of the code, using AWS Signature Version 4.

I'll summarize here:

As soon as the user selects a file to be uploaded, do the following:

  1. Make a call to the web server to initiate a service that generates the required params.

  2. In this service, make a call to AWS STS (IAM's token service) to get temporary credentials.

  3. Once you have the credentials, create a bucket policy (a Base64-encoded string), then sign the policy with the temporary secret access key to generate the final signature (a signing sketch follows this list).

  4. Send the necessary parameters back to the UI.

  5. Once these are received, create an HTML form object, set the required params, and POST it.
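As an illustration of the signing in step 3, a minimal Node.js sketch of Signature Version 4 policy signing might look like this (the parameter names, date stamp format, and region are per the SigV4 spec; everything else is a placeholder):

var crypto = require('crypto');

function hmac(key, data) {
    return crypto.createHmac('sha256', key).update(data).digest();
}

// policyB64 is the Base64-encoded policy; dateStamp (e.g. '20161003') and
// region (e.g. 'us-east-1') must match the policy's x-amz-credential fields.
function signPolicyV4(policyB64, secretKey, dateStamp, region) {
    // Derive the SigV4 signing key: date -> region -> service -> "aws4_request".
    var kDate = hmac('AWS4' + secretKey, dateStamp);
    var kRegion = hmac(kDate, region);
    var kService = hmac(kRegion, 's3');
    var kSigning = hmac(kService, 'aws4_request');
    // The hex digest goes into the X-Amz-Signature form field.
    return crypto.createHmac('sha256', kSigning).update(policyB64).digest('hex');
}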

For detailed info, please refer to https://wordpress1763.wordpress.com/2016/10/03/browser-based-upload-aws-signature-version-4/


"To create a signature, I must use my secret key. But everything happens on the client side, so the secret key can easily be revealed from the page source (even if I obfuscate/encrypt my sources)."

This is where you have misunderstood. The very reason digital signatures are used is so that you can verify something as correct without revealing your secret key. In this case the digital signature is used to prevent the user from modifying the policy you set for the form post.
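To make that concrete, here is a tiny sketch (HMAC-SHA1, matching the Signature Version 2 form above; the key and policies are made up) showing that any change to the policy produces a different signature, so S3 will reject the tampered request:

var crypto = require('crypto');

function sign(policyB64, secret) {
    return crypto.createHmac('sha1', secret).update(policyB64).digest('base64');
}

var secret = 'server-side-secret'; // never leaves the server
var policy = Buffer.from('{"conditions":[{"acl":"public-read"}]}').toString('base64');
var tampered = Buffer.from('{"conditions":[{"acl":"private"}]}').toString('base64');

// Any edit to the policy changes the signature, so S3 rejects the request.
console.log(sign(policy, secret) === sign(tampered, secret)); // false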

Digital signatures such as the one here are used for security all around the web. If someone (NSA?) really were able to break them, they would have much bigger targets than your S3 bucket :)