How to get the contents of a text file from AWS S3 using a Lambda function?
I would like to set up a Lambda function for AWS that is triggered whenever a new text file is uploaded into an S3 bucket. In the function, I would like to get the contents of the text file and process it somehow. Is this possible?
For example, if I upload foo.txt with the contents foobarbaz, I would like to somehow get foobarbaz in my Lambda function so I can do stuff with it. I know I can get metadata from getObject or a similar method.
Thanks!
Solution 1:
The S3 object key and bucket name are passed into your Lambda function via the event parameter. You can then get the object from S3 and read its contents.
Basic code to retrieve bucket and object key from the Lambda event is as follows:
exports.handler = function(event, context, callback) {
    const bkt = event.Records[0].s3.bucket.name;
    const key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' '));
};
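For reference, a trimmed-down sample of the event S3 sends for an object-created notification looks like this (only the fields used above are shown; the real event carries additional metadata):
{
  "Records": [
    {
      "s3": {
        "bucket": { "name": "my-bucket" },
        "object": { "key": "foo.txt" }
      }
    }
  ]
}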
Once you have the bucket and key, you can call getObject to retrieve the object:
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

exports.handler = function(event, context, callback) {
    // Retrieve the bucket & key for the uploaded S3 object that
    // caused this Lambda function to be triggered
    const Bucket = event.Records[0].s3.bucket.name;
    const Key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' '));

    // Retrieve the object
    s3.getObject({ Bucket, Key }, function(err, data) {
        if (err) {
            console.log(err, err.stack);
            callback(err);
        } else {
            console.log("Raw text:\n" + data.Body.toString('ascii'));
            callback(null, null);
        }
    });
};
Here's an updated JavaScript example using ES6+ async/await and promises, minus error handling:
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

exports.handler = async (event, context) => {
    const Bucket = event.Records[0].s3.bucket.name;
    const Key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' '));

    const data = await s3.getObject({ Bucket, Key }).promise();
    console.log("Raw text:\n" + data.Body.toString('ascii'));
};
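If your function runs on a newer Node.js runtime (18.x and later), the bundled SDK is v3 (@aws-sdk/client-s3) rather than aws-sdk v2, and getObject returns the body as a stream instead of a Buffer. A rough equivalent sketch, assuming a recent v3 release where Body.transformToString() is available:
const { S3Client, GetObjectCommand } = require('@aws-sdk/client-s3');
const s3 = new S3Client({});

exports.handler = async (event) => {
    const Bucket = event.Records[0].s3.bucket.name;
    const Key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' '));

    // In SDK v3 the Body is a readable stream, not a Buffer
    const { Body } = await s3.send(new GetObjectCommand({ Bucket, Key }));
    console.log("Raw text:\n" + await Body.transformToString());
};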
A number of posters have asked for the equivalent in Java, so here's an example:
package example;

import java.net.URLDecoder;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.S3Event;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.event.S3EventNotification.S3EventNotificationRecord;

public class S3GetTextBody implements RequestHandler<S3Event, String> {

    public String handleRequest(S3Event s3event, Context context) {
        try {
            S3EventNotificationRecord record = s3event.getRecords().get(0);

            // Retrieve the bucket & key for the uploaded S3 object that
            // caused this Lambda function to be triggered
            String bkt = record.getS3().getBucket().getName();
            String key = record.getS3().getObject().getKey().replace('+', ' ');
            key = URLDecoder.decode(key, "UTF-8");

            // Read the source file as text
            AmazonS3 s3Client = new AmazonS3Client();
            String body = s3Client.getObjectAsString(bkt, key);
            System.out.println("Body: " + body);
            return "ok";
        } catch (Exception e) {
            System.err.println("Exception: " + e);
            return "error";
        }
    }
}
Solution 2:
You can use data.Body.toString('ascii')
to get the contents of the text file, assuming the file is ASCII encoded. You can also pass other encoding types to the function. Check out the Node.js Buffer documentation for further details.
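For example, assuming data is the getObject result from Solution 1, a UTF-8 encoded file could be read like this:
// Body is a Node.js Buffer, so any Buffer encoding is accepted
const text = data.Body.toString('utf-8'); // or 'base64', 'hex', ...
console.log(text);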
Solution 3:
I am using a Lambda function with a Python 3.6 environment. The code below reads the contents of a file main.txt inside the bucket my_s3_bucket. Make sure to replace the bucket name and file name according to your needs.
import boto3

s3 = boto3.client('s3')

def lambda_handler(event, context):
    # Fetch the object and read its body; replace the bucket and key with your own
    data = s3.get_object(Bucket='my_s3_bucket', Key='main.txt')
    contents = data['Body'].read().decode('utf-8')
    print(contents)