Scripting an sFTP batch upload every 1 minute

Solution 1:

My first tip would be to name the files using the date and time they were taken. That way you won't need to keep a counter anywhere, which would be difficult in a script that doesn't run continuously, since its variables would get reset on each invocation. You could store the variables in files, but it's easier if you ensure the names won't collide. Something like wget "http://127.0.0.1:8080/?action=snapshot" -O "Snapshot-$(date).jpg" if you are using Bash. (Sorry if the syntax doesn't work, I'm no Bash expert and I'm typing this on my phone.)
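For example, a rough sketch along those lines (the date format here is my own suggestion; it avoids the spaces and colons that a plain $(date) would put into the filename):

    wget "http://127.0.0.1:8080/?action=snapshot" -O "Snapshot-$(date +%Y%m%d-%H%M%S).jpg"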

Like you mentioned, there are several tutorials about scripting FTP uploads available. At least one of them should include an example which uploads files by a pattern, such as "Snapshot-*.jpg", where the wildcard would match the timestamp. Or, you could point the FTP program (such as lftp or ncftp, which have binaries meant for scripting) to upload everything in a certain folder, then wipe the folder if the program succeeded. That way you can run your script as often as you want using cron or a systemd timer, and have it be flexible enough to always retry any files which it didn't succeed with the last time it ran.
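A rough sketch of that idea with lftp over SFTP (the host, user, and paths below are placeholders, not anything from your setup):

    #!/bin/bash
    # Sketch: push everything in a local folder to an SFTP server with lftp,
    # then clear the local folder only if the upload succeeded.
    # Host, user, and paths are placeholders -- substitute your own.
    LOCAL_DIR="/home/pi/snapshots"
    REMOTE_DIR="/upload"

    lftp -e "mirror -R $LOCAL_DIR $REMOTE_DIR; quit" sftp://youruser@your.server.example
    if [ $? -eq 0 ]; then
        rm -f "$LOCAL_DIR"/*.jpg   # wipe local copies only after a clean upload
    fi

A crontab entry like * * * * * /home/pi/upload.sh would then run it every minute, and anything that failed to upload simply gets picked up on the next run.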

There's also software designed to do this task, and more, on its own. One such program, which I've used myself, is simply called "motion" and is available for most distributions. It has built-in motion triggering (record and/or take snapshots) as well as a continuous mode. It can be a bit CPU-intensive on systems like a Raspberry Pi, but it certainly works.

If you want to step it up a bit, perhaps run multiple remote/local cameras, and have the motion detection offloaded to a more powerful central machine, look at Zoneminder. It takes longer to set up, and in my experience it is picky about having the correct resolutions set manually on your camera feeds, but it can be scripted to some degree.

Solution 2:

I would use AWS S3 instead of an FTP server in EC2, and the AWS CLI tool to upload the files. It's a much lighter-weight solution requiring no systems administration, and S3 provides much more durable storage than EC2 volumes.

Tool download: https://aws.amazon.com/cli/

Relevant docs: http://docs.aws.amazon.com/cli/latest/reference/s3/

You can use IAM to create a user that can only upload to the S3 bucket (so the criminals can't erase the files!)
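As a sketch of that IAM piece (the user name, policy name, and bucket name below are placeholders; the policy grants s3:PutObject and nothing else):

    #!/bin/bash
    # Sketch: create an upload-only IAM user for the camera.
    # "camera-uploader", "upload-only" and the bucket name are placeholders.
    cat > upload-only-policy.json <<'EOF'
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": "s3:PutObject",
          "Resource": "arn:aws:s3:::your-bucket-name/*"
        }
      ]
    }
    EOF

    aws iam create-user --user-name camera-uploader
    aws iam put-user-policy --user-name camera-uploader \
        --policy-name upload-only --policy-document file://upload-only-policy.json
    aws iam create-access-key --user-name camera-uploader   # use these keys on the Pi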

I would accomplish this task by making a bash (or perl, node.js, ruby, powershell?, ...) script that calls wget and outputs to a filename with the datetime. Then call aws s3 cp ... in a for loop to upload all of the files in the folder. In the loop, after each successful aws s3 cp call, move that file to an archive folder so it is also kept locally. If you don't want a local archive, use aws s3 mv to auto-magically purge the things that have already been uploaded.
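A rough sketch of that loop (the folder names and bucket are placeholders):

    #!/bin/bash
    # Sketch: upload every snapshot in a folder to S3, archiving each file
    # locally only once its upload has succeeded. Paths and bucket are placeholders.
    SNAP_DIR="/home/pi/snapshots"
    ARCHIVE_DIR="/home/pi/archive"
    BUCKET="s3://your-bucket-name/snapshots"

    mkdir -p "$ARCHIVE_DIR"
    for f in "$SNAP_DIR"/*.jpg; do
        [ -e "$f" ] || continue              # skip if the glob matched nothing
        if aws s3 cp "$f" "$BUCKET/"; then
            mv "$f" "$ARCHIVE_DIR"/          # keep a local copy once uploaded
        fi
    done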

Solution 3:

Gents - big thanks to all who have helped. In part, all of your suggestions have helped me get to the finished result, so I've given you all credit for the replies but have posted my own answer below in the hope it is useful for others. I realise that is not generally the done thing, but in this case there are many parts to the solution, so I've tied them all into one below.

Install the packages needed to use AWS S3:

> sudo apt-get install python-pip
> sudo pip install awscli

Sign up for the AWS S3 service with your own Amazon account: https://aws.amazon.com/s3/

Define a new access key for your user account via 'Access Keys --> Create New Access Key' and download the CSV file when prompted. If you don't do this, you won't be able to use the command-line S3 functions: https://console.aws.amazon.com/iam/home?#security_credential

Open the ROOTKEY.CSV file and copy the AccessKeyID and SecretKey values it contains; you will paste these when prompted by 'aws configure', which you run from the command line before using AWS with Linux.

> aws configure
Enter your access key and secret key when asked. You can leave the third and fourth prompts (default region name and output format) empty or as 'None'.

Test that you can connect and upload a file, using a sample.txt file:

> aws s3 mv ~/SourceFolder/sample.txt s3://NameOfYourAWSS3Bucket/AFolderYouHaveCreated

Download and install mjpg_streamer following the build instructions here: https://github.com/jacksonliam/mjpg-streamer#building--installation

Once done, navigate to its folder:

> cd mjpg_streamer

Start the mjpg streamer:

> mjpg_streamer -i "./input_uvc.so -f 15 -r 1280x960" -o "./output_http.so -w ./www"

Check it is running by visiting the following link in your web browser:

http://127.0.0.1:8080/stream.html

Take a single date-and-time-stamped snapshot (saved to the local directory from which the command is run) with:

> wget "http://127.0.0.1:8080/?action=snapshot" -O "output-$(date +"%Y-%m-%d-%H-%M-%S").jpg"

This will create a file in the directory from which the command was run, called 'output-2016-09-01-22-35-30.jpg' if executed at 22:35:30 on Sept 1st 2016.

Create a new bash script (such as MyScript.sh), give it executable permissions, and copy the content at the bottom into it. When run, it will create a timestamped JPEG every 5 seconds until the current date reaches the specified end date. In this case, it starts on date A and ends on date B. Substitute your own dates.
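If you're unsure about the executable-permissions step, something like this should do it (assuming the script is saved in your home folder as MyScript.sh; run it with ~/MyScript.sh once you've pasted in the content below):

> chmod +x ~/MyScript.sh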

Copy this into the script, substituting the relevant paths:

    #!/bin/bash
    # Grab a snapshot from mjpg_streamer every 5 seconds and move it to S3
    # until the current date reaches the end date.
    SOURCE="/home/YourUser/YourSourceFolder"
    DESTINATION="s3://YourS3Bucket/DestinationFolder"
    input_start=2016-8-29
    input_end=2016-9-9
    startdate=$(date -I -d "$input_start") || exit 1
    enddate=$(date -I -d "$input_end")     || exit 1

    d="$startdate"

    while [ "$d" != "$enddate" ]; do
        sleep 5
        wget "http://127.0.0.1:8080/?action=snapshot" \
            -O "$SOURCE/output-$(date +"%Y-%m-%d-%H-%M-%S").jpg"
        aws s3 mv "$SOURCE" "$DESTINATION" --recursive
        d=$(date -I)    # refresh the current date so the loop can actually end
    done

Suggestions for improvements welcome.

Also, you can check how much storage you are using in AWS S3 with:

aws s3 ls s3://yourbucketname --recursive --human-readable --summarize

I left it running for two hours, firing every 10 seconds, and it generated 74 MB of uploads. I work that out to be about 6.5 GB for a week - less than the pricing tier for the service where the costs kick in, which I think is 8 GB.

Thanks again.