Using moviepy, scipy and numpy in AWS Lambda

I'd like to generate video using AWS Lambda.

I've followed instructions found here and here.

And I now have the following process to build my Lambda function:

Step 1

Fire up an Amazon Linux EC2 instance and run this as root on it:

#! /usr/bin/env bash

# Install the SciPy stack on Amazon Linux and prepare it for AWS Lambda

yum -y update
yum -y groupinstall "Development Tools"
yum -y install blas --enablerepo=epel
yum -y install lapack --enablerepo=epel
yum -y install atlas-sse3-devel --enablerepo=epel
yum -y install Cython --enablerepo=epel
yum -y install python27
yum -y install python27-numpy.x86_64
yum -y install python27-numpy-f2py.x86_64
yum -y install python27-scipy.x86_64

/usr/local/bin/pip install --upgrade pip
mkdir -p /home/ec2-user/stack
/usr/local/bin/pip install moviepy -t /home/ec2-user/stack

cp -R /usr/lib64/python2.7/dist-packages/numpy /home/ec2-user/stack/numpy
cp -R /usr/lib64/python2.7/dist-packages/scipy /home/ec2-user/stack/scipy

tar -czvf stack.tgz /home/ec2-user/stack/*

Step 2

I scp the resulting tarball to my laptop and then run this script to build a zip archive.

#! /usr/bin/env bash

mkdir -p tmp
rm -f lambda.zip
tar -xzf stack.tgz -C tmp

zip -9 lambda.zip process_movie.py
zip -r9 lambda.zip *.ttf
cd tmp/home/ec2-user/stack/
zip -r9 ../../../../lambda.zip *

The process_movie.py script is, at the moment, only a test to check that the stack is OK:

def make_movie(event, context):
    import os
    print(os.listdir('.'))
    print(os.listdir('numpy'))
    try:
        import scipy
    except ImportError:
        print('can not import scipy')

    try:
        import numpy
    except ImportError:
        print('can not import numpy')

    try:
        import moviepy
    except ImportError:
        print('can not import moviepy')

Step 3

Then I upload the resulting archive to S3 to be the source of my Lambda function. When I test the function I get the following stack trace:

START RequestId: 36c62b93-b94f-11e5-9da7-83f24fc4b7ca Version: $LATEST
['tqdm', 'imageio-1.4.egg-info', 'decorator.pyc', 'process_movie.py', 'decorator-4.0.6.dist-info', 'imageio', 'moviepy', 'tqdm-3.4.0.dist-info', 'scipy', 'numpy', 'OpenSans-Regular.ttf', 'decorator.py', 'moviepy-0.2.2.11.egg-info']
['add_newdocs.pyo', 'numarray', '__init__.py', '__config__.pyc', '_import_tools.py', 'setup.pyo', '_import_tools.pyc', 'doc', 'setupscons.py', '__init__.pyc', 'setup.py', 'version.py', 'add_newdocs.py', 'random', 'dual.pyo', 'version.pyo', 'ctypeslib.pyc', 'version.pyc', 'testing', 'dual.pyc', 'polynomial', '__config__.pyo', 'f2py', 'core', 'linalg', 'distutils', 'matlib.pyo', 'tests', 'matlib.pyc', 'setupscons.pyc', 'setup.pyc', 'ctypeslib.py', 'numpy', '__config__.py', 'matrixlib', 'dual.py', 'lib', 'ma', '_import_tools.pyo', 'ctypeslib.pyo', 'add_newdocs.pyc', 'fft', 'matlib.py', 'setupscons.pyo', '__init__.pyo', 'oldnumeric', 'compat']
can not import scipy
'module' object has no attribute 'core': AttributeError
Traceback (most recent call last):
  File "/var/task/process_movie.py", line 91, in make_movie
    import numpy
  File "/var/task/numpy/__init__.py", line 122, in <module>
    from numpy.__config__ import show as show_config
  File "/var/task/numpy/numpy/__init__.py", line 137, in <module>
    import add_newdocs
  File "/var/task/numpy/numpy/add_newdocs.py", line 9, in <module>
    from numpy.lib import add_newdoc
  File "/var/task/numpy/lib/__init__.py", line 13, in <module>
    from polynomial import *
  File "/var/task/numpy/lib/polynomial.py", line 11, in <module>
    import numpy.core.numeric as NX
AttributeError: 'module' object has no attribute 'core'

END RequestId: 36c62b93-b94f-11e5-9da7-83f24fc4b7ca
REPORT RequestId: 36c62b93-b94f-11e5-9da7-83f24fc4b7ca  Duration: 112.49 ms Billed Duration: 200 ms     Memory Size: 1536 MB    Max Memory Used: 14 MB

I can't understand why Python cannot find the core directory that is present in the folder structure.

EDIT:

Following @jarmod's advice, I've reduced the Lambda function to:

def make_movie(event, context):
    print('running make movie')
    import numpy

I now have the following error:

START RequestId: 6abd7ef6-b9de-11e5-8aee-918ac0a06113 Version: $LATEST
running make movie
Error importing numpy: you should not try to import numpy from
        its source directory; please exit the numpy source tree, and relaunch
        your python intepreter from there.: ImportError
Traceback (most recent call last):
  File "/var/task/process_movie.py", line 3, in make_movie
    import numpy
  File "/var/task/numpy/__init__.py", line 127, in <module>
    raise ImportError(msg)
ImportError: Error importing numpy: you should not try to import numpy from
        its source directory; please exit the numpy source tree, and relaunch
        your python intepreter from there.

END RequestId: 6abd7ef6-b9de-11e5-8aee-918ac0a06113
REPORT RequestId: 6abd7ef6-b9de-11e5-8aee-918ac0a06113  Duration: 105.95 ms Billed Duration: 200 ms     Memory Size: 1536 MB    Max Memory Used: 14 MB

I was also following your first link and managed to import numpy and pandas in a Lambda function this way (on Windows):

  1. Started a (free-tier) t2.micro EC2 instance with the 64-bit Amazon Linux AMI 2015.09.1 and used PuTTY to SSH in.
  2. Ran the same commands you used plus the ones recommended by the Amazon article:

    sudo yum -y update
    sudo yum -y upgrade
    sudo yum -y groupinstall "Development Tools"
    sudo yum -y install blas --enablerepo=epel
    sudo yum -y install lapack --enablerepo=epel
    sudo yum -y install Cython --enablerepo=epel
    sudo yum install python27-devel python27-pip gcc
    
  3. Created the virtual environment:

    virtualenv ~/env
    source ~/env/bin/activate
    
  4. Installed the packages:

    sudo ~/env/bin/pip2.7 install numpy
    sudo ~/env/bin/pip2.7 install pandas
    
  5. Then, using WinSCP, I logged in and downloaded everything (except _markerlib, pip*, pkg_resources, setuptools* and easyinstall*) from /home/ec2-user/env/lib/python2.7/dist-packages, and everything from /home/ec2-user/env/lib64/python2.7/site-packages from the EC2 instance.

  6. I put all these folders and files into one zip, along with the .py file containing the Lambda function.

  7. Because this .zip is larger than 10 MB, I created an S3 bucket to store the file. I copied the file's link from there and pasted it into "Upload a .ZIP from Amazon S3" on the Lambda function.

  8. The EC2 instance can be shut down; it's not needed any more.

With this, I could import numpy and pandas. I'm not familiar with moviepy, but scipy might already be tricky, as Lambda limits the size of the unzipped deployment package to 262,144,000 bytes. I'm afraid numpy and scipy together are already over that.
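
For a quick sanity check of the deployed zip, a minimal handler along these lines should do; the file name quick_test.py and the handler name are placeholders of mine, not part of the setup above, so adjust them to whatever you configure in Lambda:

# quick_test.py -- hypothetical sanity-check handler for the packaged zip
from __future__ import print_function

import numpy as np
import pandas as pd


def lambda_handler(event, context):
    # Build a tiny DataFrame to confirm both packages import and actually work.
    df = pd.DataFrame({'x': np.arange(5)})
    print(df.describe())
    return {'numpy': np.__version__, 'pandas': pd.__version__}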


With the help of all the posts in this thread, here is a solution for the record:

To get this to work you'll need to:

  1. Start an EC2 instance with at least 2 GB of RAM (to be able to compile NumPy & SciPy)

  2. Install the needed dependencies

    sudo yum -y update
    sudo yum -y upgrade
    sudo yum -y groupinstall "Development Tools"
    sudo yum -y install blas --enablerepo=epel
    sudo yum -y install lapack --enablerepo=epel
    sudo yum -y install Cython --enablerepo=epel
    sudo yum install python27-devel python27-pip gcc
    virtualenv ~/env
    source ~/env/bin/activate
    pip install scipy
    pip install numpy
    pip install moviepy
    
  3. Copy all the contents of the following directories (except _markerlib, pip*, pkg_resources, setuptools* and easyinstall*) from the EC2 instance into a stack folder on your local machine:

    • /home/ec2-user/env/lib/python2.7/dist-packages
    • /home/ec2-user/env/lib64/python2.7/dist-packages
  4. Get all the required shared libraries from your EC2 instance:

    • libatlas.so.3
    • libf77blas.so.3
    • liblapack.so.3
    • libptf77blas.so.3
    • libcblas.so.3
    • libgfortran.so.3
    • libptcblas.so.3
    • libquadmath.so.0
  5. Put them in a lib subfolder of the stack folder

  6. imageio is a dependency of moviepy; you'll need to download binary versions of its own dependencies, libfreeimage and ffmpeg (they can be found here). Put them at the root of your stack folder and rename libfreeimage-3.16.0-linux64.so to libfreeimage.so

  7. You should now have a stack folder containing:

    • all python dependencies at root
    • all shared libraries in a lib subfolder
    • ffmpeg binary at root
    • libfreeimage.so at root
  8. Zip this folder: zip -r9 stack.zip . -x ".*" -x "*/.*"

  9. Use the following lambda_function.py as the entry point for your Lambda:

    from __future__ import print_function
    
    import os
    import subprocess
    
    SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))
    LIB_DIR = os.path.join(SCRIPT_DIR, 'lib')
    FFMPEG_BINARY = os.path.join(SCRIPT_DIR, 'ffmpeg')
    
    
    def lambda_handler(event, context):
        command = 'LD_LIBRARY_PATH={} IMAGEIO_FFMPEG_EXE={} python movie_maker.py'.format(
            LIB_DIR,
            FFMPEG_BINARY,
        )
        try:
            output = subprocess.check_output(command, shell=True)
            print(output)
        except subprocess.CalledProcessError as e:
            print(e.output)
    
  10. Write a movie_maker.py script that depends on moviepy, numpy, ... (a minimal sketch is given below, after this list)

  11. Add those two scripts to your stack.zip file: zip -r9 stack.zip *.py

  12. Upload the zip to S3 and use it as the source for your Lambda function

You can also download the stack.zip here.
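
For reference, here is a minimal sketch of what movie_maker.py could look like; the frame generator, clip size and output path are placeholders of mine, and it relies on lambda_function.py above having set IMAGEIO_FFMPEG_EXE and LD_LIBRARY_PATH:

# movie_maker.py -- minimal sketch of the worker script (names are placeholders)
import numpy as np
from moviepy.editor import VideoClip

DURATION = 2            # seconds
WIDTH, HEIGHT = 320, 240


def make_frame(t):
    # Produce a frame whose brightness varies with time, just to prove that
    # numpy + moviepy + ffmpeg work together inside Lambda.
    level = int(255 * t / DURATION)
    return np.full((HEIGHT, WIDTH, 3), level, dtype='uint8')


if __name__ == '__main__':
    clip = VideoClip(make_frame, duration=DURATION)
    # /tmp is the only writable location in the Lambda environment.
    clip.write_videofile('/tmp/movie.mp4', fps=24)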


The posts here helped me find a way to compile NumPy against library files that can be included in the AWS Lambda deployment package. This solution does not depend on the LD_LIBRARY_PATH value, unlike @rouk1's solution.

The compiled NumPy library can be downloaded from https://github.com/vitolimandibhrata/aws-lambda-numpy

Here are the steps to custom compile NumPy

Instructions on compiling this package from scratch

Prepare a fresh AWS EC2 instance with Amazon Linux.

Install compiler dependencies

sudo yum -y install python-devel
sudo yum -y install gcc-c++
sudo yum -y install gcc-gfortran
sudo yum -y install libgfortran

Install NumPy dependencies

sudo yum -y install blas
sudo yum -y install lapack
sudo yum -y install atlas-sse3-devel

Create /var/task/lib to contain the runtime libraries

mkdir -p /var/task/lib

/var/task is the root directory where your code will reside in AWS Lambda, so we need to link the required library files against a well-known folder, which in this case is /var/task/lib.

Copy the following library files to /var/task/lib:

cp /usr/lib64/atlas-sse3/liblapack.so.3 /var/task/lib/.
cp /usr/lib64/atlas-sse3/libptf77blas.so.3 /var/task/lib/.
cp /usr/lib64/atlas-sse3/libf77blas.so.3 /var/task/lib/.
cp /usr/lib64/atlas-sse3/libptcblas.so.3 /var/task/lib/.
cp /usr/lib64/atlas-sse3/libcblas.so.3 /var/task/lib/.
cp /usr/lib64/atlas-sse3/libatlas.so.3 /var/task/lib/.
cp /usr/lib64/libgfortran.so.3 /var/task/lib/.
cp /usr/lib64/libquadmath.so.0 /var/task/lib/.

Get the latest numpy source code from http://sourceforge.net/projects/numpy/files/NumPy/

Go to the numpy source code folder, e.g. numpy-1.10.4, and create a site.cfg file with the following entries:

[atlas]
libraries=lapack,f77blas,cblas,atlas
search_static_first=true
runtime_library_dirs = /var/task/lib
extra_link_args = -lgfortran -lquadmath

The -lgfortran and -lquadmath flags are required to link the gfortran and quadmath libraries with the files defined in runtime_library_dirs.

Build NumPy

python setup.py build

Install NumPy

python setup.py install

Check whether the libraries are linked to the files in /var/task/lib

ldd $PYTHON_HOME/lib64/python2.7/site-packages/numpy/linalg/lapack_lite.so

You should see

linux-vdso.so.1 =>  (0x00007ffe0dd2d000)
liblapack.so.3 => /var/task/lib/liblapack.so.3 (0x00007ffad6be5000)
libptf77blas.so.3 => /var/task/lib/libptf77blas.so.3 (0x00007ffad69c7000)
libptcblas.so.3 => /var/task/lib/libptcblas.so.3 (0x00007ffad67a7000)
libatlas.so.3 => /var/task/lib/libatlas.so.3 (0x00007ffad6174000)
libf77blas.so.3 => /var/task/lib/libf77blas.so.3 (0x00007ffad5f56000)
libcblas.so.3 => /var/task/lib/libcblas.so.3 (0x00007ffad5d36000)
libpython2.7.so.1.0 => /usr/lib64/libpython2.7.so.1.0 (0x00007ffad596d000)
libgfortran.so.3 => /var/task/lib/libgfortran.so.3 (0x00007ffad5654000)
libm.so.6 => /lib64/libm.so.6 (0x00007ffad5352000)
libquadmath.so.0 => /var/task/lib/libquadmath.so.0 (0x00007ffad5117000)
libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00007ffad4f00000)
libc.so.6 => /lib64/libc.so.6 (0x00007ffad4b3e000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x00007ffad4922000)
libdl.so.2 => /lib64/libdl.so.2 (0x00007ffad471d000)
libutil.so.1 => /lib64/libutil.so.1 (0x00007ffad451a000)
/lib64/ld-linux-x86-64.so.2 (0x000055cfc3ab8000)
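
To verify from inside Lambda that this build really loads without LD_LIBRARY_PATH, a handler sketch like the one below can be used (the file and handler names are hypothetical); the linalg call forces lapack_lite.so, and therefore the libraries shipped in /var/task/lib, to load:

# Minimal check that the custom-built numpy resolves its shared libraries
# from /var/task/lib without any LD_LIBRARY_PATH being set.
from __future__ import print_function

import numpy as np


def lambda_handler(event, context):
    # numpy.linalg goes through lapack_lite.so, which pulls in the
    # BLAS/LAPACK shared objects from /var/task/lib.
    a = np.random.rand(3, 3)
    err = np.abs(a.dot(np.linalg.inv(a)) - np.eye(3)).max()
    print('numpy', np.__version__, 'max error', err)
    return {'numpy': np.__version__, 'max_error': float(err)}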

Another, very simple method that's possible these days is to build using the awesome docker containers that LambCI made to mimic Lambda: https://github.com/lambci/docker-lambda

The lambci/lambda:build container resembles AWS Lambda with the addition of a mostly-complete build environment. To start a shell session in it:

docker run -v "$PWD":/var/task -it lambci/lambda:build bash

Inside the session:

export share=/var/task
easy_install pip
pip install -t $share numpy

Or, with virtualenv:

export share=/var/task
export PS1="[\u@\h:\w]\$ " # required by virtualenv
easy_install pip
pip install virtualenv
# make the venv, install numpy, and copy it to $share
# (the installed package may live under lib64/ instead of lib/)
virtualenv /tmp/venv
. /tmp/venv/bin/activate
pip install numpy
cp -r /tmp/venv/lib*/python2.7/site-packages/numpy "$share"

Later on you can use the main lambci/lambda container to test your build.
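
For example, a tiny handler dropped next to the installed packages (handler.py below is a hypothetical name) can be run with something like docker run --rm -v "$PWD":/var/task lambci/lambda:python2.7 handler.handler; check the docker-lambda README for the exact invocation for your runtime:

# handler.py -- hypothetical smoke test for the build, run via lambci/lambda
from __future__ import print_function

import platform

import numpy as np


def handler(event, context):
    # Confirms that the numpy installed from the build container also
    # imports cleanly in the Lambda-like runtime container.
    print('python', platform.python_version())
    print('numpy', np.__version__)
    return {'python': platform.python_version(), 'numpy': np.__version__}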