How do I install pyspark for use in standalone scripts?
From Spark 2.2.0 onwards, use
pip install pyspark
to install PySpark on your machine.
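Once it is installed, a minimal standalone script can look like the sketch below (the app name and sample data are just placeholders):

from pyspark.sql import SparkSession

# build a local SparkSession; "local[*]" uses all available cores
spark = SparkSession.builder.master("local[*]").appName("standalone-example").getOrCreate()

# tiny DataFrame just to confirm everything is wired up
df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "value"])
df.show()

spark.stop()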
For older versions, follow the steps below. Add the PySpark lib to the Python path in your .bashrc:
export PYTHONPATH=$SPARK_HOME/python/:$PYTHONPATH
Also, don't forget to set SPARK_HOME. PySpark depends on the py4j Python package, so install it as follows:
pip install py4j
For more details about standalone PySpark applications, refer to this post.
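As a quick sanity check of the setup above, a script along these lines (the master and app name are only illustrative) should run once SPARK_HOME and PYTHONPATH are exported:

from pyspark import SparkContext

# "local" runs Spark in-process; the second argument is an arbitrary app name
sc = SparkContext("local", "pythonpath-check")
print(sc.parallelize([1, 2, 3, 4]).sum())  # prints 10
sc.stop()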
I installed PySpark for use in standalone scripts by following a guide. The steps are:
export SPARK_HOME="/opt/spark"
export PYTHONPATH=$SPARK_HOME/python/:$PYTHONPATH
Then you need to install py4j:
pip install py4j
To try it:
./bin/spark-submit --master local[8] <python_file.py>
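For reference, a minimal python_file.py to hand to that spark-submit command could be something like this sketch (names are placeholders):

from pyspark import SparkContext

# no master is hard-coded here; spark-submit supplies it (local[8] above)
sc = SparkContext(appName="submit-example")
counts = sc.parallelize(["a", "b", "a"]).map(lambda w: (w, 1)).reduceByKey(lambda x, y: x + y)
print(counts.collect())
sc.stop()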
As of Spark 2.2, PySpark is now available on PyPI. Thanks @Evan_Zamir.
pip install pyspark
As of Spark 2.1, you just need to download Spark and run setup.py:
cd my-spark-2.1-directory/python/
python setup.py install # or pip install -e .
There is also a ticket for adding it to PyPI.
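To confirm the install is picked up, something as small as this should work:

# prints the installed PySpark version
import pyspark
print(pyspark.__version__)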
You can set the PYTHONPATH manually as you suggest, and this may be useful to you when testing stand-alone non-interactive scripts on a local installation.
However, (py)spark is all about distributing your jobs to nodes on clusters. Each cluster has a configuration that defines a manager and many parameters; the details of setting this up are here, and include a simple local cluster (which may be useful for testing functionality).
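For illustration, the manager can also be selected in code through SparkConf; "local[2]" below gives the simple local cluster mentioned above, and the spark:// URL in the comment is a placeholder for a real cluster master:

from pyspark import SparkConf, SparkContext

# swap "local[2]" for e.g. "spark://your-master-host:7077" to target a cluster manager
conf = SparkConf().setMaster("local[2]").setAppName("cluster-config-demo")
sc = SparkContext(conf=conf)
print(sc.parallelize(range(100)).count())
sc.stop()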
In production, you will be submitting tasks to Spark via spark-submit, which distributes your code to the cluster nodes and establishes the context for it to run in on those nodes. You do, however, need to make sure that the Python installations on the nodes have all the required dependencies (the recommended way), or that the dependencies are shipped along with your code (I don't know how that works).
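One common way to pass dependencies along with your code is spark-submit's --py-files option, which accepts a comma-separated list of .zip, .egg, or .py files to ship to the executors; the file names here are placeholders:
./bin/spark-submit --master local[8] --py-files dependencies.zip <python_file.py>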