Installing SparkR
I have the latest version of R, 3.2.1, and I want to install SparkR. When I execute:
> install.packages("SparkR")
I get back:
Installing package into ‘/home/user/R/x86_64-pc-linux-gnu-library/3.2’
(as ‘lib’ is unspecified)
Warning in install.packages :
package ‘SparkR’ is not available (for R version 3.2.1)
I have also installed Spark on my machine (Spark 1.4.0).
How can I solve this problem?
Solution 1:
You can install directly from a GitHub repository:
if (!require('devtools')) install.packages('devtools')
devtools::install_github('apache/spark@v2.x.x', subdir='R/pkg')
You should choose the tag (v2.x.x above) corresponding to the version of Spark you use. You can find a full list of tags on the project page or directly from R using the GitHub API:
jsonlite::fromJSON("https://api.github.com/repos/apache/spark/tags")$name
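To illustrate matching the tag to an installed Spark version, here is a small shell sketch; spark_tag is a hypothetical helper of mine, not part of Spark's tooling:

```shell
# Sketch: map a Spark version string (as reported by spark-submit --version)
# to the corresponding GitHub tag name. spark_tag is a hypothetical helper.
spark_tag() {
  case "$1" in
    [0-9]*.[0-9]*.[0-9]*) printf 'v%s\n' "$1" ;;  # e.g. 2.4.1 -> v2.4.1
    *) echo "unrecognized version: $1" >&2; return 1 ;;
  esac
}

spark_tag "2.4.1"   # prints v2.4.1
```

The resulting tag is what you would pass to install_github, e.g. 'apache/spark@v2.4.1'.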
If you've downloaded the binary package from the downloads page, the R library is in the R/lib/SparkR subdirectory. It can be used to install SparkR directly. For example:
$ export SPARK_HOME=/path/to/spark/directory
$ cd $SPARK_HOME/R/pkg/
$ R -e "devtools::install('.')"
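Before installing from the distribution, it can help to sanity-check that the directory really contains the bundled SparkR sources; check_spark_home below is a hypothetical helper, assuming the R/pkg layout described above:

```shell
# Sketch: verify that a directory looks like a Spark distribution that
# bundles the SparkR package sources (R/pkg/DESCRIPTION) before installing
# from it. check_spark_home is a hypothetical helper, not part of Spark.
check_spark_home() {
  if [ -f "$1/R/pkg/DESCRIPTION" ]; then
    echo "ok: SparkR sources found under $1"
  else
    echo "no R/pkg/DESCRIPTION under $1" >&2
    return 1
  fi
}
```

If the check fails, the download is likely a "without Hadoop" or source archive that needs the R package built separately.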
You can also add the R lib to .libPaths (taken from here):
Sys.setenv(SPARK_HOME='/path/to/spark/directory')
.libPaths(c(file.path(Sys.getenv('SPARK_HOME'), 'R', 'lib'), .libPaths()))
Finally, you can use the sparkR shell without any additional steps:
$ /path/to/spark/directory/bin/sparkR
Edit
According to the Spark 2.1.0 Release Notes, the package should be available on CRAN in the future:
Standalone installable package built with the Apache Spark release. We will be submitting this to CRAN soon.
You can follow SPARK-15799 to check the progress.
Edit 2
While SPARK-15799 has been merged, satisfying CRAN requirements proved to be challenging (see for example discussions about 2.2.2, 2.3.1, 2.4.0), and the package has subsequently been removed (see for example SparkR was removed from CRAN on 2018-05-01, CRAN SparkR package removed?). As a result, the methods listed in the original post are still the most reliable solutions.
Edit 3
OK, SparkR is back up on CRAN again as v2.4.1. install.packages('SparkR') should work again (it may take a couple of days for the mirrors to reflect this).
Solution 2:
SparkR requires not just an R package but an entire Spark backend to be pulled in. When you want to upgrade SparkR, you are upgrading Spark, not just the R package. If you want to go with SparkR then this blogpost might help you out: https://blog.rstudio.org/2015/07/14/spark-1-4-for-rstudio/.
It should be said, though: nowadays you may want to look at the sparklyr package instead, as it makes all of this a whole lot easier.
install.packages("devtools")
devtools::install_github("rstudio/sparklyr")
library(sparklyr)
spark_install(version = "1.6.2")
# or
spark_install(version = "2.0.0")
It also offers more functionality than SparkR, as well as a very nice interface to dplyr.
Solution 3:
I also faced a similar issue while trying to play with SparkR in EMR with Spark 2.0.0. I'll post the steps here that I followed to install RStudio Server, SparkR, and sparklyr, and finally connect to a Spark session in an EMR cluster:
- Install RStudio Server: after the EMR cluster is up and running, ssh to the master node as user 'hadoop' and download RStudio Server:
wget https://download2.rstudio.org/rstudio-server-rhel-0.99.903-x86_64.rpm
then install using yum install
sudo yum install --nogpgcheck rstudio-server-rhel-0.99.903-x86_64.rpm
finally add a user to access the RStudio web console:
sudo su
useradd username
echo username:password | chpasswd
- To access the RStudio web console, you need to create an SSH tunnel from your machine to the EMR master node, like below:
ssh -NL 8787:ec2-emr-master-node-ip.compute-1.amazonaws.com:8787 hadoop@ec2-emr-master-node-ip.compute-1.amazonaws.com &
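The shape of that tunnel command can be sketched with a tiny helper, so the master hostname only needs to be written once; make_tunnel_cmd is a hypothetical helper, and the hostname is the same placeholder used above:

```shell
# Sketch: print the SSH command that forwards a local port to RStudio's
# port 8787 on the EMR master node. make_tunnel_cmd is a hypothetical helper.
make_tunnel_cmd() {
  master="$1"        # EMR master node public DNS name
  port="${2:-8787}"  # local port to bind (defaults to 8787)
  printf 'ssh -NL %s:%s:8787 hadoop@%s &\n' "$port" "$master" "$master"
}

make_tunnel_cmd "ec2-emr-master-node-ip.compute-1.amazonaws.com"
```

-N runs no remote command and -L binds the local port, so the process just holds the tunnel open in the background.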
- Now open any browser and go to localhost:8787 to reach the RStudio web console, and use the username:password combo to log in.
- To install the required R packages, you need to install libcurl on the master node first, like below:
sudo yum update
sudo yum -y install libcurl-devel
- Resolve HDFS permission issues (replace username with the user created above):
sudo -u hdfs hadoop fs -mkdir /user/username
sudo -u hdfs hadoop fs -chown username /user/username
- Check the Spark version in EMR and set SPARK_HOME:
spark-submit --version
export SPARK_HOME='/usr/lib/spark/'
- Now in the RStudio console, install SparkR and sparklyr like below:
install.packages('devtools')
devtools::install_github('apache/spark@v2.0.0', subdir='R/pkg')
install.packages('sparklyr')
library(SparkR)
library(sparklyr)
Sys.setenv(SPARK_HOME='/usr/lib/spark')
sc <- spark_connect(master = "yarn-client")