Pyspark: Write to AWS S3 error: S3AFileSystem not found [duplicate]

I was able to address the above by making sure I had the correct version of the hadoop-aws jar for the Hadoop version my Spark build was running, downloading the matching version of aws-java-sdk, and lastly downloading the jets3t dependency jar.


In /opt/spark/jars:

sudo wget https://repo1.maven.org/maven2/com/amazonaws/aws-java-sdk/1.11.30/aws-java-sdk-1.11.30.jar
sudo wget https://repo1.maven.org/maven2/org/apache/hadoop/hadoop-aws/2.7.3/hadoop-aws-2.7.3.jar
sudo wget https://repo1.maven.org/maven2/net/java/dev/jets3t/jets3t/0.9.4/jets3t-0.9.4.jar

Testing it out in spark-shell:

scala> sc.hadoopConfiguration.set("fs.s3n.awsAccessKeyId", [ACCESS KEY ID])
scala> sc.hadoopConfiguration.set("fs.s3n.awsSecretAccessKey", [SECRET ACCESS KEY] )
scala> val myRDD = sc.textFile("s3n://adp-px/baby-names.csv")
scala> myRDD.count()
res2: Long = 49
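
Since the question is about PySpark, here is a roughly equivalent check from the pyspark shell (a sketch under the same setup; the bucket and file are the ones from the Scala example, and the bracketed values are placeholders):

sc._jsc.hadoopConfiguration().set("fs.s3n.awsAccessKeyId", "[ACCESS KEY ID]")
sc._jsc.hadoopConfiguration().set("fs.s3n.awsSecretAccessKey", "[SECRET ACCESS KEY]")
myRDD = sc.textFile("s3n://adp-px/baby-names.csv")
print(myRDD.count())  # should print the same row count as the Scala shell (49 here)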

If S3 access is via assume_role from a local cluster, then the following worked for me.

import boto3
import pyspark
from pyspark import SparkContext

session = boto3.session.Session(profile_name='profile_name')
sts_connection = session.client('sts')
response = sts_connection.assume_role(RoleArn='arn:aws:iam:::role/role_name', RoleSessionName='role_name',DurationSeconds=3600)
credentials = response['Credentials']

conf = pyspark.SparkConf()

conf.set('spark.jars.packages', 'org.apache.hadoop:hadoop-aws:3.2.0')  # cross-check the version against your Hadoop build

sc = SparkContext(conf=conf)
# TemporaryAWSCredentialsProvider is needed because STS returns a session token.
sc._jsc.hadoopConfiguration().set('fs.s3a.aws.credentials.provider', 'org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider')
sc._jsc.hadoopConfiguration().set('fs.s3a.access.key', credentials['AccessKeyId'])
sc._jsc.hadoopConfiguration().set('fs.s3a.secret.key', credentials['SecretAccessKey'])
sc._jsc.hadoopConfiguration().set('fs.s3a.session.token', credentials['SessionToken'])
url = 's3a://data.csv'

# Read the object and print just the first record as a quick sanity check.
l1 = sc.textFile(url).collect()
for each in l1:
    print(str(each))
    break
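
The same temporary credentials also work with the DataFrame API. A minimal sketch on top of the context configured above (the path is the same placeholder, and the header option is just illustrative):

from pyspark.sql import SparkSession

# getOrCreate() reuses the already-configured SparkContext.
spark = SparkSession.builder.getOrCreate()

df = spark.read.option("header", "true").csv(url)
df.show(5)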

Also keep the proper versions of the jars below in $SPARK_HOME/jars:

  1. jets3t
  2. aws-java-sdk
  3. hadoop-aws
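
A quick way to confirm those jars are actually in place (a small sketch; it assumes SPARK_HOME is set and falls back to /opt/spark otherwise):

import glob
import os

spark_home = os.environ.get("SPARK_HOME", "/opt/spark")
for name in ("jets3t", "aws-java-sdk", "hadoop-aws"):
    matches = glob.glob(os.path.join(spark_home, "jars", name + "*.jar"))
    print(name, "->", [os.path.basename(m) for m in matches] or "MISSING")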

I prefer to delete unwanted jars from ~/.ivy2/jars


Each Hadoop version must be matched with the right aws-java-sdk-*.jar and hadoop-aws-*.jar.

The aws-java-sdk version that pairs with a given hadoop-aws version does not carry the same number. For example, aws-java-sdk-bundle-1.11.375.jar and hadoop-aws-3.2.0.jar are a matched pair.
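
One way to avoid guessing the pair is to let dependency resolution do it: hadoop-aws declares the matching aws-java-sdk-bundle as its own dependency, so requesting only hadoop-aws pulls the paired SDK transitively. A minimal sketch (the app name is arbitrary, and the hadoop-aws version should still match your Spark build's Hadoop version):

from pyspark.sql import SparkSession

# Ivy resolves aws-java-sdk-bundle 1.11.375 transitively for hadoop-aws 3.2.0.
spark = (
    SparkSession.builder
    .appName("s3a-version-check")
    .config("spark.jars.packages", "org.apache.hadoop:hadoop-aws:3.2.0")
    .getOrCreate()
)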

Lastly, you should register the S3 domain in the hive.cnf configuration file.