Joining many large files on AWS

I am looking for advice on which service I should use. I am new to big data and confused by the differences between the AWS offerings.

Use case:

I receive 60-100 CSV files daily (each ranging from a few MB to a few GB). There are six corresponding schemas, and each file belongs to exactly one table. I need to load those files into the six database tables, execute joins between them, and generate a daily output. After the output is generated, the data in the database is no longer needed, so the tables can be truncated until the next day.

Files have predictable naming patterns:

  • A_<timestamp1>.csv goes to A table
  • A_<timestamp2>.csv goes to A table
  • B_<timestamp1>.csv goes to B table
  • etc ...
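
To make the daily flow concrete, here is a rough PySpark sketch of what I have in mind (the bucket path, prefixes, and join key are made-up placeholders, not my real names):

```python
# Minimal sketch: load each day's CSVs into one DataFrame per table based on
# the filename prefix, then join. All names below are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("daily-join").getOrCreate()

BUCKET = "s3://my-bucket/incoming/2024-01-01"    # hypothetical daily drop location
PREFIXES = ["A", "B", "C", "D", "E", "F"]        # the six schemas

# Spark accepts glob patterns, so A_*.csv picks up every timestamped A file.
tables = {
    p: spark.read.option("header", "true").option("inferSchema", "true")
            .csv(f"{BUCKET}/{p}_*.csv")
    for p in PREFIXES
}

# Example join; the real join keys depend on the schemas.
daily_output = tables["A"].join(tables["B"], on="id", how="inner")
daily_output.write.mode("overwrite").parquet(f"{BUCKET}/output/")
```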

Which service could be used for that purpose?

  • AWS Redshift (execute the joins there)
  • AWS Glue (load to Redshift)
  • AWS EMR (Spark)

Or maybe something else? I heard that Spark could be used to do the joins, but what is the proper, optimal, and performant way of doing that?

Edit:

Thanks for the responses. I see two options for now:

  1. Use AWS Glue: set up six crawlers that, on a trigger, catalog the incoming files into the AWS Glue Data Catalog, then execute the SQL joins with Athena (see the Athena sketch after this list).
  2. Use AWS Glue: set up six crawlers that catalog the files the same way, then trigger a Spark job (AWS Glue in its serverless form) to do the SQL joins and write the output to S3 (see the Glue job sketch after this list).
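
For option 1, I imagine the join step looking roughly like this (a sketch using boto3; the region, database, table, column, and bucket names are placeholders):

```python
# Rough sketch of option 1: once the crawlers have catalogued the tables,
# run the join as an Athena query from Python with boto3.
import boto3

athena = boto3.client("athena", region_name="eu-west-1")  # hypothetical region

SQL = """
SELECT a.id, a.value, b.other_value
FROM a_table a
JOIN b_table b ON a.id = b.id
"""

response = athena.start_query_execution(
    QueryString=SQL,
    QueryExecutionContext={"Database": "daily_files_db"},   # Glue Data Catalog database
    ResultConfiguration={"OutputLocation": "s3://my-bucket/athena-results/"},
)
print(response["QueryExecutionId"])
```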
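
For option 2, the Glue job itself would look roughly like this (again just a sketch; database, table names, join key, and output path are placeholders):

```python
# Rough sketch of option 2: an AWS Glue (serverless Spark) job that reads the
# crawled tables from the Data Catalog, joins them, and writes the result to S3.
from pyspark.context import SparkContext
from awsglue.context import GlueContext

glue_context = GlueContext(SparkContext.getOrCreate())

# DynamicFrames come straight from the Glue Data Catalog that the crawlers populate.
a = glue_context.create_dynamic_frame.from_catalog(
    database="daily_files_db", table_name="a_table").toDF()
b = glue_context.create_dynamic_frame.from_catalog(
    database="daily_files_db", table_name="b_table").toDF()

# Plain Spark join once the data is in DataFrames.
daily_output = a.join(b, on="id", how="inner")

daily_output.write.mode("overwrite").parquet("s3://my-bucket/daily-output/")
```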

Edit 2:

But according to this article: https://carbonrmp.com/knowledge-hub/tech-engineering/athena-vs-spark-lessons-from-implementing-a-fully-managed-query-system/

Presto is designed for low latency and uses a massively parallel processing (MPP) approach which is fast but requires everything to happen at once and in memory. It’s all or nothing, if you run out of memory, then “Query exhausted resources at this scale factor”. Spark is designed for scalability and follows a map-reduce design [1]. The job is split and processed in chunks, which are generally processed in batches. If you double the workload without changing the resource, it should take twice as long instead of failing [2]

So Athena (i.e. Presto) is not as scalable as I need. I have already seen "Query exhausted resources at this scale factor" in my case.


Any possibility of changing the file type to a columnar format like Parquet? Then you can use AWS EMR, and Spark should be able to handle the joins easily. Obviously, you still need to optimize the query depending on the data and cluster size, etc.
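
For example, a minimal sketch of that approach with PySpark on EMR (the S3 paths and join key are placeholders):

```python
# Read the raw CSVs once, rewrite them as Parquet, then run the joins against
# the Parquet copies. Paths and join keys are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("csv-to-parquet-join").getOrCreate()

# Convert each of the six file groups to Parquet.
for prefix in ["A", "B", "C", "D", "E", "F"]:
    (spark.read.option("header", "true").csv(f"s3://my-bucket/raw/{prefix}_*.csv")
          .write.mode("overwrite").parquet(f"s3://my-bucket/parquet/{prefix}/"))

# Join against the columnar copies.
a = spark.read.parquet("s3://my-bucket/parquet/A/")
b = spark.read.parquet("s3://my-bucket/parquet/B/")
(a.join(b, on="id", how="inner")
  .write.mode("overwrite").parquet("s3://my-bucket/parquet/daily_output/"))
```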