New posts in apache-spark
Understanding Spark's caching
Tags: apache-spark
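As a quick illustration of the caching topic above (not taken from the original post; the data and app name are invented), this PySpark sketch shows the usual cache()/persist() pattern:

```python
# Illustrative sketch only: caching a DataFrame in PySpark.
from pyspark.sql import SparkSession
from pyspark import StorageLevel

spark = SparkSession.builder.appName("caching-demo").getOrCreate()

df = spark.range(1_000_000).withColumnRenamed("id", "value")

# cache() is lazy: the data is only materialized by the first action (count()).
df.cache()
df.count()                            # first action populates the cache
df.filter("value % 2 = 0").count()    # later actions reuse the cached partitions

# persist() takes an explicit storage level, e.g. memory with disk spill.
df.unpersist()
df.persist(StorageLevel.MEMORY_AND_DISK)
```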
What's the meaning of "Locality Level" on a Spark cluster?
Tags: cluster-computing, apache-spark
How to get other columns when using Spark DataFrame groupby?
Tags: sql, apache-spark, dataframe, apache-spark-sql
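One common pattern for the groupby question above is to aggregate and then join the result back to the original DataFrame so the non-grouped columns are preserved. A hedged sketch with invented column names (dept, name, salary):

```python
# Sketch: keep other columns after a groupBy by joining the aggregate back.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

df = spark.createDataFrame(
    [("sales", "alice", 100), ("sales", "bob", 80), ("hr", "carol", 90)],
    ["dept", "name", "salary"],
)

# Max salary per department, joined back so every original column survives.
max_per_dept = df.groupBy("dept").agg(F.max("salary").alias("salary"))
top_rows = df.join(max_per_dept, on=["dept", "salary"], how="inner")
top_rows.show()
```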
PySpark dataframe column value dependent on value from another row
Tags: dataframe, apache-spark, pyspark, apache-spark-sql
Total size of serialized results of 16 tasks (1048.5 MB) is bigger than spark.driver.maxResultSize (1024.0 MB)
Tags: python, apache-spark, pyspark, spark-dataframe
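The error above names the spark.driver.maxResultSize setting. A minimal sketch of raising it when the session is built (the value is only an example; the usual alternative is to avoid collecting large results to the driver at all):

```python
# Sketch: raise spark.driver.maxResultSize at session creation time.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("maxResultSize-demo")
    # "0" means unlimited; a concrete bound such as "4g" is usually safer.
    .config("spark.driver.maxResultSize", "4g")
    .getOrCreate()
)
```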
Structured streaming schema from Kafka JSON - query error
Tags: apache-spark, pyspark, apache-kafka, apache-spark-sql, spark-structured-streaming
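For the Kafka/JSON schema topic above, the standard approach is to declare the schema explicitly and parse the Kafka value column with from_json. This sketch is illustrative only: the broker address, topic name, and schema fields are assumptions, and the spark-sql-kafka connector package must be on the classpath.

```python
# Sketch: parse JSON messages from Kafka with an explicit schema.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, LongType

spark = SparkSession.builder.getOrCreate()

schema = StructType([
    StructField("device", StringType()),
    StructField("ts", LongType()),
])

raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "events")
    .load()
)

# Kafka delivers value as binary; cast to string, then apply the schema.
parsed = (
    raw.select(F.from_json(F.col("value").cast("string"), schema).alias("data"))
       .select("data.*")
)

query = parsed.writeStream.format("console").start()
query.awaitTermination()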
Spark doesn't recognize the column name in a SQL query although it can output it to a dataset
Tags: postgresql, scala, apache-spark, apache-spark-sql
I want to count cumulatively the number of previous repeating values [duplicate]
Tags: dataframe, scala, apache-spark, apache-spark-sql
How can I connect to a PostgreSQL database from Apache Spark using Scala?
Tags: scala, apache-spark, psql
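The post above asks about Scala; the JDBC options are identical there, shown here in PySpark for consistency with the other sketches. URL, table, and credentials are placeholders, and the PostgreSQL JDBC driver jar must be available (e.g. via --jars or spark.jars.packages).

```python
# Sketch: read a PostgreSQL table over JDBC.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

df = (
    spark.read
    .format("jdbc")
    .option("url", "jdbc:postgresql://localhost:5432/mydb")
    .option("dbtable", "public.my_table")
    .option("user", "spark_user")
    .option("password", "secret")
    .option("driver", "org.postgresql.Driver")
    .load()
)
```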
PySpark window functions (lead, lag) in Synapse Workspace
Tags: python, dataframe, apache-spark, pyspark, apache-spark-sql
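A hedged example of lead/lag window functions in PySpark for the topic above (column names invented); the same calls work in a Synapse Spark pool since it runs standard Spark SQL.

```python
# Sketch: lag() and lead() over a partitioned, ordered window.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.getOrCreate()

df = spark.createDataFrame(
    [("a", 1, 10), ("a", 2, 20), ("a", 3, 15), ("b", 1, 5)],
    ["key", "step", "value"],
)

w = Window.partitionBy("key").orderBy("step")

df.select(
    "key", "step", "value",
    F.lag("value", 1).over(w).alias("prev_value"),
    F.lead("value", 1).over(w).alias("next_value"),
).show()
```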
Accessing nested data with key/value pairs in an array
Tags: json, dataframe, apache-spark, pyspark, apache-spark-sql
Get the size/length of an array column
Tags: scala, apache-spark, apache-spark-sql
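For the array-length topic above, the built-in size function returns the number of elements in an array (or map) column. Shown in PySpark; the Scala org.apache.spark.sql.functions.size suggested by the post's tags behaves the same. The column name is invented.

```python
# Sketch: length of an array column via functions.size.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

df = spark.createDataFrame([(["a", "b", "c"],), (["x"],)], ["letters"])

df.select("letters", F.size("letters").alias("letters_len")).show()
```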
Apache Spark: Differences between client and cluster deploy modes
Tags: apache-spark, apache-spark-standalone
Spark SQL Row_number() PartitionBy Sort Desc
Tags: python, apache-spark, pyspark, apache-spark-sql, window-functions
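A minimal sketch of row_number() with a descending sort inside each partition, matching the title above; the data and column names are made up.

```python
# Sketch: row_number() over a window partitioned by grp, ordered by score desc.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.getOrCreate()

df = spark.createDataFrame(
    [("a", 3), ("a", 7), ("b", 1), ("b", 9)],
    ["grp", "score"],
)

w = Window.partitionBy("grp").orderBy(F.desc("score"))

df.withColumn("rn", F.row_number().over(w)).show()
```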
Spark 2.1.0 session config settings (PySpark)
Tags: python, apache-spark, pyspark, spark-dataframe
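A sketch of the two usual ways to set session configuration in PySpark for the topic above (the post targets Spark 2.1.0, where SparkSession is already the entry point); the setting and values are examples only.

```python
# Sketch: config at session build time vs. at runtime.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("config-demo")
    # Set at build time; required for settings read when the context starts.
    .config("spark.sql.shuffle.partitions", "200")
    .getOrCreate()
)

# Runtime SQL settings can also be changed on an existing session.
spark.conf.set("spark.sql.shuffle.partitions", "64")
print(spark.conf.get("spark.sql.shuffle.partitions"))
```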
Doing multiple column value lookups after joining with a lookup dataset
Tags: scala, apache-spark, apache-spark-sql, spark-streaming
Why is join not possible after show operator?
Tags: scala, apache-spark, join, apache-spark-sql
Add a number-of-days column to a date column in the same DataFrame in a Spark Scala app
Tags: scala, apache-spark, dataframe, dateadd
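A hedged sketch for the date-plus-days topic above (the post targets Scala, but the functions are the same); the column names start_date and num_days are invented. expr("date_add(...)") is used because the SQL form accepts a column of day counts across Spark versions.

```python
# Sketch: add a per-row number of days to a date column.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

df = spark.createDataFrame(
    [("2024-01-01", 10), ("2024-02-15", 3)],
    ["start_date", "num_days"],
).withColumn("start_date", F.to_date("start_date"))

df.withColumn("end_date", F.expr("date_add(start_date, num_days)")).show()
```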
Why does Spark SQL consider the support of indexes unimportant?
Tags: sql, apache-spark, apache-spark-sql, in-memory-database
Spark: "Truncated the string representation of a plan since it was too large." Warning when using manually created aggregation expression
apache-spark
spark-dataframe