Databricks - transfer data from one databricks workspace to another

How can I transform my data in databricks workspace 1 (DBW1) and then push it (send/save the table) to another databricks workspace (DBW2)?

On DBW1 I installed this JDBC driver.

Then I tried:

(df.write
 .format("jdbc")
 .options(
   url="jdbc:spark://<DBW2-url>:443/default;transportMode=http;ssl=1;httpPath=<http-path-of-cluster>;AuthMech=3;UID=<uid>;PWD=<pat>",
   driver="com.simba.spark.jdbc.Driver",
   dbtable="default.fromDBW1"
 )
 .save()
)

However, when I run it I get:

java.sql.SQLException: [Simba][SparkJDBCDriver](500051) ERROR processing query/statement. Error Code: 0, SQL state: org.apache.hive.service.cli.HiveSQLException: Error running query: org.apache.spark.sql.catalyst.parser.ParseException: 

How can I do this correctly?

Note: each DBW is in a different subscription.


From my point of view, the more scalable way would be to write directly into ADLS instead of using JDBC. But this needs to be done as follows:

  • You need to have a separate storage account for your data. In any case, using the DBFS root for storing the actual data isn't recommended, as it isn't accessible from outside the workspace - that makes things like migration more complicated.

  • You need to have a way to access that storage account (ADLS or Blob storage). You can access the data directly (via abfss:// or wasbs:// URLs) - see the write sketch after this list.

  • In the target workspace you just create a table on top of the data you have written - a so-called unmanaged (external) table. Just do (see doc):

create table <name>
using delta
location 'path_or_url_to_data'
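
For illustration, here is a minimal PySpark sketch of the DBW1 side, run in a Databricks notebook. The storage account name, container, and secret scope/key below are placeholders - adapt the authentication (access key, SAS, or service principal) to your setup:

# DBW1: write the transformed DataFrame straight to ADLS as a Delta table.
# "mystorageaccount", "shared-data" and the secret scope/key are placeholders.
storage_account = "mystorageaccount"
container = "shared-data"

spark.conf.set(
    f"fs.azure.account.key.{storage_account}.dfs.core.windows.net",
    dbutils.secrets.get(scope="my-scope", key="storage-key"),
)

target_path = f"abfss://{container}@{storage_account}.dfs.core.windows.net/fromDBW1"

(df.write                      # df is the DataFrame you transformed in DBW1
   .format("delta")
   .mode("overwrite")
   .save(target_path))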
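
And on the DBW2 side, once that workspace has access to the same storage account, the CREATE TABLE above can equally be issued from a notebook (using the same placeholder path as in the sketch above):

# DBW2: register the files written by DBW1 as an unmanaged (external) table
# and read them back. The path must match the one used in DBW1.
target_path = "abfss://shared-data@mystorageaccount.dfs.core.windows.net/fromDBW1"

spark.sql(f"""
    CREATE TABLE IF NOT EXISTS default.fromDBW1
    USING DELTA
    LOCATION '{target_path}'
""")

spark.table("default.fromDBW1").show()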