Would it be possible to automatically split a table into several files based on column values if I don't know how many different key values the table contains? Is it possible to put the key value into the filename?


This is our top ask (and it has been asked on Stack Overflow before too :). We are currently working on it and hope to have it available by summer.

Until then you have to write a script generator. I tend to use U-SQL to generate the script, but you could do it with PowerShell or T4, etc.

Here is an example:

Let's assume you want to write files for the column name in the following table/rowset @x:

name | value1 | value2
-----+--------+-------
A    | 10     | 20
A    | 11     | 21
B    | 10     | 30
B    | 100    | 200

You would write a script to generate the script like the following:

@x = SELECT * FROM (VALUES( "A", 10, 20), ("A", 11, 21), ("B", 10, 30), ("B", 100, 200)) AS T(name, value1, value2);

// Generate the script to do partitioned output based on name column:

@stmts = 
  SELECT "OUTPUT (SELECT value1, value2 FROM @x WHERE name == \""+name+"\") TO \"/output/"+name+".csv\" USING Outputters.Csv();" AS output 
  FROM (SELECT DISTINCT name FROM @x) AS x;

OUTPUT @stmts TO "/output/genscript.usql" 
USING Outputters.Text(delimiter:' ', quoting:false);

Then you take genscript.usql, prepend the calculation of @x, and submit it to get the data partitioned into the two files.
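For reference, the submitted script (genscript.usql with the @x calculation prepended) would look roughly like this for the sample data above, with one OUTPUT statement per distinct name value:

@x = SELECT * FROM (VALUES( "A", 10, 20), ("A", 11, 21), ("B", 10, 30), ("B", 100, 200)) AS T(name, value1, value2);

OUTPUT (SELECT value1, value2 FROM @x WHERE name == "A") TO "/output/A.csv" USING Outputters.Csv();
OUTPUT (SELECT value1, value2 FROM @x WHERE name == "B") TO "/output/B.csv" USING Outputters.Csv();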


There's a new feature in public preview:

SET @@FeaturePreviews = "DataPartitionedOutput:on";

You add it at the beginning of the script, and the output data is then partitioned by the key you choose (the column referenced in braces in the output path):

SET @@FeaturePreviews = "DataPartitionedOutput:on";

@result =
    SELECT date.Hour AS hour,
           COUNT(*) AS total,
           ANY_VALUE(message) AS sample_message,
           ANY_VALUE(stack_trace) AS sample_trace
    FROM @data
    WHERE date.Year == 2018
          AND date.Month == 7
    GROUP BY date.Hour
    HAVING COUNT(*) > 80;

OUTPUT @result
TO "/output/parquetfiles-{hour}.csv"
ORDER BY hour DESC,
         total DESC
USING Outputters.Csv(outputHeader : true);

Another example can be found in the article Process more files than ever and use Parquet with Azure Data Lake Analytics, in the section "Putting it all together in a simple end-to-end example".
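Applied to the @x rowset from the first answer, this preview feature answers the original question directly, putting the key value into the filename without a script generator. A sketch, assuming the preview flag behaves as in the example above:

SET @@FeaturePreviews = "DataPartitionedOutput:on";

@x = SELECT * FROM (VALUES( "A", 10, 20), ("A", 11, 21), ("B", 10, 30), ("B", 100, 200)) AS T(name, value1, value2);

// {name} is replaced by each distinct value of the name column,
// so this should produce /output/A.csv and /output/B.csv.
OUTPUT @x
TO "/output/{name}.csv"
USING Outputters.Csv();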


Great question! I'll be interested to see what Mr Rys responds with.

Apologies, but this is only half an answer.

My first thought is to partition an ADL table using your key value. But then I'm not sure how you'd deal with the separate outputs if a potential WHERE clause isn't deterministic. Maybe CROSS JOIN in every result and... pass!

It would be nice to have a WHILE loop with some dynamic code!

Just as an FYI, check out this post on the MSDN forums that talks about dynamic input datasets:

https://social.msdn.microsoft.com/Forums/en-US/aa475035-2d57-49b8-bdff-9cccc9c8b48f/usql-loading-a-dynamic-set-of-files?forum=AzureDataLake