Using jq, how can I split a very large JSON file into multiple files, each containing a specific number of objects?
The key to using jq to solve the problem is the -c command-line option, which produces output in JSON-Lines format (i.e., in the present case, one object per line). You can then use a tool such as awk or split to distribute those lines amongst several files.
If the file is not too big, then the simplest approach would be to start the pipeline with:
jq -c '.[]' INPUTFILE
If the file is too big to fit comfortably in memory, then you could use jq's streaming parser, like so:
jq -cn --stream 'fromstream(1|truncate_stream(inputs))' INPUTFILE
For further discussion about the streaming parser, see e.g. the relevant section in the jq FAQ: https://github.com/stedolan/jq/wiki/FAQ#streaming-json-parser
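To make the truncate_stream step less opaque, here is a small illustration you can run as-is (the sample input is arbitrary). The streaming parser emits [path, leaf-value] events and [path] closing events:

echo '[{"a":1},{"b":2}]' | jq -cn --stream 'inputs'
[[0,"a"],1]
[[0,"a"]]
[[1,"b"],2]
[[1,"b"]]
[[1]]

1|truncate_stream strips the leading array index from each path, so fromstream can reassemble the top-level objects individually:

echo '[{"a":1},{"b":2}]' | jq -cn --stream 'fromstream(1|truncate_stream(inputs))'
{"a":1}
{"b":2}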
Partitioning
For different approaches to partitioning the output produced in the first step, see for example How to split a large text file into smaller files with equal number of lines?
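For example, a minimal sketch using split (the chunk size of 1000 and the part_ prefix are arbitrary placeholders):

jq -c '.[]' INPUTFILE | split -l 1000 - part_

This writes part_aa, part_ab, and so on, each holding at most 1000 lines, i.e. at most 1000 of the JSON objects.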
If it is required that each of the output files be an array of objects, then I'd probably use awk to perform both the partitioning and the re-constitution in one step (one such sketch follows), but there are many other reasonable approaches.
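Here is a sketch of that approach (the chunk size and the chunk_%03d.json name pattern are illustrative only). It opens a new file every size objects and re-wraps each group in [ and ] with commas in between:

jq -c '.[]' INPUTFILE |
awk -v size=1000 '
  NR % size == 1 {                              # first object of a new chunk
    if (out) { print "]" >> out; close(out) }   # finish the previous chunk
    out = sprintf("chunk_%03d.json", ++chunk)
    print "[" > out
    sep = ""
  }
  {
    print sep $0 >> out                         # comma before every object except the first
    sep = ","
  }
  END {
    if (out) { print "]" >> out; close(out) }   # finish the last chunk
  }
'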
If the input is a sequence of JSON objects
For reference, if the original file consists of a stream or sequence of JSON objects, then the appropriate invocation would be:
jq -n -c inputs INPUTFILE
Using inputs in this manner allows arbitrarily many objects to be processed efficiently.
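Putting the two steps together, the whole pipeline for such an input might be as simple as (a sketch; INPUTFILE and the chunk size are placeholders):

jq -n -c inputs INPUTFILE | split -l 1000 - part_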
It is possible to slice a JSON file or stream with jq. See the script below.
The sliceSize parameter sets the size of the slices and determines how many inputs are kept in memory at the same time, which allows the memory usage to be controlled.
Input to be sliced
The input does not have to be pretty-printed. The script accepts either:
- an array of JSON inputs
- a stream of JSON inputs
Sliced output
The files can be created with formatted or compact JSON. Each sliced output file can contain:
- an array of JSON inputs of size $sliceSize
- a stream of $sliceSize JSON inputs
Performance
A quick benchmark shows the time and memory consumption during slicing (measured on my laptop):

File with 100,000 JSON objects, 46 MB:
- sliceSize=5,000: time=35 sec
- sliceSize=10,000: time=40 sec
- sliceSize=25,000: time=1 min
- sliceSize=50,000: time=1 min 52 sec

File with 1,000,000 JSON objects, 450 MB:
- sliceSize=5,000: time=5 min 45 sec
- sliceSize=10,000: time=6 min 51 sec
- sliceSize=25,000: time=10 min 5 sec
- sliceSize=50,000: time=18 min 46 sec, max memory consumption: ~150 MB
- sliceSize=100,000: time=46 min 25 sec, max memory consumption: ~300 MB
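Figures like these can be reproduced with GNU time (an assumption on my part; how the measurement was taken is not stated, and slice.sh is a hypothetical wrapper around the script below):

/usr/bin/time -v ./slice.sh 2>&1 | grep -E 'Elapsed|Maximum resident'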
#!/bin/bash
SLICE_SIZE=2
JQ_SLICE_INPUTS='
2376123525 as $EOF | # random number that does not occur in the input stream to mark the end of the stream
foreach (inputs, $EOF) as $input
(
# init state
[[], []]; # .[0]: array to collect inputs
# .[1]: array that has collected $sliceSize inputs and is ready to be extracted
# update state
if .[0] | length == $sliceSize # enough inputs collected
or $input == $EOF # or end of stream reached
then [[$input], .[0]] # create new array to collect next inputs. Save array .[0] with $sliceSize inputs for extraction
else [.[0] + [$input], []] # collect input, nothing to extract after this state update
end;
# extract from state
if .[1] | length != 0
then .[1] # extract array that has collected $sliceSize inputs
else empty # nothing to extract right now (because still collecting inputs into .[0])
end
)
'
write_files() {
  local FILE_NAME_PREFIX=$1
  local FILE_COUNTER=0
  while IFS= read -r line; do                # read -r so backslashes in the JSON survive
    FILE_COUNTER=$((FILE_COUNTER + 1))
    FILE_NAME="${FILE_NAME_PREFIX}_$FILE_COUNTER.json"
    echo "writing $FILE_NAME"
    jq '.' > "$FILE_NAME" <<< "$line"        # array of formatted JSON inputs
    # jq -c '.' > "$FILE_NAME" <<< "$line"   # compact array of JSON inputs
    # jq '.[]' > "$FILE_NAME" <<< "$line"    # stream of formatted JSON inputs
    # jq -c '.[]' > "$FILE_NAME" <<< "$line" # stream of compact JSON inputs
  done
}
echo "how to slice a stream of json inputs"
jq -n '{id: (range(5) + 1), a:[1,2]}' | # create a stream of json inputs
jq -n -c --argjson sliceSize $SLICE_SIZE "$JQ_SLICE_INPUTS" |
write_files "stream_of_json_inputs_sliced"
echo -e "\nhow to slice an array of json inputs"
jq -n '[{id: (range(5) + 1), a:[1,2]}]' | # create an array of json inputs
jq -n --stream 'fromstream(1|truncate_stream(inputs))' | # remove outer array to create stream of json inputs
jq -n -c --argjson sliceSize $SLICE_SIZE "$JQ_SLICE_INPUTS" |
write_files "array_of_json_inputs_sliced"
Output of the script
how to slice a stream of json inputs
writing stream_of_json_inputs_sliced_1.json
writing stream_of_json_inputs_sliced_2.json
writing stream_of_json_inputs_sliced_3.json
how to slice an array of json inputs
writing array_of_json_inputs_sliced_1.json
writing array_of_json_inputs_sliced_2.json
writing array_of_json_inputs_sliced_3.json
Generated files
array_of_json_inputs_sliced_1.json
[
{
"id": 1,
"a": [1,2]
},
{
"id": 2,
"a": [1,2]
}
]
array_of_json_inputs_sliced_2.json
[
{
"id": 3,
"a": [1,2]
},
{
"id": 4,
"a": [1,2]
}
]
array_of_json_inputs_sliced_3.json
[
{
"id": 5,
"a": [1,2]
}
]