Creating a large file in less time
I want to create a large file (~10 GB) filled with zeros and random values. I have tried using:
dd if=/dev/urandom of=10Gfile bs=5G count=10
It creates a file of about 2 GB and exits with exit status 0. I fail to understand why.
I also tried creating file using:
head -c 10G </dev/urandom >myfile
It takes about 28-30 minutes to create. I want it created faster. Does anyone have a solution?
Also, I wish to create multiple files with the same (pseudo)random pattern for comparison. Does anyone know a way to do that?
Solution 1:
How about using fallocate? This tool allows us to preallocate space for a file (if the filesystem supports this feature). For example, to allocate 5 GB to a file called 'example', one can run:
fallocate -l 5G example
This is much faster than dd because the filesystem simply reserves the blocks instead of writing data to them.
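To match the 10 GB size from the question, a minimal sketch (assuming a filesystem such as ext4 or XFS that supports fallocate; 'bigfile' is a placeholder name) would be:
fallocate -l 10G bigfile
ls -lh bigfile   # apparent size: 10G
du -h bigfile    # disk usage: also ~10G, since the blocks are reserved up front
Note that the file's contents read back as zeros; fallocate does not fill it with random data.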
Solution 2:
You can use dd to create a file consisting solely of zeros. Example:
dd if=/dev/zero of=zeros.img count=1 bs=1 seek=$((10 * 1024 * 1024 * 1024 - 1))
This is very fast because only one byte is actually written to the physical disk; the rest of the file is a sparse "hole". However, some filesystems do not support sparse files.
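A quick way to see the difference between apparent size and space actually used (GNU coreutils assumed):
ls -lh zeros.img   # apparent size: 10G
du -h zeros.img    # actual disk usage: only a few KB, because the file is sparse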
If you want to create a file containing pseudo-random contents, run:
dd if=/dev/urandom of=random.img count=1024 bs=10M
I suggest using 10M as the block size (bs): it is large enough that dd is not slowed down by issuing many small requests, yet small enough to fit comfortably in memory. It should be pretty fast, but it always depends on your disk speed and processing power.
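Regarding the second part of the question (several files with the same pseudo-random pattern): /dev/urandom cannot be re-seeded, so one common workaround, sketched here with placeholder names ('myseed', 'repeat1.img'), is to generate a deterministic keystream by encrypting zeros with a fixed passphrase:
openssl enc -aes-256-ctr -pass pass:myseed -nosalt </dev/zero 2>/dev/null | head -c 10G >repeat1.img
Running the same command again with the same passphrase (and the same OpenSSL version) produces byte-identical output, so the resulting files can be compared directly.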
Solution 3:
Using dd, this should create a 10 GB file filled with random data:
dd if=/dev/urandom of=test1 bs=1M count=10240
count is the number of blocks; with bs=1M it is effectively in megabytes, so 10240 blocks give about 10 GB.
Source: stackoverflow - How to create a file with a given size in Linux?
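A rough way to see where the time goes (output file names here are placeholders): timing the same-size write from /dev/zero and from /dev/urandom usually shows that generating the random bytes, not the disk, is the slow part.
time dd if=/dev/zero of=zeros_test bs=1M count=10240
time dd if=/dev/urandom of=random_test bs=1M count=10240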
Solution 4:
Answering the first part of your question:
Trying to read and write a 5 GB buffer at a time is not a good idea: dd would have to hold that buffer in memory, and a single read from /dev/urandom returns far less than 5 GB. By default dd counts such short reads toward count, which is why the file came up much smaller than expected while dd still exited with status 0. A huge block size gives no performance benefit in any case; writing 1M at a time is a good maximum.
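A minimal sketch of an invocation that avoids the short-read problem, assuming GNU dd ('random10G.img' is a placeholder name): iflag=fullblock makes dd keep reading until each block is full, and status=progress shows how far along it is.
dd if=/dev/urandom of=random10G.img bs=1M count=10240 iflag=fullblock status=progress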