How to fill a hard drive in Linux

I'm doing some testing of a piece of code and I want to fill a hard drive with data. I found that dd can make huge files in an instant, but df disagrees. Here's what I tried:

dd if=/dev/zero of=filename bs=1 count=1 seek=$((10*1024*1024*1024))

ls -lh shows a 10G file. However, df -h shows that the available space on the partition hasn't shrunk. So what do I need to do to make df recognize that the space is now taken? I'm hoping for something fast that I can code up in a unit test.


The trouble with the seek=<big number> trick is that the filesystem is (usually) clever: if part of a file has never been written to (and is therefore all zeros), it doesn't bother to allocate any space for it - so, as you've seen, you can have a 10GB file that takes up no space (this is known as a "sparse file", and can be very useful in some instances, e.g. certain database implementations).
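
One way to see the sparseness for yourself is to compare the file's apparent size with what the filesystem has actually allocated (filename being whatever you passed to dd):

ls -lh filename    # apparent size: 10G
du -h filename     # allocated size: next to nothing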

You can force the space to be allocated with (for example):

dd if=/dev/zero of=filename bs=$((1024*1024)) count=$((10*1024))

which will take much longer, but will actually fill the disk. I recommend making the block size much higher than one, because this will determine how many system calls the dd process makes - the smaller the block size, the more syscalls, and therefore the slower it will run. (Though beyond 1MB or so it probably won't make much difference and may even slow things down...)
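
If you want to see the effect of the block size yourself, you could time two runs of roughly 1GB each (the file names are just placeholders); the 512-byte version makes about two million write calls versus roughly a thousand for the 1MiB version:

time dd if=/dev/zero of=test_small bs=512 count=$((2*1024*1024))
time dd if=/dev/zero of=test_big bs=$((1024*1024)) count=1024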


As another option, you can use yes with a single string; it's about 10 times faster than running dd if=/dev/urandom of=largefile. Like this:

yes abcdefghijklmnopqrstuvwxyz0123456789 > largefile
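
Note that yes on its own keeps writing until the disk is full (or you kill it); if you want a file of a specific size instead, one way (assuming GNU head, which accepts size suffixes) is to cap the stream:

yes abcdefghijklmnopqrstuvwxyz0123456789 | head -c 10G > largefile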

You have created what is known as a "sparse file" - a file that, because most of it is empty (i.e. reads back as \0), doesn't take up space on the disk beyond what was actually written (1 byte, after a 10GB gap).

I don't believe you can make huge files that take actual disk space in an instant - taking physical space means the filesystem needs to allocate disk blocks to your file.

I think you're stuck with the old-fashioned "dd if=/dev/zero of=filename bs=100M count=100", which is limited by your drive's sequential write speed.
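
If your test reads df (or times the write) right after dd returns, it may be worth making dd flush its output before exiting, so the data is really on disk rather than sitting in the page cache - with GNU dd something like:

dd if=/dev/zero of=filename bs=100M count=100 conv=fdatasync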


If you're just testing for cases with filled file systems, maybe fallocate is good enough. And faster too! e.g.

fallocate -l 150G filename
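
fallocate reserves the blocks without writing any data, so the space disappears from df immediately; it does need a filesystem that supports it (ext4, XFS and Btrfs do, for instance). A quick check after the command above might be:

df -h .        # available space drops straight away
rm filename    # hand the space back once the test is done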