Why is GNU shred faster than dd when filling a drive with random data?

Solution 1:

Shred uses an internal pseudorandom generator

By default, shred uses an internal pseudorandom generator initialized with a small amount of entropy, but it can be directed to use an external source with the --random-source=file option. An error is reported if the file does not contain enough bytes.
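
For example (a sketch, assuming GNU shred from coreutils; /dev/sdX is a placeholder for the target device):

    # default: shred's fast internal PRNG, three verbose passes
    shred -v /dev/sdX
    # pull the random bytes from an external source instead, single pass
    shred -v -n 1 --random-source=/dev/urandom /dev/sdX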

For example, the device file /dev/urandom could be used as the source of random data. Typically, this device gathers environmental noise from device drivers and other sources into an entropy pool, and uses the pool to generate random bits. If the pool is short of data, the device reuses the internal pool to produce more bits, using a cryptographically secure pseudorandom number generator. But be aware that this device is not designed for bulk random data generation and is relatively slow.
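
A rough way to see the difference (an illustrative benchmark only; the numbers will vary with kernel version and CPU):

    # read 1 GiB from the kernel's CSPRNG and discard it; dd reports the throughput
    dd if=/dev/urandom of=/dev/null bs=1M count=1024
    # the same read from /dev/zero for comparison -- typically far faster
    dd if=/dev/zero of=/dev/null bs=1M count=1024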

I'm not persuaded that random data is any more effective than a single pass of zeroes (or any other byte value) at obscuring prior contents.
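
If a single pass of zeroes is all you want, either tool will do it (a sketch; /dev/sdX again stands for the drive being wiped):

    # dd: one pass of zeroes over the whole device
    dd if=/dev/zero of=/dev/sdX bs=1M status=progress
    # shred: skip the random passes (-n 0) and write one final pass of zeroes (-z)
    shred -v -n 0 -z /dev/sdX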

To securely decommission a drive, I use a big magnet and a large hammer.

Solution 2:

I suspect it is caused by dd using smaller chunks to write the data. Try dd if=... of=... bs=$((1<<20)) (i.e. bs=1M) to see whether it performs better.
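
Concretely (a sketch; /dev/sdX is a placeholder, and status=progress needs GNU dd 8.24 or later):

    # 512-byte writes: one write call per sector, usually much slower
    dd if=/dev/zero of=/dev/sdX bs=512 status=progress
    # 1 MiB writes: far fewer system calls, throughput close to the drive's limit
    dd if=/dev/zero of=/dev/sdX bs=1M status=progress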