Why is “/dev/rdisk” about 20 times faster than “/dev/disk” in Mac OS X
According to the Raspberry Pi documentation, you can load your OS onto a flash card with either /dev/disk or /dev/rdisk.
rdisk stands for raw disk.
/dev/disk is a block-level device, so why would rdisk be 20 times faster?
Using Mac OS X.
Note: In OS X each disk may have two path references in /dev: /dev/disk# is a buffered device, which means any data being sent undergoes extra processing. /dev/rdisk# is a raw path, which is much faster, and perfectly OK when using the dd program. On a Class 4 SD card the difference was around 20 times faster using the rdisk path.
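For example, the two commands would look something like this (the image name and disk number are placeholders; find yours with diskutil list, and unmount the disk first):

    $ diskutil unmountDisk /dev/disk2
    $ sudo dd if=raspbian.img of=/dev/disk2 bs=1m     # buffered device node
    $ sudo dd if=raspbian.img of=/dev/rdisk2 bs=1m    # raw device node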
From man hdiutil:
/dev/rdisk nodes are character-special devices, but are "raw" in the BSD sense and force block-aligned I/O. They are closer to the physical disk than the buffer cache. /dev/disk nodes, on the other hand, are buffered block-special devices and are used primarily by the kernel's filesystem code.
In layman's terms, /dev/rdisk goes almost directly to the disk and /dev/disk goes via a longer, more expensive route.
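You can see the distinction directly in the device nodes themselves (the disk number here is illustrative):

    $ ls -l /dev/disk2 /dev/rdisk2
    # The mode string starts with 'b' for /dev/disk2 (block-special, buffered)
    # and with 'c' for /dev/rdisk2 (character-special, i.e. raw).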
The accepted answer is right, but it doesn’t go into much detail.
One of the key differences between /dev/disk and /dev/rdisk, when you access them from user space, is that /dev/disk is buffered. The read/write path for /dev/disk breaks the I/O up into 4KB chunks, which it reads into the buffer cache and then copies into the user-space buffer (and then issues the next 4KB read…). This is nice in that you can do unaligned reads and writes, and it just works. In contrast, /dev/rdisk basically just passes the read or write straight to the device, which means the start and end of the I/O need to be aligned on sector boundaries.
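A quick way to see the alignment requirement, assuming a hypothetical SD card at disk2 with the usual 512-byte sectors:

    $ sudo dd if=/dev/disk2 of=/dev/null bs=100 count=1    # unaligned, but the buffer cache absorbs it
    $ sudo dd if=/dev/rdisk2 of=/dev/null bs=100 count=1   # should fail with "Invalid argument"
    $ sudo dd if=/dev/rdisk2 of=/dev/null bs=512 count=1   # sector-aligned, works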
If you do a read or write larger than one sector to /dev/rdisk, that request is passed straight through. The lower layers may break it up (e.g., USB breaks it into 128KB pieces due to the maximum payload size in the USB protocol), but you generally get bigger and more efficient I/Os. When streaming, such as via dd, block sizes of 128KB to 1MB are pretty good choices for near-optimal performance on current non-RAID hardware.
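To see the effect of block size on the raw node, read the same amount of data in large and small chunks (the disk number is a placeholder):

    $ sudo dd if=/dev/rdisk2 of=/dev/null bs=1m count=64     # 64MB as 64 large I/Os
    $ sudo dd if=/dev/rdisk2 of=/dev/null bs=4k count=16384  # the same 64MB as 16384 small I/Os; noticeably slower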
The caching done by /dev/disk's read and write paths is very simple and almost brain-dead. It caches even when it isn't strictly necessary, such as when the device could memory-map and transfer directly into your app's buffer. It does small (4KB) I/Os, which leads to a lot of per-I/O overhead, and it does no read-ahead or write-behind.
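A rough benchmark of the two paths (the disk number and the results will vary with your hardware):

    $ time sudo dd if=/dev/disk2 of=/dev/null bs=1m count=64   # buffered: the kernel splits this into 4KB operations
    $ time sudo dd if=/dev/rdisk2 of=/dev/null bs=1m count=64  # raw: 1MB requests passed through to the device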