Sequential vs Random I/O on SSDs?
Your premise is flawed. Because an SSD has essentially zero seek latency, the fact that the data isn't stored as a linear mapping of blocks is irrelevant to the performance of the device, so you can treat it functionally as if it were a simple linear mapping. Note also that the article you are quoting isn't entirely accurate: a filesystem block is usually 4 kB, while the 'blocks' of data on the SSD are usually around 4 MB in size, so it's entirely possible for a file less than 4 MB long to be contained in one physical location on the SSD.
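As a quick sanity check on those sizes, here is a trivial sketch using the 4 kB and 4 MB figures quoted above (real block sizes vary by drive and filesystem, so treat the numbers as placeholders):

```python
# Sizes quoted above -- actual values vary by drive and filesystem.
FS_BLOCK = 4 * 1024            # 4 kB filesystem block
FLASH_BLOCK = 4 * 1024 * 1024  # 4 MB flash block on the SSD

# How many filesystem blocks fit inside one flash block.
print(FLASH_BLOCK // FS_BLOCK)   # 1024

# A file smaller than the flash block (e.g. 3.5 MB) can sit entirely
# inside a single physical flash block.
file_size = int(3.5 * 1024 * 1024)
print(file_size <= FLASH_BLOCK)  # True
```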
There are two other reasons that sequential I/O performance still matters, though:
- Dispatching an I/O request to a device is not free: it takes time for the OS to prepare the request, and time for the OS to clean up once the data has been transferred. Sequential I/O lets you move the same amount of data with fewer, larger requests, which minimizes that per-request overhead (see the sketch after this list). The sequential rating therefore gives you a practical upper bound on how fast you can get data onto or off of the device: an SSD rated for 500 MB/s and 10,000 IOPS will never move data faster than 500 MB/s, no matter how the requests are issued.
- People still do bulk data transfers onto and off of SSDs, so it is still useful to know how fast those will run.
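If you want to see the per-request overhead from the first point for yourself, here is a minimal Python sketch (the file name, sizes, and use of `os.pread` are my own illustrative choices, not anything from the question). It reads the same 64 MB once with 1 MB sequential requests and once with 4 kB random requests. Because the OS page cache will serve most of the reads, this mostly demonstrates what the sheer number of requests costs in software, not the raw behaviour of the device; measuring the device itself would need O_DIRECT or cache dropping, which is beyond this sketch.

```python
import os
import random
import time

# Illustrative parameters -- adjust for your own experiment.
PATH = "io_test.bin"
FILE_SIZE = 64 * 1024 * 1024    # 64 MB scratch file
BIG = 1024 * 1024               # 1 MB sequential requests
SMALL = 4 * 1024                # 4 kB random requests

# Build the scratch file with real data so the blocks are actually allocated.
with open(PATH, "wb") as f:
    for _ in range(FILE_SIZE // BIG):
        f.write(b"\0" * BIG)

# os.pread is available on Unix-like systems (Python 3.3+).
fd = os.open(PATH, os.O_RDONLY)

# Sequential pass: 64 large requests.
start = time.perf_counter()
for off in range(0, FILE_SIZE, BIG):
    os.pread(fd, BIG, off)
seq = time.perf_counter() - start

# Random pass: 16384 small requests covering the same data.
offsets = list(range(0, FILE_SIZE, SMALL))
random.shuffle(offsets)
start = time.perf_counter()
for off in offsets:
    os.pread(fd, SMALL, off)
rnd = time.perf_counter() - start

os.close(fd)
os.remove(PATH)

print(f"sequential: {FILE_SIZE // BIG:6d} requests, {seq:.3f} s")
print(f"random:     {len(offsets):6d} requests, {rnd:.3f} s")
```

Even with everything coming out of the page cache, the random pass issues 1024 times as many requests, and that per-request cost alone shows up in the timings; against the real device the same effect is compounded by the drive's IOPS limit.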