Calculating hard disk block model reading time

My professor posted a slide on how to calculate the data retrieval time from a hard disk using the block model. The specs were:

  • 7200 RPM
  • 5ms SEEK
  • 80MB/s TRANSFER RATE
  • BLOCK MODEL : Block size 4KB

I don't understand how he did the following calculation or where some of the numbers came from:

5 ms + 1000/240 ms + 0.05 ms = 9.216 ms to read a block.

Can anyone tell me where the 1000/240 ms and the 0.05 ms come from?

EDIT: If the numbers happen to be completely wrong, how would you do this then?


The disk in question has a stated transfer rate of 80 MB/s, which the calculation treats as 80 MiB/s, or 81920 KiB/s, or 20480 blocks/s. Here, we will round off to 20,000 blocks/second, since this appears to be what your professor did. This equates to 0.05 ms to transfer a block, explaining the last term in the equation.

Finally, in addition to seek time (the time to move the drive head to the right track), there is also the rotational latency of the disk itself to deal with. At 7200 RPM, in the worst case we have to wait one full revolution, but on average we have to wait a half-revolution, or 4.166 ms (7200 RPM = 120 rev/sec = 8.333 ms/rev).

Thus, to transfer one block to the computer, we must wait the equivalent Seek Time + Rotational Latency + Transfer Time:

5 ms + 4.166 ms + 0.05 ms = 9.216 ms
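
If it helps to see the arithmetic in one place, here is a quick sanity check in Python. The constants are just the rounded figures used above (assumptions taken from the slide, not measurements of a real drive):

    # Rounded figures from the slide (assumed, not measured)
    seek_ms = 5.0                      # given seek time
    rpm = 7200                         # spindle speed
    blocks_per_second = 20_000         # ~20480 blocks/s, rounded as above

    rotational_latency_ms = 0.5 * 60_000 / rpm   # half a revolution, in ms
    transfer_ms = 1000 / blocks_per_second       # one 4 KiB block

    print(seek_ms + rotational_latency_ms + transfer_ms)
    # ~9.217 ms -- the slide's 9.216 ms, up to rounding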

Note that for solid-state drives, while there is no rotational latency to take into account, there certainly still is a measurable seek time (to actually address the contents of the sectors in the flash memory) and transfer time (largely limited by the bus being used to transfer the data itself, e.g. SATA).

Thus, in general, the total access time to read a single sector for a drive is (neglecting software overhead):

Rotational/Hard Drive:  Seek Time + Rotational Latency + Transfer Time

Solid-State Drive:      Seek Time + Transfer Time
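
As a minimal sketch of that general formula in Python (the function and its parameters are made up for illustration, and the SSD seek figure is purely hypothetical):

    def access_time_ms(seek_ms, transfer_ms, rpm=None):
        """Average time to read one block: pass rpm for a rotational
        drive, or rpm=None for an SSD (no rotational latency)."""
        rotational_ms = 0.0 if rpm is None else 0.5 * 60_000 / rpm
        return seek_ms + rotational_ms + transfer_ms

    print(access_time_ms(5.0, 0.05, rpm=7200))  # hard drive: ~9.217 ms
    print(access_time_ms(0.1, 0.05))            # hypothetical SSD: ~0.15 ms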

The 5 ms seek time is the time for the hard disk to move its head to the right track (and to select the right head, something which also takes time but which your professor ignored).


Once the head is over the right track it needs to wait for the right sector to pass beneath the R/W head. We are given that it is a 7200 RPM drive. That means:

  • In the worst case the data has just passed and it has to wait a full rotation.
  • In the best case the data sector has just arrived. All is happy.
  • In the average case the drive needs to wait half a rotation.

To get the time for a full drive rotation on a 7200 RPM drive:

  • 7200 rotations per minute (aka 7200 RPM)
  • Or 7200/60 times per second.
  • Or 120 times per second.
  • Or a single rotation takes 1/120th of a second.
  • Which is 8.3 ms

So half a rotation will take half that time: 1/240th of a second.

1 second is 1000 ms

This is your 1000/240 ms.
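
In case it is easier to follow the same steps numerically, here they are as a few lines of Python (with the 7200 RPM taken from the question):

    rpm = 7200
    revolutions_per_second = rpm / 60              # 120 revolutions per second
    full_rotation_ms = 1000 / revolutions_per_second   # 8.333... ms per rotation
    average_wait_ms = full_rotation_ms / 2         # half a rotation on average
    print(average_wait_ms)                         # 4.1666... ms = 1000/240 ms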


All of this is the time until the drive can start reading the data. It still needs to read the data and pass it to the host.

Reading from the platter is usually much quicker than passing the data to the host, so I am going to focus on the slower part:

Given are:

1) 80 MB/s TRANSFER RATE
2) BLOCK MODEL : Block size 4KB

  • 80 MiB in one second, or 80 × 1024 = 81920 KiB in one second.
  • Divide by the 4 KiB block size: 20480 blocks per second.
  • So one block takes 1/20480th of a second.
  • That is about 0.0488 ms, which rounds to your last term of 0.05 ms.
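
The same arithmetic as a short Python check (treating the stated 80 MB/s as 80 MiB/s, as the slide apparently does):

    transfer_kib_per_second = 80 * 1024        # 81920 KiB/s
    block_kib = 4
    blocks_per_second = transfer_kib_per_second / block_kib   # 20480 blocks/s
    print(1000 / blocks_per_second)            # 0.048828125 ms, ~0.05 ms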


Note that this answer ignores that:

  1. The drive needs to read the data before it can transmit it, which makes things slightly slower.
  2. There is no information on how fast the data is read from the platter, which depends on rotation speed, the length of the data to be read, the length of the checksum data, and the inter-sector gaps (4 KiB can be 8 reads of "header|data|checksum|gap" or a single read).
  3. The data might already be present in the drive's cache.
  4. It assumes that calculating the checksum takes essentially no time.