What's faster? Moving files from one drive to another, or moving files in the same drive?

Moving a file inside the same partition (or same file-system) won't really move anything.

All it does is delete the file's entry in the file table and create a new one. The data itself is not physically moved, so the operation is almost instantaneous, no matter the size of the file.
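You can see this in practice with Python: `os.rename()` (and `shutil.move()`, when source and destination are on the same filesystem) only rewrites directory entries, so it completes near-instantly regardless of file size:

```python
import os
import tempfile

# Create a file and "move" it within the same filesystem.
# os.rename() only updates directory entries; the data blocks stay put.
tmpdir = tempfile.mkdtemp()
src = os.path.join(tmpdir, "src.bin")
dst = os.path.join(tmpdir, "dst.bin")

with open(src, "wb") as f:
    f.write(b"x" * 1024 * 1024)  # 1 MiB of data

os.rename(src, dst)  # near-instant: no data is copied

print(os.path.exists(src))   # False
print(os.path.getsize(dst))  # 1048576
```

If the destination is on a different filesystem, `os.rename()` fails (with `EXDEV` on Linux), which is exactly why tools like `mv` fall back to a copy-then-delete in that case.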


It depends on a bit more than just the physical hardware layout. In general, there are four cases to consider:

  1. Moving a file within a single filesystem (IOW, within a single ‘drive’ by the Windows definition of the term ‘drive’).
  2. Copying a file within a single filesystem.
  3. Moving or copying a file between two filesystems that are on the same physical storage device.
  4. Moving or copying a file between two filesystems that are on separate physical storage devices.

In general, the first case is always going to be the fastest because, short of some atypical situations, it just amounts to updating some of the filesystem metadata. The only two exceptions you would likely ever encounter are in-line data transformations (such as the in-line compression supported by NTFS) where the source and destination have different rules for such transformations, and certain networked filesystems (such as older versions of NFS); both of those cases degrade to the equivalent of the third case.

The speed of the second case depends on the filesystem involved. If it supports reflinks (like ZFS and BTRFS do), then it can be just as fast as the first case (because it essentially becomes the first case). If it does not, then it will generally be equivalent to the third case instead.
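On Linux you can request a reflink copy explicitly with GNU coreutils. As a sketch (the filenames here are illustrative), `--reflink=auto` clones extents where the filesystem supports it and silently falls back to a normal copy where it does not:

```shell
# On a reflink-capable filesystem (Btrfs, XFS), this clones extents
# instead of copying data, so it is near-instant even for huge files.
# On filesystems without reflink support (e.g. ext4) it falls back
# to an ordinary data copy.
cp --reflink=auto bigfile.img bigfile-copy.img

# --reflink=always errors out instead of falling back, which is a
# handy way to check whether a clone actually happened.
```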

The third case will usually be the slowest case, because the system has to read the data from the device, store it temporarily in RAM, and then write it back out to the device somewhere else. Some storage protocols may support ‘device-side copy’ functionality (some SCSI devices support this for example, as do most intelligent networked filesystem protocols), in which case this can potentially be rather fast, though usually still slower than the first case.

The fourth case is where things get really interesting, because its performance depends almost entirely on the specifics of the hardware setup of the physical storage devices. Some easy examples of this:

  • In a classic PATA setup with both storage devices on the same cable, the performance is actually marginally worse than the third case. This is because you have a single data path shared by both devices, and on top of the read/write cycle you would normally have for the same device, you end up with some extra overhead just for managing the two devices at the same time.
  • In a relatively standard SATA setup with both storage devices on the same AHCI SATA controller, performance will be significantly better than the third case, but still nowhere near peak device bandwidth. This is largely due to limitations in the AHCI spec that make it at best challenging to handle multiple devices simultaneously on a single controller (the implications are not bad enough to make it a problem for consumer usage, but are part of the reason that SCSI still reigns supreme in enterprise usage).
  • With a typical enterprise SAS setup, the performance will be relatively close to the peak bandwidth of the slower of the two devices, provided it’s the only thing running at the time. SAS is simply far more efficient than SATA at driving multiple devices concurrently.
  • With a pair of very nice NVMe devices, just the right hardware layout on the mainboard, and proper support in the OS, almost the entire transfer can run at the peak bandwidth of the slower of the two devices. This setup is hard to put together, but it lets you leverage a little-used feature of PCIe, peer-to-peer DMA, which allows two devices to transfer data directly without involving the host CPU or main memory.