Why is Linux 30x faster than Windows 10 in Copying files?

I have 20.3 GB of files and folders, totaling 100k+ items. I duplicated all of them into another directory from Windows 10, and it took an excruciating 3 hours of copying. Done.

The other day, I booted into Fedora 24, recopied the same folder, and bam! It took just 5 minutes to duplicate it in the same place but a different directory.

Why is Linux so fast, while Windows is painstakingly slow?

There is a similar question here:

Is (Ubuntu) Linux file copying algorithm better than Windows 7?

But the accepted answer is quite lacking.


The basics of it break down to a few key components of the total system: the UI (the graphical part), the kernel itself (the code that talks to the hardware), and the format in which the data is stored (i.e. the file system).

Going backwards: NTFS has been the de facto standard for Windows for some time, while the de facto standard for the major Linux variants is the ext family of file systems. The on-disk NTFS format itself hasn't changed since Windows XP (2001); many of the features that exist today (partition shrinking, self-healing, transactional NTFS, etc.) are features of the OS (Windows Vista/7/8/10) and not of NTFS itself. The ext file system had its last major stable release (ext4) in 2008. Since the file system itself governs how and where files are accessed, if you're using ext4 there's a good chance you'll notice a speed improvement over NTFS; note, however, that if you used ext2 you might find it comparable in speed.

It could also be that one partition is formatted in smaller chunks than the other. The default for most systems is a 4096-byte cluster size, but if you formatted your ext4 partition with something like 16 KB blocks, then each read on the ext4 system would fetch 4x the data of the NTFS system (which could mean 4x the files, depending on what's stored where, how it's stored, how big the files are, etc.). Fragmentation of the files can also play a role in speed: NTFS handles file fragmentation very differently than the ext file system, and with 100k+ files there's a good chance some of them are fragmented.
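To make the cluster-size point concrete, here is a small arithmetic sketch (the file size and cluster sizes are hypothetical, chosen just to illustrate the 4x ratio described above):

```python
def clusters_needed(file_bytes, cluster_bytes):
    """Number of whole clusters (and thus minimum reads) a file occupies."""
    return -(-file_bytes // cluster_bytes)  # ceiling division

file_size = 1_000_000        # a hypothetical 1 MB file
small_cluster = 4096         # 4 KiB: the common default cluster size
large_cluster = 16 * 1024    # 16 KiB: a non-default, larger format

print(clusters_needed(file_size, small_cluster))  # 245 reads at 4 KiB
print(clusters_needed(file_size, large_cluster))  # 62 reads at 16 KiB
```

Roughly 4x fewer reads per file with the larger cluster, at the cost of more wasted space for small files.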

The next component is the kernel itself (not the UI, but the code that actually talks to the hardware, the true OS). Here, there honestly isn't much difference. Both kernels can be configured to do certain things, like disk caching/buffering, to speed up reads and perceived writes, but these configurations usually have the same trade-offs regardless of OS; e.g. caching might seem to massively increase the speed of copying/saving, but if you lose power during the cache write (or pull the USB drive out), then you will lose all data not actually written to disk and possibly even corrupt data already written to disk.

As an example, copy a lot of files to a FAT-formatted USB drive in both Windows and Linux. On Windows it might take 10 minutes while on Linux it takes 10 seconds; immediately after copying, safely remove the drive by ejecting it. On Windows it ejects almost instantly, so you can pull the drive from the USB port right away, while on Linux it might take 10 minutes before you can actually remove the drive. This is the caching at work: Linux wrote the files to RAM and then flushed them to the disk in the background, while Windows (which disables write caching on removable drives by default) wrote the files straight to disk.
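The flush behaviour described above can be sketched in a few lines. Writes land in the OS page cache first; `os.fsync()` is the explicit "make it durable" request, which is roughly what ejecting a drive (or running `sync` on Linux) forces for everything still cached. The path here is a throwaway temp file, not anything from the original question:

```python
import os
import tempfile

# A hypothetical file, standing in for one file of the big copy job.
path = os.path.join(tempfile.mkdtemp(), "example.bin")

with open(path, "wb") as f:
    f.write(b"x" * 4096)   # lands in Python's buffer / the page cache first
    f.flush()              # push Python's userspace buffer to the OS
    os.fsync(f.fileno())   # ask the kernel to commit the data to the device

print(os.path.getsize(path))  # 4096 — the bytes are now durable on disk
```

A copier that calls `fsync` after every file behaves like the cache-disabled Windows case (slow copy, instant eject); one that never calls it behaves like the Linux case (fast copy, slow eject).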

Last is the UI (the graphical part the user interacts with). The UI might be a pretty window with some cool graphs and nice bars that give you a general idea of how many files are being copied, how big they all are, and how long it might take; the UI might also be a console that prints nothing until it's done. If the UI has to first walk every folder and file to determine how many files there are and how big they are, so it can give a rough estimate before it actually starts copying, then the copy can take longer because of that extra pass. Again, this is true regardless of OS.
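That "pre-scan" pass a graphical copier might do can be sketched like this — one walk over the tree just to count files and total bytes for the progress bar, before a single byte is copied (the function name is mine, not from any real file manager):

```python
import os

def prescan(root):
    """Walk the tree once, returning (file_count, total_bytes) for a progress bar."""
    files = 0
    total_bytes = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            full = os.path.join(dirpath, name)
            try:
                total_bytes += os.path.getsize(full)
                files += 1
            except OSError:
                pass  # file vanished or is unreadable; skip it
    return files, total_bytes
```

With 100k+ items, this pass alone touches every directory entry on disk before the copy even begins, which is part of why a fancy GUI copy can feel slower than a bare `cp -r`.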

You can configure some things to be equal (like disk caching or cluster size), but realistically it comes down to how all the parts tie together to make the system work, and more specifically how often those pieces of code actually get updated. Windows has come a long way since XP, but the disk subsystem is an area that hasn't seen much attention across versions for many years (compared to the Linux ecosystem, which seems to gain a new file system or improvement rather frequently).

Hope that adds some clarity.


Windows performs worse because it doesn't prioritize HDD throughput. Windows's write cache is normally disabled on external devices; you can enable it, but you can't disable Windows's timed flushing of the buffer. Linux itself has better write caching. My external HDD on Ubuntu: 100 MB/s, dropping to 40 MB/s. On Windows: briefly a bit faster, then 19-20 MB/s. If you use SSDs, that's much faster on Windows. Generally, Windows isn't doing much about performance anymore; Linux has better algorithms, and it is already faster than Windows thanks to better process scheduling, etc. It doesn't really matter whether you have NTFS or ext: I copied twice on the same partition and same drive. I don't think there's a way to speed up Windows's write performance, because you don't have access to the whole system and it isn't open source.

Just use Linux to copy large files :D