Why is an Ext4 disk check so much faster than NTFS?

There are two main reasons for the performance difference, plus two possible contributing factors. First, the main reasons:


Increased Performance of ext4 vs. NTFS

Various benchmarks have concluded that an ext4 file system can perform a variety of read/write operations faster than an equivalent NTFS partition. While these synthetic tests are not necessarily indicative of real-world performance, we can extrapolate from their results and count this as one reason.

Why ext4 performs better than NTFS can be attributed to a wide variety of reasons. For example, ext4 supports delayed allocation directly. Again, though, the performance gains depend heavily on the hardware you are using (and can be totally negated in certain cases).
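For a rough sense of what delayed allocation buys, here is a toy Python sketch (this is not ext4's actual algorithm; the allocator and block counts are invented purely for illustration). Allocating at every write() call lets other allocations land in between and split the file, while buffering the writes and allocating once at flush time yields a single contiguous extent:

```python
# Toy model of delayed allocation (illustrative only, not ext4's real algorithm).
# Immediate allocation grabs blocks at every write; delayed allocation waits
# until flush time, when the final size is known, and asks for one contiguous extent.

class ToyAllocator:
    def __init__(self):
        self.next_free_block = 0

    def allocate(self, n_blocks):
        """Hand out n_blocks starting at the current free pointer."""
        start = self.next_free_block
        self.next_free_block += n_blocks
        return list(range(start, start + n_blocks))

def write_immediate(allocator, chunks):
    # One allocation per write() call -> extents end up interleaved with
    # whatever other files allocated in between.
    extents = []
    for chunk_blocks in chunks:
        extents.append(allocator.allocate(chunk_blocks))
        allocator.allocate(1)  # pretend another file allocated a block meanwhile
    return extents

def write_delayed(allocator, chunks):
    # Buffer everything, allocate once at flush time: a single contiguous extent.
    return [allocator.allocate(sum(chunks))]

if __name__ == "__main__":
    print(write_immediate(ToyAllocator(), [4, 4, 4]))  # three separated extents
    print(write_delayed(ToyAllocator(), [4, 4, 4]))    # one 12-block extent
```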

Reduced Filesystem Checking Requirements

The ext4 filesystem is also capable of performing faster file system checks than other equivalent journaling filesystems (e.g. NTFS). According to the Wikipedia page:

In ext4, unallocated block groups and sections of the inode table are marked as such. This enables e2fsck to skip them entirely on a check and greatly reduces the time it takes to check a file system of the size ext4 is built to support. This feature is implemented in version 2.6.24 of the Linux kernel.
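To make the quoted mechanism concrete, here is a heavily simplified Python sketch (not e2fsck's real code; the data structures are invented here). The point is that block groups flagged as never-initialized can be skipped without reading their inode tables, so the check scales with how much of the filesystem is actually in use rather than with its total size:

```python
# Simplified sketch of "skip uninitialized block groups". Not e2fsck's actual
# implementation; the structures below are invented to illustrate why the check
# time depends on *used* space rather than on disk size.

from dataclasses import dataclass

@dataclass
class BlockGroup:
    inode_table_uninitialized: bool   # roughly analogous to ext4's "uninit" flag
    inodes: list                      # inode records that would need checking

def check_filesystem(groups):
    checked = skipped = 0
    for group in groups:
        if group.inode_table_uninitialized:
            skipped += 1              # no disk reads, no per-inode work
            continue
        for inode in group.inodes:
            pass                      # a real checker validates each inode here
        checked += 1
    return checked, skipped

if __name__ == "__main__":
    # A mostly empty filesystem: 2 used block groups, 98 untouched ones.
    groups = [BlockGroup(False, list(range(8192))) for _ in range(2)]
    groups += [BlockGroup(True, []) for _ in range(98)]
    print(check_filesystem(groups))   # -> (2, 98): most of the work is skipped
```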


And now, the two possible contributing factors:


File System Checking Utilities Themselves

Different applications may run different routines to actually perform the health "check" on a filesystem. This is easy to see if you compare the fsck utility set on Linux with the chkdsk utility on Windows. These applications are written for different operating systems and for different file systems. The reason I raise this as a possible factor is that the low-level system calls in each operating system are different, so you may not be able to directly compare utilities running on two different operating systems.

Disk Fragmentation

This one is easy to understand, and it also helps illustrate the differences between file systems. While the digital data held in a file is the same regardless of filesystem, how it gets stored on the hard drive differs considerably from filesystem to filesystem. File fragmentation obviously increases access times, contributing further to the speed difference.


From my understanding, ext4 tries to write data into the largest contiguous run of free blocks where no data currently resides. This greatly reduces latency when those files have to be read because, for the most part, the whole content of an individual file lies on a single contiguous stretch of the disk, so the drive's head has less seeking to do when finding every block that makes up that one file.

Ext4 can still become fragmented, but much less so, and not necessarily in a way that hurts read/write performance as severely as on NTFS, where data is written to the first open blocks in the path of the head.

So wherever the head happens to be, if there are open blocks, NTFS writes as much of the data there as will fit. If the head then has to move, say, to another part of the disk to access a different file for a program you just loaded while the first file is still being written, the rest of the data gets written wherever the head lands next.
This means that a large file is likely to end up spread across blocks on widely separated tracks, which is why NTFS needs defragmenting so often.
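As a back-of-the-envelope illustration (the layout and the 10 ms seek figure are assumptions, and real drives with caches and schedulers are far more complicated), each separate fragment costs roughly one extra head seek:

```python
# Toy seek-count model (made-up numbers). A file stored as one contiguous run
# costs ~1 seek; the same file split into many fragments costs ~1 seek per fragment.

def seeks_needed(block_runs):
    """Each contiguous run of blocks costs one head movement to reach."""
    return len(block_runs)

contiguous_file = [range(1000, 2000)]                                # one 1000-block extent
fragmented_file = [range(i, i + 50) for i in range(0, 20000, 1000)]  # 20 scattered fragments

print(seeks_needed(contiguous_file))   # 1
print(seeks_needed(fragmented_file))   # 20

# At a ~10 ms average seek time, the fragmented layout adds on the order of
# 200 ms of pure head movement before any data is transferred.
```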

It is also why servers generally don't use NTFS: a server has much heavier I/O, with data constantly being written to and read from disk around the clock.

I'm not sure, but if chkdsk checks the integrity of each file (which I believe both it and fsck do), then it would also be slower by comparison because of the fragmentation on NTFS described above.


Windows should never need to check an NTFS volume at startup. If it does, something has gone seriously wrong—something much worse than a mere BSOD or power outage. There is a significant chance that some of your data was also corrupted by whatever corrupted the filesystem metadata. The disk check can't detect that; its only purpose is to avoid further corruption.

KB2854570 lists some reasons that this can happen. One is hibernating an OS with a volume mounted, modifying the contents of the volume, then resuming from hibernation with the volume (re)attached. If you do that, there is a high probability of silent data corruption.

I don't know why your ext4 filesystem was checking itself once per week, but it was probably (hopefully) not due to a comparable crisis that recurred weekly. It was most likely just doing a routine sanity check, not a full consistency check.
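For what it's worth, that weekly check was most likely the routine schedule ext2/3/4 keeps in the superblock: a check is forced after a maximum mount count or after a time interval, whichever is reached first. Here is a small sketch of inspecting those settings with tune2fs from Python; /dev/sda1 is just a placeholder device, and the commented commands show how the schedule could be changed:

```python
# Sketch: inspect the periodic-check settings ext2/3/4 stores in the superblock.
# Requires root and the e2fsprogs "tune2fs" tool; /dev/sda1 is a placeholder --
# substitute your actual ext4 partition.

import subprocess

def periodic_check_settings(device):
    out = subprocess.run(["tune2fs", "-l", device],
                         capture_output=True, text=True, check=True).stdout
    wanted = ("Mount count", "Maximum mount count",
              "Last checked", "Check interval", "Next check after")
    return {line.split(":", 1)[0].strip(): line.split(":", 1)[1].strip()
            for line in out.splitlines()
            if line.split(":", 1)[0].strip() in wanted}

if __name__ == "__main__":
    print(periodic_check_settings("/dev/sda1"))
    # To change the schedule you would use, for example:
    #   tune2fs -c 30 /dev/sda1    (force a check every 30 mounts)
    #   tune2fs -i 1m /dev/sda1    (or every month, whichever comes first)
```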


Because the UNIX/Linux ext2/ext3/ext4 filesystems lay down data bits in much tighter, contiguous strips, whereas NTFS scatters its data bits around the disk rather like a spray can of paint. NTFS needs a regular "defrag", whereas ext2/3/4 rarely need defragmenting. It's as simple as that. If you need something off your UNIX/Linux drive, the OS knows exactly where to pick up its tightly packed strips of data bits, whereas NTFS has to scramble all over the platters to collect them. The NTFS journalling system works very well, but the heads end up playing pong to pick up the bits that have been laid down. The dual-FAT file system works well too, but when you have to run all over hell's half acre to gather your bushels of single bits, it's much faster if you don't have to and can pick up whole strings of bits instead.