How bad is performance when accessing an NTFS SSD from Linux?

I have a small SSD with Windows installed, and Ubuntu installed on an HDD. I want to store and access some files on the SSD from Linux (for performance). I don't want to create a new partition on the SSD because there isn't much space left and I think it would hurt Windows performance. So the real question is: what are the drawbacks of accessing files stored on an NTFS disk from Linux?


Under normal conditions there will be some difference in performance, although with the right mount options NTFS is quite usable on Linux.

Linux will always be somewhat slower. Windows uses a native, low-level kernel driver, while Linux uses NTFS-3G, which runs in user space (via FUSE) and is therefore inherently slower.
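If you want to confirm this on your own system: NTFS-3G mounts are reported with the FUSE filesystem type fuseblk rather than a native type, so something like the following will list them.

    # NTFS-3G mounts show up with filesystem type "fuseblk",
    # i.e. a FUSE (user-space) filesystem rather than a kernel driver
    findmnt -t fuseblk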

Some of this performance gap can be closed by following the recommendations listed in the Tuxera NTFS-3G FAQ, especially the big_writes option (useful when copying big files, but not recommended when updating them).
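As a rough sketch of what that looks like in practice (the device /dev/sdb1 and the mount point /mnt/windows-ssd are placeholders for your own setup; noatime is an optional extra):

    # One-off mount of an NTFS partition via NTFS-3G with big_writes enabled
    sudo mkdir -p /mnt/windows-ssd
    sudo mount -t ntfs-3g -o big_writes,noatime /dev/sdb1 /mnt/windows-ssd

    # Roughly equivalent /etc/fstab line, so the partition is mounted at boot
    /dev/sdb1  /mnt/windows-ssd  ntfs-3g  defaults,big_writes,noatime  0  0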

Further reading:

  • https://askubuntu.com/questions/93906/ntfs-drive-mounted-generates-huge-load

  • NTFS write speed really slow (<15MB/s) on Ubuntu

  • Are there faster solutions for NTFS on Linux than NTFS-3G?


Since this is the top organic search result, here is some fresh data from 2021.

    blk + fs      100MB   100MB overwrite   1GB   1GB overwrite
    raw ntfs         75                75    73              73
    raid0 ntfs       75                75    73              73
    raw fat         157               143   157             157
    raw ext4        980               180   166             166
    raid0 ext4      850               180   855             362

(All numbers are MiB/s. Higher is better)

All partitions were created with the gnome-disks default options:

gnome-disk-utility 40.1 UDisks 2.9.2 (built against 2.9.2)

Tested with time dd if=/dev/zero of=file.txt bs=1M count=1000 and with count=100. Each test was run ~10 times on freshly woken but otherwise idle disks. All disks were connected via SATA II (3 Gbps, not 6 Gbps) and stayed between 47 °C and 50 °C.

By "overwrite" in the table I mean dd outputting to an existing file on the filesystem instead of creating a new one.
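For reference, this is roughly what one round of the benchmark looks like (the mount point /mnt/test is a placeholder; the 100 MB runs use count=100 instead of count=1000):

    # New-file case: write 1 GB of zeros to a file that does not exist yet
    rm -f /mnt/test/file.txt
    time dd if=/dev/zero of=/mnt/test/file.txt bs=1M count=1000

    # "Overwrite" case: run dd again so it outputs to the now-existing file
    time dd if=/dev/zero of=/mnt/test/file.txt bs=1M count=1000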

Using kernel 5.12, on a CPU with the following flags:

    flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm cpuid_fault epb pti tpr_shadow vnmi flexpriority ept vpid fsgsbase smep erms xsaveopt dtherm ida arat pln pts
    vmx flags       : vnmi preemption_timer invvpid ept_x_only flexpriority tsc_offset vtpr mtf vapic ept vpid unrestricted_guest
    bugs            : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs itlb_multihit srbds

Both raid0 tests used partitions at the start of the disks, on the same two identical disks used for the raw tests. The raw tests all used partitions on the last 3/4 of the disk, and the results were averaged between the two disks (the numbers never diverged between them).