Is it reasonable to use NFS on a production web server?

Can NFS reasonably be used on production servers as a means of connecting a compute server to a storage server, assuming the connection is over a 1GbE or 10GbE LAN?

There's obviously some network overhead, and NFS seems noticeably slower with writes if sync mode is enabled. Otherwise it seems reasonably lightweight and able to scale from what I can tell, but I have little personal experience with it. Am I wrong?
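
For reference, the write-speed difference I mean is the sync/async flag on the server's export (the path and network below are just placeholders):

    # /etc/exports on the storage server
    # sync  = the server commits each write to disk before replying (safer, slower writes)
    # async = faster writes, but data can be lost if the server crashes mid-write
    /srv/files  10.0.0.0/24(rw,sync,no_subtree_check)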

The problem is that I currently have a single server acting as both the storage server and the web server, but I'll likely need to split the two in the future. Since some requests have to pass through the web application layer for authentication before the file transfer starts, that split gets tricky with this software. A network filesystem mount is the simplest option; I just don't know whether it's a good one.
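
To be concrete, by "network fs mount" I just mean an ordinary NFS mount on the web server, something like this (hostname and paths are made up):

    # /etc/fstab on the web server (hypothetical names)
    storage01:/srv/files  /var/www/files  nfs4  rw,hard,noatime,_netdev  0  0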

I also plan to try using local caching with NFS, which should improve performance a good bit, but I'm not sure if that's enough.
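
By local caching I mean FS-Cache via cachefilesd on Linux, i.e. running the cache daemon and adding fsc to the mount options, roughly:

    # on the web server (Debian/Ubuntu-style; adjust for your distro)
    apt install cachefilesd
    systemctl enable --now cachefilesd   # older Debian/Ubuntu may also need RUN=yes in /etc/default/cachefilesd

    # then add "fsc" to the NFS mount options in /etc/fstab
    storage01:/srv/files  /var/www/files  nfs4  rw,hard,noatime,_netdev,fsc  0  0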

As for alternatives, iSCSI is the only real competitor I'm aware of, and most people seem to recommend NFS over the lesser-known options.


NFS is fine, provided some specific criteria are met, namely:

  • The systems involved are both able to use NFS natively. Windows doesn't count here; it kind of works, but it has a lot of quirks and is often a pain to deal with in a cross-platform NFS environment (and if it's Windows-only, use SMB3, which eliminates most of NFS's other issues). Note that on the client side this means kernel-level support, because a user-space implementation either has to deal with the efficiency issues inherent in something like FUSE, or it has to be linked directly into the application that needs to access the share.
  • You've properly verified how the NFS client handles an NFS server restart. This includes both the OS itself (which should be fine in most cases) and the software that will be accessing the share. In particular, special care is needed on some client platforms when the software using the share holds files open for extended periods of time, as not all NFS client implementations gracefully handle server restarts by explicitly remounting and revalidating locks and file handles like they should (which leads to all kinds of issues for the client software). Recheck this any time any part of the stack is upgraded or reconfigured (see the mount-option sketch after this list).
  • You're willing to set up proper user/group ID mapping (see the idmapd/exports sketch after this list). This is big, because without it you either need to mirror the UID/GID assignments between the systems (doable, but I'd be wary of setting up SSO against an internal network for an internet-facing system), or you end up with potentially serious security implications (namely, the permissions you see on one system don't match what you see on the others).
  • You're operating over a secured network link, or are willing to properly set up authentication for the share. Without auth, anyone on the link can access it (and a malicious client can easily side-step basic UNIX discretionary access controls).
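
As a rough illustration of the restart point, these are the client-side options whose behaviour I'd be verifying (values are examples, not recommendations):

    # /etc/fstab on the client -- example values only
    # hard       = block and keep retrying if the server goes away (usually what you want;
    #              "soft" can surface errors or corrupt data in the application instead)
    # timeo=600  = 60 seconds (the unit is tenths of a second) before a retransmission
    # retrans=2  = retransmissions before the client logs "server not responding"
    #              (a hard mount keeps retrying after that)
    storage01:/srv/files  /mnt/files  nfs4  rw,hard,timeo=600,retrans=2,_netdev  0  0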
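
And for the ID-mapping and auth points, the relevant knobs are idmapd.conf and the export's security options; a minimal sketch (domain, path, and addresses are placeholders, and krb5p assumes you've actually set up Kerberos):

    # /etc/idmapd.conf on BOTH client and server (NFSv4 name mapping)
    [General]
    Domain = example.internal

    # /etc/exports on the server
    # sec=sys   = trust the UIDs/GIDs the client sends (only acceptable on a trusted, isolated link)
    # sec=krb5p = Kerberos authentication plus encryption on the wire (requires a KDC)
    /srv/files  10.0.0.5(rw,sync,no_subtree_check,root_squash,sec=sys)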

Assuming you meet all those criteria and you have a reasonably fast network, you should be fine. Also, if you can run jumbo frames, do so; they help a lot for any network filesystem or networked block storage.
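
For the jumbo-frames suggestion, that means an MTU of 9000 end to end (both hosts and every switch in between). A quick way to test it, assuming eth0 is the storage-facing interface:

    # temporary, for testing (make it persistent in your distro's network config)
    ip link set dev eth0 mtu 9000
    # verify the path actually passes 9000-byte frames without fragmentation
    ping -M do -s 8972 storage01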


NFS is absolutely OK, and is preferable to iSCSI because it is much easier to manage, share, and back up.


We've been using NFS for years to attach our SAN to our VMware ESXi servers, running hundreds of VMs on it. No trouble at all.

The bottleneck is usually the storage system rather than the network protocol.

The network connection should, of course, be fast enough, meaning 10Gb Ethernet or fibre. We don't even bother with a separate storage network anymore.


iSCSI might be a bit faster...

https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/techpaper/storage_protocol_comparison-white-paper.pdf

https://www.hyper-v.io/whos-got-bigger-balls-testing-nfs-vs-iscsi-performance-part-3-test-results/

...but NFS, like any other network redirector (SMB3, AFS/AFP, etc.), allows concurrent multi-host access, which is tricky with iSCSI or other block protocols.

https://forums.starwindsoftware.com/viewtopic.php?f=5&t=1392