What network file sharing protocol has the best performance and reliability? [closed]

We have a setup with a few web servers being load-balanced. We want to have some sort of network shared storage that all of the web servers can access. It will be used as a place to store files uploaded by users. Everything is running Linux.

Should we use NFS, CIFS, SMB, fuse+sftp, or fuse+ftp? There are so many choices for network file sharing protocols that it's very hard to pick one. We basically just want to permanently mount this one share on multiple machines. Security features are less of a concern because it won't be network accessible from anywhere other than the servers mounting it. We just want it to work reliably and quickly.

Which one should we use?


Solution 1:

I vote for NFS.

NFSv4.1 added the Parallel NFS (pNFS) capability, which makes parallel data access possible. I'm wondering what kind of clients are using the storage; if they're only Unix-like, then I would go with NFS based on the performance figures.
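As a minimal sketch of what that looks like on the web servers, here is a permanent NFSv4.1 mount (the server name `storage01` and export path are hypothetical placeholders; `nfsvers=4.1` asks the client to negotiate 4.1 so pNFS can be used where the server supports it):

```shell
# /etc/fstab entry — all web servers mount the same export.
# "hard" retries indefinitely on server outage rather than returning I/O errors.
storage01:/export/uploads  /var/www/uploads  nfs  nfsvers=4.1,hard,noatime  0  0
```

Or, to mount it by hand for testing:

```shell
sudo mount -t nfs -o nfsvers=4.1,hard,noatime storage01:/export/uploads /var/www/uploads
```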

Solution 2:

The short answer is: use NFS. According to this shootout and my own experience, it's faster.

But you've got more options! You should consider a cluster FS like GFS, a filesystem that multiple computers can access at once. Basically, you share a block device via iSCSI and put a GFS filesystem on it. All clients (initiators, in iSCSI parlance) can read and write to it. Red Hat has a whitepaper on this. You can also use Oracle's cluster FS, OCFS, to accomplish the same thing.
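To make the iSCSI-plus-cluster-FS idea concrete, here is a rough command sketch for one client node (the portal IP, target IQN, and device name are all hypothetical, and GFS2 additionally requires a working cluster stack such as corosync/dlm, which is outside this sketch):

```shell
# Discover targets advertised by the iSCSI portal (IP is a placeholder)
sudo iscsiadm -m discovery -t sendtargets -p 192.168.1.50

# Log in to the discovered target (IQN is a placeholder)
sudo iscsiadm -m node -T iqn.2010-01.com.example:storage.uploads \
    -p 192.168.1.50 --login

# With the cluster stack already configured, every node can mount the
# same GFS2 filesystem on the shared block device concurrently:
sudo mount -t gfs2 /dev/sdb1 /var/www/uploads
```

The key design point is that GFS coordinates locking across nodes itself, so concurrent mounts of the same block device are safe, which is exactly what a plain local filesystem over shared iSCSI would not be.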

The Red Hat paper does a good job listing the pros and cons of a cluster FS vs. NFS. Basically, if you want a lot of room to scale, GFS is probably worth the effort. Also, the GFS example uses a Fibre Channel SAN, but it could just as easily be RAID, DAS, or an iSCSI SAN.

Lastly, make sure to look into jumbo frames, and if data integrity is critical, enable CRC32 checksumming when running iSCSI with jumbo frames.
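A quick sketch of both settings on a client, assuming an open-iscsi initiator (the interface name `eth1` is a placeholder, and the switch and target must be configured for the same MTU):

```shell
# Enable jumbo frames on the storage-facing interface
sudo ip link set eth1 mtu 9000
```

```shell
# /etc/iscsi/iscsid.conf — CRC32C header/data digests catch corruption
# that the weaker TCP checksum can miss on large frames
node.conn[0].iscsi.HeaderDigest = CRC32C
node.conn[0].iscsi.DataDigest = CRC32C
```

Note that enabling digests costs some CPU on both initiator and target, which is the trade-off against the integrity guarantee.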