How to force re-read XFS file system contents
Solution 1:
XFS isn't a cluster filesystem, so it has no facility to do what you're asking for (well, there was a proprietary -- and expensive -- clustered variant known as CXFS, but that's another story).
The correct solution is to use a cluster filesystem. There are many of them, but unfortunately they are generally quite complex to set up.
CentOS offers GFS2, which is quite difficult to set up IMO; I personally prefer OCFS2, which is extremely easy to set up and use on Debian and derivatives (and probably Oracle Linux, too), and offers very good performance. It only lacks extended attributes and ACLs, which are usually of little importance in cluster setups anyway.
See for instance this guide.
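As a rough sketch of what "easy to set up" means here (the device name, cluster name, node names and addresses below are all assumptions; adapt them to your environment):

```shell
# Assumed: shared block device /dev/sdb1, a two-node cluster named "mycluster".
apt install ocfs2-tools

# /etc/ocfs2/cluster.conf must be identical on every node and list them all, e.g.:
#   cluster:
#       node_count = 2
#       name = mycluster
#   node:
#       ip_port = 7777
#       ip_address = 192.168.1.10
#       number = 0
#       name = node1
#       cluster = mycluster
#   (repeat the node: stanza for each node)

# Format once, from a single node; -N caps how many nodes may mount it
mkfs.ocfs2 -N 2 -L shared /dev/sdb1

# Start the cluster stack, then mount on every node
systemctl enable --now o2cb
mount -t ocfs2 /dev/sdb1 /mnt/shared
```

The service and package names above are from Debian; other distributions may differ slightly.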
Solution 2:
You can drop the filesystem cache and trigger a re-read from disk with:
echo 3 > /proc/sys/vm/drop_caches
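A slightly more careful sequence (run as root) flushes dirty pages first. Note that this only discards clean caches on the local node; it does not make concurrent XFS access from multiple hosts safe:

```shell
# Write back dirty pages first, so drop_caches discards a consistent view
sync
# 1 = page cache only, 2 = dentries and inodes, 3 = both
echo 3 > /proc/sys/vm/drop_caches
```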
Solution 3:
Please don't do that. As explained in @wazoox's answer (which I upvoted), XFS is not a cluster filesystem.
At a minimum you will have cache coherency issues, as described in your original question. Moreover, if you do not mount with the -o norecovery,ro options (i.e. both norecovery and ro), you risk corrupting your filesystem.
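For example, a secondary node that only needs to read the data could mount the device like this (the device and mountpoint names are placeholders):

```shell
# ro prevents writes; norecovery skips journal replay, which would itself
# write to the device and clash with the node actively using the filesystem
mount -t xfs -o ro,norecovery /dev/sdb1 /mnt/xfs-ro
```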
Please consider using NFS on a "master" node to export the storage to the other nodes. If you can't do that, you have two options:
- use a cluster-aware filesystem such as GFS2 or OCFS2. A cluster-aware filesystem supports multiple concurrent mounts from different running kernels, where each node has a dedicated journal (i.e. "a window") for writing to the main filesystem. Be aware that the cache coherency required between the various nodes can significantly lower performance, especially when reading something that is already in another node's cache;
- use a scale-out, distributed filesystem such as Gluster or Ceph. The main difference between a clustered and a distributed filesystem is that the latter really is an aggregation of other filesystems, each local to a node. This aggregation is generally done by a user-space application (e.g. the gluster client), which significantly impairs performance compared to a classical local POSIX filesystem. However, you can aggregate tens or hundreds of nodes, with capacity and speed scaling as you add nodes.
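As a sketch of the distributed option using Gluster (the hostnames, brick paths and volume name are made up for illustration):

```shell
# On the servers: create and start a volume replicated across two nodes
gluster volume create gv0 replica 2 server1:/bricks/gv0 server2:/bricks/gv0
gluster volume start gv0

# On each client: mount through the user-space FUSE client
mount -t glusterfs server1:/gv0 /mnt/gv0
```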
While the only way to find the best approach for your specific case is to test the various options with a representative workload, I would suggest keeping things simple at first and trying an NFS share.
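A minimal NFS sketch, assuming the master node has the XFS filesystem mounted at /srv/data and the clients sit on 192.168.1.0/24 (both are assumptions; substitute your own paths and network):

```shell
# On the master node: export the locally-mounted XFS filesystem
apt install nfs-kernel-server
echo '/srv/data 192.168.1.0/24(rw,sync,no_subtree_check)' >> /etc/exports
exportfs -ra

# On each client: mount the export instead of touching the block device
mount -t nfs master:/srv/data /mnt/data
```

This keeps a single kernel in charge of the XFS metadata, which sidesteps the corruption and coherency problems entirely.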