Is XFS reliable? In the event of a power failure, is XFS riskier than ext3 in terms of data corruption/safety?

My personal experience with XFS has been that its fsck is not as good as ext3's. We ran our mirror server on XFS, with around 3TB of space for Linux distros and the like that we mirror. At one point it developed some issue (it wasn't a power failure; IIRC it just started reporting errors), so on reboot it wanted to do an fsck. However, the fsck took more RAM than the 2GB we had on the system, so it started swapping. After 3 days it ran out of memory. I maxed out the box to 3GB and it was then able to complete the fsck fairly quickly. I know 3GB isn't much RAM these days, but at the time that was a pretty sizable box.

I also tried XFS on my laptop for a while. This is closer to your "power failure" situation, because I was having problems with the laptop locking up and had to hard power cycle it fairly frequently. I ran into several cases where files I had been working on before the crash reverted to copies several hours old. I'd edit a file and save it several times while working on it, then the system would lock up and I'd be back to where I was several hours earlier.

Because of these issues, I tend to avoid XFS. It seems like the XFS fsck isn't as mature as ext's, probably because XFS almost never has to fsck while ext2/3/4 do so regularly.

But, I'll admit that these experiences were probably 5 years ago. Hopefully they're better now. Just thought I'd pass along my experience.

In retrospect, I realize that EXT3 at that time also had corruption issues. But I run hundreds of EXT3-based servers now and can't remember the last time a hard power cycle caused corruption.

Really, the thing that soured me on XFS was the fsck taking so much RAM and thrashing the system. Our mirror server could afford to be down; on a business server, an fsck that took days rather than hours would have been a serious problem.


Nobody can answer this question better than yourself.

In other words: test it.

It's not hard to do: take a typical machine and make it perform a load similar to what you want to run (in my case it was mostly copying small files between two SAN volumes). While it's under heavy load, make it fail, and try every failure you can imagine (in my case that mostly meant pulling the plug on one volume, on the other, on the server, and on the SAN switch).

Repeat with all candidate filesystems; in my case that was ext3, XFS, ReiserFS and JFS. Today I would test ext4 and btrfs instead of ReiserFS and JFS.
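For the bookkeeping half of a test like this, here's a minimal sketch (the file count, file size, and checksum scheme are my own illustration, not what I actually used): keep a checksum manifest on a volume that is *not* being crash-tested, then after the crash and remount count anything missing or mismatched as lost.

```python
import hashlib
import os
import tempfile

def write_files(dst, n):
    """Write n small files and return a {name: md5} manifest.
    In a real test the manifest lives on a volume NOT being crashed."""
    manifest = {}
    for i in range(n):
        name = f"file.{i}"
        data = os.urandom(256)  # stand-in for the small files copied under load
        with open(os.path.join(dst, name), "wb") as f:
            f.write(data)
        manifest[name] = hashlib.md5(data).hexdigest()
    return manifest

def count_losses(dst, manifest):
    """After remounting, a file counts as lost if it is missing
    or fails its checksum (e.g. it reverted to stale contents)."""
    lost = 0
    for name, digest in manifest.items():
        try:
            with open(os.path.join(dst, name), "rb") as f:
                if hashlib.md5(f.read()).hexdigest() != digest:
                    lost += 1
        except FileNotFoundError:
            lost += 1
    return lost

dst = tempfile.mkdtemp()
manifest = write_files(dst, 1000)
# ... in the real test, this is where you pull the plug, reboot, remount ...
print(count_losses(dst, manifest))  # 0 without a crash; nonzero after one
```

The important design point is that the manifest must survive the crash intact, which is why it belongs on a separate, unstressed volume.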

What I found is that ext3 lost around 5-10 files out of every million, XFS around 5-30 per million, and both ReiserFS and JFS lost several hundred per million, reaching a thousand lost files in at least one case.

So in my test case, yes: ext3 was the most resilient filesystem, but XFS wasn't as far behind as I'd feared. And given that I was approaching ext3's 8TB limit, the clear answer was XFS.

I plan to use the slow holiday season to repeat the test with more modern filesystems. I have high hopes for ext4, but I won't bet my data on it until I see how it performs under real failures. btrfs will be a fun test, but I don't think it's mature enough yet.