Safety of write cache on SATA drives with barriers
For straight-up enterprise systems, there is an additional layer in the form of the storage adapter (almost always a RAID card), on which yet another layer of cache exists. There is a lot of abstraction in the storage stack these days, and I went into deep detail on this in a blog series I did on Know your I/O.
RAID cards can bypass the on-disk cache, and some even allow toggling this feature in the RAID BIOS. This is one reason why enterprise disks are enterprise: their firmware allows such things, where consumer drives (especially 'green' drives) don't. This feature directly addresses the case you're concerned about: power failure with uncommitted writes. The RAID card cache, which should be either battery- or flash-backed, will be preserved until power returns and those writes can be recommitted.
Certain enterprise SSDs include an onboard capacitor with enough oomph to commit the onboard cache before fully powering down.
If you're working with a system whose disks are directly connected to the motherboard, there are fewer assurances. Unless the disks themselves have the ability to commit the write cache, a power failure will indeed cause a loss. The XFS filesystem earned a reputation for unreliability due to its inability to survive exactly this failure mode; it was designed to run on full-up enterprise systems with engineered storage survivability.
However, time has moved on, and XFS has since been engineered to survive this. The other major Linux filesystems (as well as NTFS on Windows) already had engineering to survive this very failure mode. How it's supposed to work is that the lost writes will not show up in the FS journal, so the filesystem knows they never got committed, and the corruption is safely detected and worked around.
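To make that "lost writes won't show up in the journal" idea concrete, here is a minimal, purely illustrative sketch of a checksummed write-ahead log in Python. It is not how ext4 or XFS actually lay out their journals; the file name, framing, and helpers are all made up. The point is simply that a record which never fully reached stable storage fails its length/checksum test on replay and is treated as never committed:

```python
import os
import struct
import zlib

JOURNAL = "journal.log"  # illustrative path, not a real filesystem structure


def append_record(payload: bytes) -> None:
    """Append a length+CRC framed record, then flush it to stable storage."""
    frame = struct.pack(">II", len(payload), zlib.crc32(payload)) + payload
    with open(JOURNAL, "ab") as j:
        j.write(frame)
        j.flush()
        os.fsync(j.fileno())  # ask the kernel to push the drive's write cache too


def replay() -> list[bytes]:
    """Return only the records that were fully committed before a crash."""
    records = []
    try:
        with open(JOURNAL, "rb") as j:
            while header := j.read(8):
                if len(header) < 8:
                    break  # torn header: stop replay here
                length, crc = struct.unpack(">II", header)
                payload = j.read(length)
                if len(payload) < length or zlib.crc32(payload) != crc:
                    break  # partial or corrupt record: never committed
                records.append(payload)
    except FileNotFoundError:
        pass
    return records
```

A real journal is far more involved, but the recovery logic follows the same shape: replay stops at the first record that isn't provably complete, and everything after it is treated as if it never happened.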
You do point to the one problem here: disk firmware that lies. In that case the FS journal will have made an assumption that doesn't match reality, and the corruption may go undetected for some time. Parity RAID and mirrored RAID can work around this, as there should be another committed copy to pull from. A single-disk setup has no such cross-check, so it will actually fault.
You get around the firmware risk by using enterprise-grade drives, which receive much more validation (and are tested against your presumed workload patterns), and by designing your storage system so that it can survive such untruths.
The filesystem journal originally waited for the write to the journal to complete before issuing the write to the metadata, on the assumption that there was no drive write cache. With drive write caching enabled, that assumption is broken and can cause data loss. Thus, barriers were created. With barriers, the journal can make sure that the write to the journal completes before the write to the metadata, even if the disk is using write caching. At the disk-driver layer, the barrier forces a disk cache flush before subsequent I/O is sent down, when the drive reports that it has a write cache and that it is enabled. Otherwise the flush is not needed, so the barrier just prevents issuing of the subsequent I/O to the drive until the previous I/O has completed. NCQ just means it might have to wait for more than one pending request to complete before issuing more.
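The application-level analogue of that ordering looks roughly like the sketch below (hypothetical paths and function name, not the kernel's implementation). os.fsync() is the user-visible way to ask for the same guarantee a barrier gives the journal: it doesn't return until the kernel has written out the dirty pages and, assuming the filesystem's barriers/flushes are enabled and the firmware doesn't lie, told the drive to flush its write cache.

```python
import os


def journaled_update(journal_path: str, data_path: str, entry: bytes, data: bytes) -> None:
    # Step 1: make the journal entry durable before anything else is touched.
    with open(journal_path, "ab") as j:
        j.write(entry)
        j.flush()
        os.fsync(j.fileno())  # acts as the barrier: does not return until the
                              # kernel has pushed this write toward stable storage

    # Step 2: only now write the data/metadata itself, and flush that too.
    with open(data_path, "wb") as d:
        d.write(data)
        d.flush()
        os.fsync(d.fileno())
```

If power is lost between the two steps, replaying the journal entry can redo or discard the update. If the drive lies about the first fsync(), that guarantee evaporates, which is exactly the firmware problem described above.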