How is LSI FastPath different from Software RAID?
This question pertains to SSDs on RAID levels without parity (like RAID 0, 1, 10).
The recommended settings for FastPath are to set the Write Policy to Write Through, the Read Policy to No Read Ahead, and the IO Policy to Direct. This disables the cache on the RAID controller, so requests hit the SSDs directly.
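For reference, on LSI/Avago-based controllers these policies can typically be applied per virtual drive with the storcli utility. This is only a sketch: the controller/virtual-drive path `/c0/v0` is a placeholder for your own topology, and your binary may be named `storcli` rather than `storcli64`.

```shell
# Assumption: controller 0, virtual drive 0 -- adjust to your setup.
storcli64 /c0/v0 set wrcache=wt       # Write Policy: Write Through
storcli64 /c0/v0 set rdcache=nora     # Read Policy: No Read Ahead
storcli64 /c0/v0 set iopolicy=direct  # IO Policy: Direct
```

You can verify the resulting policies afterwards with `storcli64 /c0/v0 show all`.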
Doesn't software RAID do the same thing already? How is FastPath different from it then?
Edit:
This question might seem to be a duplicate of Software vs hardware RAID performance and cache usage but that question is broad and talks about Software vs Hardware RAID in general. Almost all answers there say hardware RAID without a cache is useless.
This question is about why hardware RAID w/ FastPath (and w/o cache) is better than software RAID when it comes to SSDs.
Solution 1:
Truth be told, LSI does not seem to provide many details on its FastPath technology.
Anyway, some information can be gathered from the DELL docs:
- from DELL PERC H710P controller brief:
Dell’s FastPath™ software feature enables the use of the second core on our PowerPC chip to accelerate write-through I/O, which significantly enhances SSD performance.
- from the DELL PERC H710P user manual:
FastPath is a further enhancement of the Cut Through IO (CTIO) feature, introduced in PERC H700 and PERC H800, to accelerate IO performance by reducing the IO processing overhead of the firmware. CTIO reduces the instruction count required to process a given IO. It also ensures that the optimal IO code path is placed close to the processor to allow faster access when processing the IO. Under specific conditions with FastPath, the IO by-passes the controller cache and is committed directly to the physical disk from the host memory, through the second core of the dual-core RAID-on-Chip (ROC) on the controller. FastPath and CTIO are both ideal for random workloads with small blocks. Both CTIO and FastPath provide enhanced performance benefits to SSD volumes, as they can fully capitalize on the lower access times and latencies of these volumes. FastPath provides IO performance benefits to rotational HDD-based volumes configured with Write Through and No Read Ahead cache policies, specifically for read operations across all RAID levels and write operations for RAID 0.
It's worth noting that, based on the DELL docs, FastPath only works on RAID 0, 1, 5 and 6, and it can accelerate writes only on RAID 0, and only when the IO size is smaller than the array stripe size. This last requirement leads me to think that what FastPath really does is a DMA transfer from host memory to the physical disk, bypassing all on-board firmware processing.
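To make that constraint concrete, here is a minimal Python sketch of the write-eligibility rule as the DELL docs describe it. The function name and the way RAID level is passed are my own invention for illustration; nothing like this is exposed by the controller itself.

```python
def fastpath_write_eligible(raid_level: int, io_size: int, stripe_size: int) -> bool:
    """Hypothetical check mirroring the DELL PERC constraints quoted above:
    FastPath accelerates writes only on RAID 0, and only when the IO is
    smaller than the array stripe size (sizes in bytes)."""
    return raid_level == 0 and io_size < stripe_size

# A 4 KiB random write on a RAID 0 array with a 64 KiB stripe qualifies;
# the same write on RAID 1, or a 128 KiB write on RAID 0, does not.
print(fastpath_write_eligible(0, 4096, 65536))    # True
print(fastpath_write_eligible(1, 4096, 65536))    # False
print(fastpath_write_eligible(0, 131072, 65536))  # False
```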
Back to your original question: if anything, FastPath seems to make HW RAID more similar to SW RAID, in the sense that it bypasses most of the specific hardware processing done by the RAID card. This is because in very specific scenarios (many small random reads/writes) hardware RAID can under-utilize an SSD array. This stems from the fact that traditional RAID controllers were tailored for rotating media, with high latency and relatively good bandwidth. SSDs, on the other hand, improved latency proportionally much more than bandwidth: this means that a fast, large controller cache has much less of a performance impact, while keeping controller latency to a minimum becomes very important.
Please note that a power-loss-protected controller cache remains very important in preventing data corruption/loss, but this is explained very well in the other SF thread you referred to.