Is server RAM any different than consumer RAM with consumer chipsets supporting DIMM & ECC? [duplicate]
After looking at Micron Server DRAM and seeing how massively expensive it is, I went to look for the differences.
This 11-year-old post states that there are differences in reliability, ECC support, and "the ability to have them replaced when they start warning of failure rather than after failing" between server RAM and consumer RAM.
This blog post from 2020 lists 32-bit architecture, ECC support, and dual-channel support as the main differences for most consumer PCs.
And then there are all the other posts here asking whether server RAM works in a consumer PC, with or without ECC.
So, it seems like the data is either outdated or wrong:
- Consumer motherboards have started supporting ECC (like my X570).
- I doubt the reliability (lifetime) differences, since the chips all seem to be made much the same way.
- I don't know why anyone would be running 32-bit for ANYTHING other than to support some random legacy software, which you could run anyway since 64-bit is backwards-compatible.
- I don't know of any motherboard made this decade without dual-channel support.
So now I'm thinking the differences are only: capacity per stick, the supposed 'failure warning' feature that one post alleged 11 years ago, and maybe reliability. And cost. Mainly cost.
Have there been new advancements I don't know about? Why is Micron Server DRAM so expensive? Why not just use consumer DRAM and a motherboard with ECC and DIMM support (other than needing more DIMM slots)?
Edit: This was suggested as a duplicate; it says Intel phased out ECC in consumer products. That could explain why server DRAM is expensive (since ECC is now much more of a premium spec). Does this mean that RAM with ECC is now just 'server RAM'? Looking at one of Micron's distributors, the only notable differences in upgrading to this kind of RAM are the warranty, the price (it's cheaper?), and an extended lifetime.
I'm going to call this solved. Server RAM basically just gives you peace of mind.
Solution 1:
When it comes to modern memory, there are 4 factors to consider:
- ECC vs. non-ECC. An ECC module stores 8 check bits alongside every 64 data bits (a 72-bit bus instead of 64-bit). That means one extra chip for every eight, and thus roughly 12.5% more chips and cost.
- Unbuffered vs. registered vs. fully buffered. Desktops use unbuffered memory; servers typically use registered or fully buffered memory. Registered memory has a buffer on the address and command lines, which adds latency but reduces the load on the memory controller and allows bigger memory modules to be used. Fully buffered memory has a buffer on the data lines as well. Those additional buffer chips add to the cost.
- Width (x4 vs. x8). This is how the memory bus is divided among the chips: each x4 chip supplies 4 data bits, so an x4 arrangement needs twice as many chips on the DIMM as x8. Some of the more advanced memory-protection features require x4 arrangements. More chips, more cost.
- Ranks. RAM is typically single-, dual-, or quad-ranked. Separate ranks are roughly equivalent to separate slots on the same memory channel. More memory per rank puts more load on the memory controller, so server memory tends to spread its capacity across more ranks. More ranks, more chips, more cost.
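The four factors above multiply together. Here is a back-of-the-envelope sketch of chip counts per module; the 64/72-bit bus widths are standard, but the specific configurations shown are illustrative examples, not any particular Micron SKU:

```python
# Chip counts per memory module under the four factors above.
# A channel carries 64 data bits; an ECC DIMM adds 8 check bits (72 total).
BUS_WIDTH = 64
ECC_WIDTH = 72

def chips_per_rank(chip_width, ecc):
    """Chips needed to fill the bus: each xN chip supplies N bits."""
    width = ECC_WIDTH if ecc else BUS_WIDTH
    return width // chip_width

print(chips_per_rank(8, ecc=False))      # desktop non-ECC, x8: 8 chips
print(chips_per_rank(8, ecc=True))       # ECC, x8: 9 chips (12.5% more)
print(chips_per_rank(4, ecc=True))       # ECC, x4: 18 chips (twice as many)
print(2 * chips_per_rank(4, ecc=True))   # dual-rank ECC x4: 36 chips
```

Register/buffer chips on RDIMMs and FB-DIMMs come on top of these DRAM counts, which is why a fully loaded server module costs so much more than a desktop stick of the same capacity.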
This is why server memory is more expensive. Spec-for-spec, there is generally no significant difference in cost.
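To see what those extra ECC check bits actually buy you, here is a toy SECDED (single-error-correct, double-error-detect) Hamming code over one byte. Real ECC modules protect 64 data bits with 8 check bits in hardware; this scaled-down Python sketch only illustrates the principle, and none of it reflects a real memory controller's implementation:

```python
# Toy SECDED Hamming code: 8 data bits + 4 check bits + 1 overall parity bit.
DATA_POSITIONS = [3, 5, 6, 7, 9, 10, 11, 12]   # non-power-of-two slots

def encode(byte):
    """Return a 13-bit codeword (index 0 = overall parity for DED)."""
    word = [0] * 13
    for i, pos in enumerate(DATA_POSITIONS):
        word[pos] = (byte >> i) & 1
    for p in (1, 2, 4, 8):                      # check bit p covers every
        for i in range(1, 13):                  # position whose index has
            if i != p and (i & p):              # bit p set
                word[p] ^= word[i]
    word[0] = sum(word[1:]) % 2                 # overall parity
    return word

def decode(word):
    """Return (byte, status); corrects one flipped bit, detects two."""
    syndrome = 0
    for p in (1, 2, 4, 8):
        s = 0
        for i in range(1, 13):
            if i & p:
                s ^= word[i]
        if s:
            syndrome |= p                       # syndrome = error position
    overall = sum(word) % 2
    word = word[:]
    if syndrome and overall:
        word[syndrome] ^= 1                     # single-bit error: repair
        status = "corrected"
    elif syndrome:
        return None, "double-bit error detected"
    elif overall:
        status = "corrected"                    # the parity bit itself flipped
    else:
        status = "ok"
    byte = 0
    for i, pos in enumerate(DATA_POSITIONS):
        byte |= word[pos] << i
    return byte, status

codeword = encode(0xA5)
codeword[6] ^= 1                                # simulate one flipped DRAM bit
print(decode(codeword))                         # (0xA5, 'corrected')
```

This is the "peace of mind" part: a non-ECC module would silently hand the corrupted byte back to the CPU, while an ECC module repairs single-bit flips on the fly and can report the event to the OS before the module degrades further.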