The 4TB M.2 SSDs are getting to a price point where one might consider them. The problem is that it's not trivial to connect a whole bunch of them in a homebrew NAS without spending tons of money.
The best I've found so far are cards like this[1], which allow for 8 U.2 drives, plus some M.2-to-U.2 adapters like this[2] or this[3].
With eight 4TB drives in a 2x RAID-Z1 or a single RAID-Z2 setup, that would give 24TB of redundant flash storage for a tad more than the price of a single 16TB enterprise SSD.
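The capacity math, roughly: two RAID-Z1 vdevs each give up one drive to parity, and a single RAID-Z2 vdev gives up two, so both layouts land at 24TB usable. A minimal sketch of that arithmetic (it ignores ZFS metadata/slop overhead, so real numbers come out a bit lower):

    # Rough usable-capacity estimate for the two layouts above.
    DRIVE_TB = 4
    DRIVES = 8

    # 2x RAID-Z1: two 4-drive vdevs, each losing one drive to parity
    two_raidz1 = 2 * (4 - 1) * DRIVE_TB

    # Single RAID-Z2: one 8-drive vdev losing two drives to parity
    one_raidz2 = (DRIVES - 2) * DRIVE_TB

    print(two_raidz1, one_raidz2)  # 24 24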
On AM5 you can do 6 M.2 drives without much difficulty, and with considerably better performance. Your motherboard will need to support x4/x4/x4/x4 bifurcation on the x16 slot, but then you can put four drives there [0] and use the two onboard x4 M.2 slots: one runs off CPU lanes and the other is connected via the chipset.
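A rough lane tally for that setup (purely illustrative; the exact allocation varies from board to board, and this just assumes one CPU-attached and one chipset-attached M.2 slot alongside the bifurcated x16):

    # Hypothetical AM5 lane budget for six x4 NVMe drives.
    # Slot names and counts are illustrative, not taken from any specific board manual.
    slots = {
        "x16 slot bifurcated x4/x4/x4/x4 (passive M.2 carrier)": 4,
        "onboard M.2 on CPU lanes": 1,
        "onboard M.2 via chipset": 1,  # shares the chipset's uplink with other chipset devices
    }
    drives = sum(slots.values())
    print(drives, drives * 4)  # 6 drives, 24 lanes of x4 links in total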
You can do without bifurcation if you use a PCIe switch such as [1]. This is more expensive, but it can also achieve higher aggregate speed and will work in machines without bifurcation support. The downside is that it draws more power.
Right, and whilst 3.0 switches are semi-affordable, 4.0 or 5.0 ones cost significantly more, though how much that matters obviously depends on your workload.
True. I think a switch which could do, for example, PCIe 5.0 on the host side and 3.0 on the device side would be sufficient for many cases, as a single lane of 5.0 can serve all four lanes of a 3.0 NVMe drive.
But I realize we probably won't see that.
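The bandwidth math behind that claim, roughly (per-lane raw rates scaled by the line encoding; real-world throughput is somewhat lower once protocol overhead is included):

    # Per-lane throughput estimate. PCIe 3.0/4.0/5.0 all use 128b/130b encoding.
    def lane_gbs(gt_per_s, enc=128 / 130):
        return gt_per_s * enc / 8  # GB/s per lane, before protocol overhead

    gen3_x4 = 4 * lane_gbs(8)    # a 3.0 x4 NVMe drive: ~3.94 GB/s
    gen5_x1 = 1 * lane_gbs(32)   # a single 5.0 lane:   ~3.94 GB/s
    print(round(gen3_x4, 2), round(gen5_x1, 2))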
Perhaps it will be realized with higher PCIe versions, given how tight signalling margins will get. But the big guys have money to throw at this so yeah...
[1]: https://www.aliexpress.com/item/1005005671021299.html
[2]: https://www.aliexpress.com/item/1005005870506081.html
[3]: https://www.aliexpress.com/item/1005006922860386.html