I'd be leery of dismissing the potential consumer demand. That much throughput could be put to good use for a myriad of personal and business functions, and software tends to fill whatever hardware can provide. It's like every past prediction about users never needing X amount of RAM or CPU or DPI or network or storage space.
Having that much throughput suggests paging and caching across multiple disks, or using giant models (ml or others) with precomputed lookups in lieu of real-time generation. At any rate, all it takes is a minor inconvenience to overcome and the niche will be exploited to capacity.
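To make the precomputed-lookup idea concrete, here's a toy Python sketch (the file name and sizes are made up, and the table is kept tiny so it actually runs): instead of generating a value at runtime, you memory-map a big table on disk and let the OS page entries in on demand, which is exactly the access pattern that eats read throughput once the table stops fitting in RAM.

```python
import numpy as np

# Illustrative sketch only: serve answers from a precomputed table on disk
# instead of generating them at runtime. File name and sizes are assumptions
# chosen so the demo runs; in practice the table would be far larger than RAM,
# which is where multi-GB/s read throughput starts to matter.
PATH = "precomputed_table.f32"
N = 1_000_000  # demo size; imagine billions of entries in the real case

# Build a small stand-in table once (the "offline precomputation" step).
np.arange(N, dtype=np.float32).tofile(PATH)

# Memory-map it read-only: the OS pages entries in from disk on demand,
# so lookups hit storage rather than recomputing anything.
table = np.memmap(PATH, dtype=np.float32, mode="r", shape=(N,))

def lookup(indices):
    """Fetch precomputed values; only the touched pages are read from disk."""
    return table[np.asarray(indices, dtype=np.int64)]

print(lookup([0, 42, N - 1]))
```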
> Having that much throughput suggests paging and caching across multiple disks, or using giant models (ml or others) with precomputed lookups in lieu of real-time generation.
Those sound to me like situations where latency will be the bottleneck, even on something like PCIe 3.0 x4.
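Rough back-of-envelope to illustrate (my own assumed figures, not benchmarks): with small dependent reads at queue depth 1, the drive's per-read latency caps throughput far below what the link can carry, so the fat pipe mostly sits idle.

```python
# Assumed numbers: one small random read per dependent access, ~100 us NVMe
# read latency, ~3.5 GB/s usable on a PCIe 3.0 x4 link.
read_size_bytes = 4 * 1024   # a 4 KiB read, e.g. one step of a pointer chase
latency_s = 100e-6           # assumed per-read latency
link_bandwidth = 3.5e9       # assumed usable link bandwidth in bytes/s

# At queue depth 1, each read waits on the previous result, so throughput is
# one read per latency period no matter how fast the link is.
effective = read_size_bytes / latency_s
print(f"effective: {effective / 1e6:.0f} MB/s "
      f"vs link: {link_bandwidth / 1e9:.1f} GB/s "
      f"({link_bandwidth / effective:.0f}x headroom unused)")
```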
I can currently name only one consumer use for these super-high-speed data transfers, and that's loading 3D assets and textures from the SSD to the GPU in real time based on where you're looking.
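That pattern, hand-waved into a toy Python sketch (the cache budget, cell-based visibility, and load function are all made up): keep a small resident set, pull assets off the SSD as the camera moves, and evict the least recently used ones. Real engines do this with DirectStorage-style APIs and GPU decompression; this only shows the shape of the access pattern.

```python
from collections import OrderedDict

# Illustrative sketch only: stream assets in as the view moves, evicting the
# least recently used ones once the budget is exceeded. All names and sizes
# here are assumptions for the demo.
CACHE_BUDGET = 4          # how many assets fit in "VRAM" for this toy
cache = OrderedDict()     # asset_id -> bytes, ordered by recency

def load_from_ssd(asset_id):
    # Stand-in for a multi-GB/s read of texture/mesh data off NVMe.
    return bytes(16)  # pretend payload

def visible_assets(camera_cell):
    # Stand-in for visibility/streaming logic: the current cell plus neighbors.
    return [camera_cell - 1, camera_cell, camera_cell + 1]

def stream(camera_cell):
    for asset_id in visible_assets(camera_cell):
        if asset_id in cache:
            cache.move_to_end(asset_id)          # already resident
        else:
            cache[asset_id] = load_from_ssd(asset_id)
            if len(cache) > CACHE_BUDGET:
                cache.popitem(last=False)        # evict least recently used

for cell in [0, 1, 2, 3]:  # camera moving through the world
    stream(cell)
    print(cell, list(cache))
```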