You can do pretty amazing things with well-designed (on-prem) networked storage built on NVMe drives/arrays.
At the company I was working for at the time, I replaced the traditional "enterprise" HPE SANs with standard Linux servers running a mix of NVMe and SATA SSDs, which provided highly available, low-latency, decent-throughput iSCSI over the network.
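For anyone wondering what the export side looks like on plain Linux: the usual building block is the in-kernel LIO target driven by targetcli. A rough sketch below of exporting a single NVMe namespace as an iSCSI LUN (Python just shelling out to targetcli; the device path, IQNs and portal IP are made-up placeholders rather than anything from our actual config, and the HA/replication layers would sit on top of something like this):

    import subprocess

    # Placeholders only: substitute your own device, IQNs and portal address.
    NAME, DEV = "nvme0", "/dev/nvme0n1"
    TARGET_IQN = "iqn.2015-01.net.example:nvme0"
    INITIATOR_IQN = "iqn.2015-01.net.example:vmhost1"
    PORTAL_IP = "10.0.0.10"

    def targetcli(cmd: str) -> None:
        # Runs one non-interactive targetcli command, same as typing it in the shell.
        subprocess.run(["targetcli", cmd], check=True)

    targetcli(f"/backstores/block create name={NAME} dev={DEV}")                 # expose the raw block device
    targetcli(f"/iscsi create {TARGET_IQN}")                                     # creates the target and tpg1
    targetcli(f"/iscsi/{TARGET_IQN}/tpg1/luns create /backstores/block/{NAME}")  # attach it as a LUN
    targetcli(f"/iscsi/{TARGET_IQN}/tpg1/acls create {INITIATOR_IQN}")           # allow one initiator
    targetcli(f"/iscsi/{TARGET_IQN}/tpg1/portals create {PORTAL_IP} 3260")       # you may need to remove the default 0.0.0.0 portal first
    targetcli("saveconfig")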
Gen 1, back in 2014/2015, did something like 70K random 4k read/write IOPS per VM (running on Xen back then) and would just keep scaling until you hit the cluster's ~4M IOPS limit (minus some overhead, obviously).
Gen 2 provided between 100K and 200K random 4k IOPS to each VM, up to a limit of roughly 8M on the underlying units (which, again, were very affordable and low maintenance).
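Those per-VM figures are the sort of thing you get out of an fio random 4k job. If anyone wants to reproduce the methodology, a sketch along these lines works; the device path is a placeholder and the queue depth / job count are illustrative rather than the exact parameters we used, and note that a randrw job will happily write all over whatever you point it at:

    import json
    import subprocess

    def random_4k_iops(device: str, runtime_s: int = 30) -> dict:
        # Mixed random 4k read/write job; requires fio. Point it at a scratch
        # disk or file only; it writes to the target.
        cmd = [
            "fio", "--name=rand4k", f"--filename={device}",
            "--rw=randrw", "--rwmixread=50", "--bs=4k",
            "--direct=1", "--ioengine=libaio",
            "--iodepth=32", "--numjobs=4",
            "--time_based", f"--runtime={runtime_s}",
            "--group_reporting", "--output-format=json",
        ]
        result = subprocess.run(cmd, check=True, capture_output=True, text=True)
        job = json.loads(result.stdout)["jobs"][0]
        return {"read_iops": job["read"]["iops"], "write_iops": job["write"]["iops"]}

    if __name__ == "__main__":
        print(random_4k_iops("/dev/sdX"))  # placeholder scratch device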
This setup provided very good storage performance for our apps: low latency, high throughput, and fast, minimally (if at all) disruptive failover and recovery. Some of the apps were written in highly blocking Python code and needed to be rewritten as async to get the most out of it, but it made a _huge_ (business-changing) difference and saved us an enormous amount of money.
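To be clear about the async part: the win came from having many requests in flight at once instead of one blocking call at a time, so the low-latency storage actually got kept busy. A toy illustration of the idea (not our actual code; the paths are placeholders, and asyncio.to_thread needs Python 3.9+):

    import asyncio

    def blocking_read(path: str) -> bytes:
        # Ordinary blocking read: nothing else happens in this thread while
        # the I/O completes.
        with open(path, "rb") as f:
            return f.read()

    async def read_many(paths: list[str]) -> list[bytes]:
        # Push the blocking reads onto a thread pool so they overlap instead
        # of running one after another.
        return await asyncio.gather(
            *(asyncio.to_thread(blocking_read, p) for p in paths)
        )

    if __name__ == "__main__":
        data = asyncio.run(read_many(["/tmp/a.bin", "/tmp/b.bin"]))  # placeholder files
        print([len(d) for d in data])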
These days I've moved into consulting and all the work I do is on GCP and AWS, but I do miss hands-on, high-performing gear like that.
I hope it's worth noting here that an increasing variety of formerly vertically integrated storage systems, their management layers and more of their capabilities, have become available in VM form with per-GB licensing models.
If you touch healthcare or finance outside of the trading rooms, Hitachi Data Systems (Vantara now, though I don't see the Hitachi name disappearing; it's too important in mindshare, I think) is the most surprisingly good and common installation. HDS wasn't scared to use OSS and build open platforms.
Big edit, sorry.
I wandered away from concluding with my own little dream: decoupling the block implementation from the filesystem the way DAOS does, only with a full selection of the commercial file systems available, and paying for usable capacity rather than raw installed drive specifications. Paying for raw potential at enterprise mark-up sucks. Meanwhile, not many people seem to be aware that you can run, on a very small budget and at small scale, file systems that were only available at multiples of house prices just a couple of years ago. The latest all-flash NetApp filer is 20k, and I don't imagine many of the people with the knowledge to debate these issues couldn't economically justify that, even in a home lab. Executive car options can cost less, yet there aren't many acquisitions that cost you more just to own, with orders of magnitude of difference between the costs.
Old stuff now, but the links are https://www.dropbox.com/s/rdojhb399639e4k/lightning_san.pdf?... and https://smcleod.net/tech/2015/07/24/scsi-benchmarking/ and there are a few other now quite dated posts on there.