Preferably far far away from the cloud servers that perform workloads on that storage, so you can enjoy high latency plus ingress and egress costs to access it?
None of these costs is valid in isolation. The DIY "savings" probably isn't a savings at all for most use cases.
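To make that concrete, here is a back-of-envelope sketch comparing a naive S3 bill to a naive DIY bill. Every number below is an illustrative assumption, not a current quote (check the provider's pricing page), and the DIY figure deliberately ignores the ops, redundancy, and durability work that the rest of this thread argues dominates in practice:

```python
# Back-of-envelope: monthly cost of storing and serving data.
# All prices are rough assumptions for illustration only.

S3_STORAGE_PER_GB = 0.023   # assumed $/GB-month (S3 Standard ballpark)
S3_EGRESS_PER_GB = 0.09     # assumed $/GB egress to the internet
DIY_PER_GB = 0.01           # assumed all-in DIY $/GB-month (hardware, power)

def monthly_cost_s3(stored_gb: float, egress_gb: float) -> float:
    """S3 bill: storage plus internet egress (ingress is free)."""
    return stored_gb * S3_STORAGE_PER_GB + egress_gb * S3_EGRESS_PER_GB

def monthly_cost_diy(stored_gb: float, egress_gb: float) -> float:
    """DIY bill: flat per-GB estimate; egress assumed free on owned links.
    Excludes staffing, replication, and durability engineering."""
    return stored_gb * DIY_PER_GB

stored, egress = 100_000, 50_000  # 100 TB stored, 50 TB/month served
print(f"S3:  ${monthly_cost_s3(stored, egress):,.0f}/month")
print(f"DIY: ${monthly_cost_diy(stored, egress):,.0f}/month")
```

The gap looks enormous on paper, which is exactly the trap: the DIY line only holds if you never pay for the people and redundancy that make the storage actually durable.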
// Disclosure: 10+ years back, our company undersold S3 at 1/10th the cost and with much higher performance, but only for multi-hundred-megabyte to multi-gigabyte files. S3 does the generalized object-store shape of work extremely well, likely better than anyone, and is worth it -- even before adding IAM, local querying, and all the other S3 goodies when native to AWS. While underselling them for the storage we operated, we also used them for our own storage needs.
Well, there you have it - don't use cloud servers either ;) Also, ingress is free, and if that's your requirement, a lot of the time you can colo in the same locale/building as the cloud region and plug directly into their network.
Latency to in-region storage is nowhere near as low as using SSDs on the same server. Data in the same region can often sit in different physical data centers.
True, but SSDs don't provide instant access to petabytes.
Also, it is not well known that for a long time you could colocate in the same facility, or in the same business park, and enjoy LAN latencies to that campus's AZ of your CSP.