That's the catch. S3 is ideal for when the sum total of your blobs can't easily fit on local storage - unless you want to use a NAS, SAN or something else with a load of spinning rust.
Storing your data on a single HDD you got off NewEgg will always win if you only use the one metric of $/GB.
S3's main draw isn't $/GB. It's actually more like ($/GB) * features
E.g. 11 9s of durability, Lambda events, bucket policies, object tags, object versioning, object lock, cross-region replication.
Building all of that yourself for anything over 100TB gets very expensive very quickly. Especially if you need that data for, you know, your business to survive...
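To make the durability point concrete, here's a back-of-envelope sketch. The failure-rate figures are assumptions (roughly in line with published fleet statistics, not measurements), and the 3-way replication model is deliberately naive: it ignores rebuild windows, correlated failures, and silent corruption.

```python
# Back-of-envelope annual data-loss probabilities. AFR numbers are
# rough assumptions, not measurements.

single_hdd_afr = 0.014    # assume ~1.4% annual failure rate for one HDD
s3_design_loss = 1e-11    # S3's advertised 99.999999999% (11 9s) durability

# Naive 3-way replication across independent disks: data is lost only
# if all three fail in the same year (ignores rebuild time, correlated
# failures, bit rot, fires, fat-fingered rm -rf, ...)
three_way_loss = single_hdd_afr ** 3

print(f"single HDD:  ~{single_hdd_afr:.1%} chance of loss per year")
print(f"3x replicas: ~{three_way_loss:.2e}")
print(f"S3 design:   ~{s3_design_loss:.0e}")
```

Even under these generous assumptions, DIY replication lands orders of magnitude short of the advertised figure, which is the point: you're paying for the operational machinery behind those 9s, not the raw platters.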
> Storing your data on a single HDD you got off NewEgg will always win if you only use the one metric of $/GB.
That right there is the mistake you're making. Storage isn't the only thing AWS bills you for with S3: you also pay per HTTP request, per object tag, and for data transferred out once you drop off the free tier. You're basically charged every time you look at data in an S3 bucket the wrong way.
I strongly recommend you look at S3's pricing. You might argue that you feel S3 is convenient, but you pay through the nose for it.
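A quick sketch of what that adds up to for, say, 100TB in S3 Standard. Every rate and request count below is a ballpark assumption for illustration; check the current pricing page before trusting any of it.

```python
# Rough monthly bill for 100 TB in S3 Standard. All rates and request
# volumes are assumptions for illustration -- not current AWS pricing.

tb = 100
storage_gb = tb * 1024

storage_rate = 0.023          # $/GB-month (assumed)
put_rate     = 0.005 / 1000   # $ per PUT/COPY/POST/LIST request (assumed)
get_rate     = 0.0004 / 1000  # $ per GET request (assumed)
egress_rate  = 0.09           # $/GB out to the internet (assumed)

puts_per_month = 5_000_000    # hypothetical workload
gets_per_month = 50_000_000
egress_gb      = 2_000

storage_cost = storage_gb * storage_rate
bill = (storage_cost
        + puts_per_month * put_rate
        + gets_per_month * get_rate
        + egress_gb * egress_rate)

print(f"storage only:           ${storage_cost:,.0f}/mo")
print(f"with requests + egress: ${bill:,.0f}/mo")
```

For this particular (made-up) workload storage still dominates, but a read-heavy or egress-heavy pattern flips that fast, which is exactly the "you pay through the nose" surprise.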
Another pain is the testing story. I just want to be able to write to a filesystem. There are S3 FUSE bindings, though. Maybe I'm just a dinosaur these days.
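One common way around the testing pain is to code against a tiny blob-store interface and back it with a temp directory in tests, only wiring in the real S3 client in production. A minimal sketch (the `LocalBlobStore` class and its method names are hypothetical, not any library's API):

```python
# Minimal filesystem-backed stand-in for an S3 bucket, so tests never
# touch the network. LocalBlobStore is a hypothetical interface sketch.

import tempfile
from pathlib import Path

class LocalBlobStore:
    """Maps S3-style keys onto paths under a local root directory."""

    def __init__(self, root: str):
        self.root = Path(root)

    def put(self, key: str, data: bytes) -> None:
        path = self.root / key
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_bytes(data)

    def get(self, key: str) -> bytes:
        return (self.root / key).read_bytes()

with tempfile.TemporaryDirectory() as tmp:
    store = LocalBlobStore(tmp)
    store.put("reports/2024/q1.csv", b"a,b\n1,2\n")
    assert store.get("reports/2024/q1.csv") == b"a,b\n1,2\n"
```

An S3-backed implementation of the same two methods can then be swapped in for production, and libraries like moto or a local MinIO instance cover the cases where you need to exercise real S3 semantics.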