The reason is that object storage is slow and not built for high performance, which usually matters for large databases.
For your S3 example and ignoring IOPS, you are comparing ~13ms of latency on a local spinning disk versus 10s-100s of ms of latency from S3. An SSD is faster still, averaging 1ms of latency or less.
Adding IOPS to the equation, a high volume of traffic is likely to slam your object store, whereas your block storage wouldn't even break a sweat.
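To put rough numbers on that, here's a back-of-envelope sketch (the latency figures are the assumed averages from above, not measurements):

```python
# How many synchronous reads per second a single client can issue,
# given assumed average latencies (values from the comparison above).
latencies_s = {"ssd": 0.001, "spinning_disk": 0.013, "s3": 0.050}

for store, latency in latencies_s.items():
    per_client = 1 / latency  # one outstanding request at a time
    print(f"{store}: ~{per_client:.0f} req/s per synchronous client")

# To sustain e.g. 20k read IOPS against S3 at 50 ms, you'd need roughly
# 20_000 * 0.050 = 1000 requests in flight at once.
print("in-flight requests needed for 20k IOPS at 50 ms:", int(20_000 * 0.050))
```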
I think that might work well, at least until the "hot" part of your dataset exceeds the available memory in the cache, unless you make the cache distributed and sharded.
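Something like this rough sketch of the "distributed and sharded" idea, where each block key hashes to one cache node so the hot set can exceed a single node's memory (the node names and the fetch_from_s3 fallback are hypothetical placeholders, and plain dicts stand in for real cache nodes):

```python
import hashlib

# Hypothetical cache nodes; in practice you'd want consistent hashing so
# adding or removing a node doesn't reshuffle every key.
CACHE_NODES = ["cache-0", "cache-1", "cache-2"]

def node_for(key: str) -> str:
    digest = int(hashlib.sha256(key.encode()).hexdigest(), 16)
    return CACHE_NODES[digest % len(CACHE_NODES)]

def read_block(key: str, caches: dict, fetch_from_s3) -> bytes:
    # Read path: try the owning cache shard first, fall back to S3 on a miss.
    shard = caches[node_for(key)]
    if key in shard:
        return shard[key]
    data = fetch_from_s3(key)   # slow path: 10s-100s of ms
    shard[key] = data           # populate for the next reader
    return data
```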
This doesn't solve writes, though. I suppose a writer could write to a memory buffer and only flush to S3 when a block is complete, but that wouldn't work in a multi-process/multi-node environment where writers can't share memory buffers.
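In other words, something like this sketch of that write path (bucket name, key layout, and block size are assumptions; as noted, it only holds up while all writers share the one buffer in a single process):

```python
import boto3

BLOCK_SIZE = 4 * 1024 * 1024  # 4 MiB, arbitrary choice for illustration

class BufferedBlockWriter:
    """Accumulate writes in memory; PUT to S3 only when a block is complete."""

    def __init__(self, bucket: str, prefix: str):
        self.s3 = boto3.client("s3")
        self.bucket = bucket
        self.prefix = prefix
        self.buffer = bytearray()
        self.block_no = 0

    def write(self, data: bytes):
        self.buffer.extend(data)
        while len(self.buffer) >= BLOCK_SIZE:
            self._flush(self.buffer[:BLOCK_SIZE])
            del self.buffer[:BLOCK_SIZE]

    def _flush(self, block: bytes):
        # One PUT per completed block; anything still in the buffer is lost
        # on a crash, and other processes/nodes never see it.
        self.s3.put_object(
            Bucket=self.bucket,
            Key=f"{self.prefix}/block-{self.block_no:08d}",
            Body=bytes(block),
        )
        self.block_no += 1
```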