
Well, there are other reasons: you want to write code that operates on the data, and neither the code nor the data fits on a single machine, so you have to target an abstraction that spans machines. Block storage is too low-level an abstraction for that.

That isn't to say that using high-performance block storage isn't still a win even when redundancy is layered on again at a higher level. The higher-level redundancy is also about colocating more data with the code: it isn't there only for integrity, but to increase the probability that the data sits close to the code that operates on it.
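To make the colocation point concrete, here's a toy sketch (made-up names, not any particular system's API) of HDFS/MapReduce-style locality-aware placement: each block is replicated on several nodes, and the scheduler prefers an idle worker that already holds a replica, so more replicas directly raise the odds of a node-local read.

    import random

    # Hypothetical replica map: block id -> nodes holding a copy.
    replicas = {
        "block-1": {"node-a", "node-b", "node-c"},
        "block-2": {"node-c", "node-d", "node-e"},
    }

    def place_task(block_id, idle_workers):
        """Pick a worker for a task reading block_id, preferring data locality."""
        local = replicas[block_id] & idle_workers
        if local:
            # An idle node already has the data: schedule there.
            return random.choice(sorted(local)), "node-local"
        # Otherwise fall back to any idle worker and read over the network.
        return random.choice(sorted(idle_workers)), "remote-read"

    worker, kind = place_task("block-1", {"node-b", "node-e", "node-f"})
    print(worker, kind)  # e.g. "node-b node-local"

The more replicas a block has, the more likely the intersection in place_task is non-empty, which is exactly the "redundant so it's probably near the code" effect.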




Block storage can be network-abstracted.

Even virtual memory, for that matter. The concept is ancient by now:

https://en.wikipedia.org/wiki/Distributed_shared_memory


Of course. Most production monoliths are deployed on networked block storage - a SAN - and NUMA already makes memory structurally distributed, even on a single box. But it's not the right paradigm for scaling out, any more than chatty RPC that pretends the network doesn't exist is the right way to design a distributed system.
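As a toy illustration of the "chatty RPC" point (hypothetical get()/get_many() API, not any real framework): one remote call per item pays the round trip N times, while a design that acknowledges the network batches the work into a single request.

    RTT_MS = 1.0  # assumed network round-trip time, for illustration only

    def get(key):
        # "Chatty" style: one simulated round trip per key.
        return RTT_MS, f"value-of-{key}"

    def get_many(keys):
        # Network-aware style: one simulated round trip for the whole batch.
        return RTT_MS, [f"value-of-{k}" for k in keys]

    keys = [f"k{i}" for i in range(100)]

    chatty_cost = sum(get(k)[0] for k in keys)  # 100 round trips
    batched_cost = get_many(keys)[0]            #   1 round trip

    print(chatty_cost, batched_cost)  # 100.0 vs 1.0

The per-item latency is identical in both cases; what kills the chatty version is that the round trip multiplies with the number of calls.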



