Blockchains are, by design, incredibly inefficient. I'm not talking about mining; take storage as an example. The consensus may be distributed, but the data is purely redundant. The majority of nodes in the network have to hold a complete copy of the entire chain. If Web3 takes off, it will become more difficult (read: expensive) to operate a node in the network -- doesn't that seem backwards? I haven't seen a protocol address this in a meaningful way; it seems to be shrugged off as a bridge to be crossed at a later date. Storage is cheap! But I don't see how these systems could scale to be truly impactful while remaining so inefficient.
I think the OP means less about perf and more about dollar cost since you have to either be able to trust the uptime of a storage node (and likely pay more) or make more copies (and therefore pay more) in order to be guaranteed your data isn't lost when a node goes down.
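That copies-versus-trust tradeoff is easy to put numbers on. A back-of-the-envelope sketch (all figures here are made-up assumptions, and replicas are assumed to fail independently):

```python
# With r independent replicas, each on a node that is up with
# probability p, your data is unreachable only if every replica
# is down at once.

def loss_probability(p_node_up: float, replicas: int) -> float:
    """Probability that all replicas are simultaneously unavailable."""
    return (1.0 - p_node_up) ** replicas

# Untrusted nodes at an assumed 90% uptime:
for r in range(1, 5):
    print(r, loss_probability(0.90, r))

# Four 90%-uptime copies get you to a ~1e-4 loss probability --
# roughly one trustworthy 99.99% node, but at 4x the storage cost.
```

So you can buy durability either with trust (and likely a premium per node) or with replication (and a multiple on storage), which is exactly the dollar-cost point.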
The performance of splitting up files and storing them on multiple nodes tends to be very good, since you're not bottlenecked on one node feeding out all of the bytes (think about BitTorrent and how fast a well-seeded file downloads). That said, network egress might become a problem if the node you're seeding content from lives within AWS or another cloud provider. As far as I know, no major ISP imposes a comparable egress price penalty.
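The chunked-download idea can be sketched in a few lines. This is a toy, not a real protocol: the peers and chunk assignments are hypothetical, and `fetch_chunk` stands in for an actual network request.

```python
# BitTorrent-style sketch: fetch a file's chunks from several peers
# in parallel so no single node serves every byte.
from concurrent.futures import ThreadPoolExecutor

# Pretend each peer holds a subset of the file's chunks.
PEERS = {
    "peer-a": {0: b"hel", 2: b"wor"},
    "peer-b": {1: b"lo ", 3: b"ld!"},
}

def fetch_chunk(peer: str, index: int) -> bytes:
    # In reality this would be a network round-trip to the peer.
    return PEERS[peer][index]

def download(chunk_map: dict) -> bytes:
    # chunk_map says which peer to ask for each chunk index;
    # pool.map preserves chunk order, so reassembly is a join.
    with ThreadPoolExecutor() as pool:
        parts = pool.map(lambda kv: fetch_chunk(kv[1], kv[0]),
                         sorted(chunk_map.items()))
    return b"".join(parts)

plan = {0: "peer-a", 1: "peer-b", 2: "peer-a", 3: "peer-b"}
print(download(plan))  # b'hello world!'
```

The aggregate throughput win comes from the fetches overlapping: each peer only has to push its own chunks.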
The comment you’re replying to is poking fun at a gag project, but I agree with you as well.
The ETH chain was about 500 gigs over the summer when I synced a full node, but it was growing fast. The amount of data stored is monotonically increasing for every full node in the network. You also need fast storage; you can’t use dirt-cheap spinning platters. Maybe it won’t grow faster than storage cheapens, but I’m not sure I’d make that bet.
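That bet can be made explicit with a toy model. Every rate below is a made-up assumption for illustration, not a measurement; only the 500 GB starting point comes from the comment above.

```python
# Toy model: a node's storage *cost* is chain size times price per GB.
# If the chain grows faster than disk prices fall, running a node
# gets more expensive every year, even though "storage is cheap".

def node_cost(years: int, size_gb: float = 500.0, growth: float = 0.6,
              price_per_gb: float = 0.10, price_decline: float = 0.25) -> float:
    for _ in range(years):
        size_gb *= 1 + growth              # assumed chain growth per year
        price_per_gb *= 1 - price_decline  # assumed disk price decline
    return size_gb * price_per_gb

# Under these made-up rates the yearly cost multiplier is
# 1.6 * 0.75 = 1.2, so node cost still rises ~20% per year.
print(round(node_cost(0), 2), round(node_cost(5), 2))  # 50.0 124.42
```

The sign of the outcome hinges entirely on whether `(1 + growth) * (1 - price_decline)` lands above or below 1.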
Current Ethereum is a non-scaling proof of concept. They kicked that can down the road, but the blueprint solution is now in and many of the components are built.
The short of it is that with the modular approach to blockchain scaling, more nodes mean more scale. By separating out execution, settlement, and data availability, the blockchain "trilemma" is inverted.
The writing is a bit technical, but I recommend @polynya's posts to understand more.
This is what I'm thinking too. By default, you'll always want to store more copies of the file, since you can't trust the uptime of nodes in the network the way you can trust the uptime of an EC2 instance or a node backing S3. I think SLA-style guarantees on individual nodes in the network are needed for storage to work.
Compute, on the other hand, seems to have a really bright future with web3 as long as the workload/task is parallelizable (for instance, transcoding a video that has been broken up into many HLS segments). This is at least how I work it out in my head, but I'd love for someone who knows more to school me!
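The HLS example maps cleanly onto a worker pool, since each segment can be re-encoded independently. A minimal sketch, where `transcode` is a placeholder for a real encoder invocation (e.g. shelling out to ffmpeg) and the segment names are invented:

```python
# Segment-level parallelism: each HLS segment is an independent unit
# of work, so the job fans out across many (possibly untrusted) workers.
from concurrent.futures import ThreadPoolExecutor

def transcode(segment: str) -> str:
    # Placeholder: a real worker would re-encode the segment here.
    return segment.replace(".ts", ".720p.ts")

segments = [f"seg{i:03d}.ts" for i in range(4)]
with ThreadPoolExecutor(max_workers=4) as pool:
    # map preserves input order, so the playlist order is unchanged.
    outputs = list(pool.map(transcode, segments))

print(outputs[0])  # seg000.720p.ts
```

In a distributed setting you'd still want redundant execution or spot-checking of results, since untrusted workers can return garbage, but the fan-out structure itself is the easy part.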
With proof of stake, a complete chain history is not strictly required by validating nodes.
You don’t want everyone to delete it, but it’s not required for consensus.
The consensus among distributed systems people without a financial stake in it seems to be that practical proof of stake remains an open problem. Obviously no one who owns SOL or DOT or ADA or whatever is going to say that, but I’ve looked hard for a scalable, secure, reasonably cost-effective PoS L1 and come up empty so far.
There isn’t a Cardano consensus protocol per se: there’s a family of them under the umbrella term “Ouroboros”. In production it’s the BFT variant, which is sort of a warmed-over Tendermint or Algorand.
The more ambitious variants are very mathematically rigorous but axiomatize a wall-clock oracle, as well as, in some cases, mathematically interesting but practically absurd assumptions about synchronicity.
Hoff has deployed his private fortune doing (among other, uh, things) serious research on distributed Byzantine consensus.