
For anyone curious what the merge means, in my limited understanding:

1. Transactions will get slightly cheaper, but only because the block rate is increasing from one block every ~13 seconds to one every 12 seconds, not because of anything specific to the other cool stuff in the merge

2. The network will switch from proof-of-work to proof-of-stake, meaning that there will no longer be GPU demand or substantial energy consumption attributable to the Ethereum network.

3. For a cryptocurrency to be secure there has to be some barrier to participating in consensus. Now, instead of having to have a fancy GPU to run an Ethereum validator, the barrier will be that you instead have to "stake" 32 Ethereum (and risk losing it if your validator misbehaves). A staking reward of (I think) 5% a year will be issued for your trouble.

4. The Ethereum network will be more resistant to "short-range" forks, that is, forks that diverged from the proper chain "recently". (More resistant in the sense that it will be more expensive to execute an attack like that.)

5. Once you've staked your 32 Ethereum, you currently can't "unstake" it. The ability to withdraw your stake will be added in a future eth fork. It will have to be gated by some delay (maybe you can only withdraw your stake 6 months after you staked it). Attacking the chain via a "long-range" fork, that is, a fork that diverges from the proper chain longer ago than the withdrawal period, will be much cheaper or possibly even free.

6. Ethereum will still favor liveness over consistency. An attacker can't stop the chain, but they can prevent it from finalizing for a time (at expense to themselves).

7. Token issuance will go down, probably to below the burn rate, so the base Ethereum supply will go down over time.

8. The chain will get deterministic finality after some number of blocks (I think something like a day's worth). That means that, once a block is finalized, it will never be rewritten. Network outages or attacks can prevent blocks from finalizing.
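On the ~5% in point 3: the reward isn't a fixed rate. A commonly cited rule of thumb (a back-of-the-envelope approximation, not the exact consensus-spec formula, which also accounts for participation and fees) ties annual issuance to the square root of total stake:

```python
import math

def approx_staking_economics(total_staked_eth: float):
    """Back-of-the-envelope only (not the exact spec formula):
    annual protocol issuance scales with the square root of total
    stake, roughly 166 * sqrt(total ETH staked) ETH per year, so
    the per-staker yield falls as more ETH is staked."""
    annual_issuance = 166.3 * math.sqrt(total_staked_eth)  # ETH / year
    apr = annual_issuance / total_staked_eth               # fraction / year
    return annual_issuance, apr

# With ~14M ETH staked (roughly the figure around the merge),
# this lands in the ~4-5% APR ballpark.
issuance, apr = approx_staking_economics(14_000_000)
```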




> to be secure there has to be some barrier to running a validator

There only has to be a barrier for receiving rewards. Bitcoin full nodes validate everything but receive no rewards, and need no barriers.


True, my phrasing was misleading (I used the Ethereum terminology, where a validator is not just any node that validates the chain, but one that participates in consensus). Edited my comment accordingly


And the barrier of 32 ETH.

I run one of the eth validator services for LIDO. We have 10 validator keys for testnet (formerly Prater/Goerli, now just Goerli) and around 1000 for mainnet ethereum on the beacon chain (which will be the consensus chain post mainnet merge).


For those curious, 32ETH is between $1-3k USD depending on where you're temporally located within the last 6 months, currently ~1.8k.


?? No, that's closer to $58k USD..


OP probably made the mistake because if you Google "32 ETH in USD" it shows you how much 1 ETH in USD is.


I don't see how this reduces energy consumption. To buy into being a validator node you need Ethereum; to get Ethereum you need to do proof of work. If the number of people doing it falls, then mining Eth will get easier, and the most cost-effective investment in Eth would be running a large number of validator nodes to collect the payment (you know, all sitting in the same DC). So of course expanding your mining operation still makes sense, since it will fund the increase in your validator node operation.

Why would anything change when the economic incentives haven't? Mining isn't getting less lucrative or necessary.


You can buy ethereum on an exchange. Essentially they're changing the economic commitment required to participate in the consensus from demonstration of computational work to a monetary stake.


The end state of that is that all Ethereum winds up being staked. There's no reason not to stake Ethereum if there's no Ethereum inflation, and so the only new source of Ethereum would be PoW-mining.


PoW-mining is being turned off. The source of new ETH will be ETH holders selling ETH to meet liquidity needs. People cannot save 100% of their income forever.


Most of it, probably. The more tokens you have, the easier it is to put the majority of them into the stake, and fees are irrelevant for large-scale operations, unlike for small transfers. So the biggest holders of tokens become the biggest stakers, and thus the biggest earners. And the rate of token accumulation will only increase because of compounding of the tokens in the stake (new profits will be staked too).


Won't it end up being deflationary under PoS?


Honest question - if you get deterministic finality in a day (#8), then how can you attack the chain with a "long-range" fork (#5). Doesn't #8 solve #5?


The answer, as I understand it, is that anyone who's continuously running a node won't be confused by such a fork, but someone who just joined the network has no way of telling which fork is the true one.

Put another way, if you know the true chain at time T, you also know it at time T+1. But if you're just joining the network, or you went offline for a bit, that means you don't know which chain is the true one and need some outside-of-protocol way of determining which fork to trust.


What can users do to get certainty over which one is the correct fork? In PoW you can check the PoW. Is there a trustless solution for this in PoS? Or is the only solution essentially to ask around and hope that people aren't lying to you?


In practice, it's the same as with Bitcoin: you have to get the correct, current software. It's just that the software will include a block hash from a few months back.

You might argue that Bitcoin is defined as the chain with the most hashpower, period. That would remove all subjectivity from Bitcoin, but it would mean that a 51% attacker could arbitrarily change the rules and steal people's funds. That's not how it actually works; a 51% attacker still has to follow the rules of the protocol for their blocks to be accepted by the non-mining nodes, and that means there's social consensus on the correct software to run the protocol.


That's not how it works. Hashpower decides the canonical chain, not the rules of the system. Hashpower makes sure that the blockchain can't be rewritten. Hashpower can't change the rules of the system, because those blocks would be invalid and rejected by the network.

There's no consensus needed on which rules to use. Everyone can use whichever rules they want, by using different versions of the software. Different rules define a different currency, like euro or dollar. Using the best currency with the best rules is just a game theoretic focal point. Everyone chooses to use the best version of the software, because they assume that everyone else does so too, even in the absence of communication. There is no "correct, current" software in Bitcoin, because it would be a single point of failure.

There's no objective protection against long-range attacks in PoS, because there's no hashpower to prove the canonical chain. It requires the provider of the "correct, current" software to decide which chain is the right one.


>>Hashpower decides the canonical chain, not the rules of the system.

No, hash power and the rules decide it. If you have invalid signatures in your blocks, it doesn't matter how much hashpower your fork has, it won't be accepted as canonical by other forks.


*by other nodes


51% attacks cannot arbitrarily change the rules, nor can they steal funds. They can double spend or prevent finality. Nodes enforce the rules of the network by rejecting blocks they deem invalid.


It’s the same as with Bitcoin:

- wait a bit to make sure that you can talk to different people on the network and see what each of them see

- check checkpoints on twitter or websites like etherscan (are they seeing the same thing I’m seeing?)

In projects like Mina, since you do not download the history of the chain (there's a single zero-knowledge proof of a few kB that covers the whole history), you must rely on a marker for "chain quality" to differentiate potential forks.

Note that there was also some research on how to get signal from the transactions you see that you’re on the correct fork (from some ex colleagues working on libra): https://eprint.iacr.org/2019/1440.pdf


If you're running a node, and it finalizes a block, that means your node will refuse to ever revert that block.

A long-range attack is when a validator withdraws their stake, waits out the withdrawal period (e.g. the 6-month delay mentioned above), and then creates a fake chain starting from before they withdrew their staked eth.

Because in the "real" history (e.g. the ones that most nodes have seen over the past 6 months) the validator doesn't have Eth locked up still, there's no way to punish them. Thus, these long range attacks get very cheap (you could even imagine someone who pays validators for old keys -- aka, you don't even need to be a validator yourself).

These two facts together mean that PoS blockchains require some "weak subjectivity" - which pretty much means when you download and start syncing your node, you need to know a "finalized" block hash from the past 6 months (or within the withdrawal delay). This ensures you won't get tricked by a cheap long-range attack.

In practice, I don't think this will be much of a problem - clients can just do a new release with a new block hash every few months for new users!
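The weak-subjectivity check described above, as a minimal sketch (hypothetical helper and hashes of my own; real clients do this as part of checkpoint sync rather than scanning a list):

```python
def chain_passes_checkpoint(chain: list, trusted_checkpoint: str) -> bool:
    """Reject any candidate chain that does not contain a recent
    finalized block hash obtained out-of-band (e.g. baked into a
    client release, or from a party you trust).
    `chain` is a list of block hashes, oldest first."""
    return trusted_checkpoint in chain

honest = ["a1", "b2", "c3", "d4"]
attacker = ["a1", "x9", "y8", "z7"]  # long-range fork from before "b2"

trusted = "c3"  # hash shipped with the client release
assert chain_passes_checkpoint(honest, trusted)
assert not chain_passes_checkpoint(attacker, trusted)
```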


There must be some part of the attack I am missing here.

Let's say I'm an attacker and I do this. But now none of the subsequent blocks on the original chain will validate anymore because of hashes mismatching. I could of course validate new blocks, but 6+ months of blocks from only one or a small number of validators (depending on how much validator capital one can amass for the attack) is going to be pretty obvious.

Unless the attacker can get a significant percentage of validators to join the attack, I can't see it working out in practice.


What do you mean by obvious? Your crypto client is a computer and unless you go and check what it is connected to this will be opaque to you (and so not obvious).


Because they don't actually have "deterministic finality" in any meaningful sense. It's not possible.

PoS is, as usual, vulnerable to attacks like grinding, and you can only paper over this.


Can you link more details on this? (am n00b, genuine curiosity)


Long-range fork is possible if network participants are manipulated or forced to switch to another fork. In practice, the correct fork is communicated from entities who control the majority stake, such as exchanges. In the case of network disruptions, there might be multiple valid forks, and choosing the correct one is arbitrary, because there isn't one. There's no objective consensus similar to proof-of-work.


It sounds like there is if you wait for the deterministic finality


By the way I was under the impression that Ethereum will still have forks, and thus probabilistic finality


Ethereum's PoS is really two consensus algorithms. The first converges rapidly but has probabilistic finality. The second has true finality, in the sense that a finalized block can't be reverted without destroying a large percentage of total stake, but it takes a while before that finality kicks in.


Most of these are pretty spot on. The one thing I will note is that the staking rewards are not a constant 5%; they are a function of network demand. Stakers are rewarded a portion of transaction fees, so the volume of demand per block will increase the gas price, and with it the validator rewards. If the number of validators on the network increases, each individual's rewards will drop.

That being said, the adoption of layer 2 technologies has taken off, and significantly impacted the demand for layer 1 blockspace.


Is there anywhere I can read a technical version of how proof-of-stake work?

I'm still confused at what happens if you create a valid block with the Ethereum you stake and then that block doesn't end up being part of the main branch.


You get assigned by a random number generator to create the next block. If you fail to create it your only penalty is the opportunity cost (i.e., you miss on the reward that you would have gotten for creating one).

So at any given time only one validator is going to create a block. If you create it, it's going to be included in the blockchain. If you don't create it (or you create but due to network problems fail to communicate it), then the blockchain skips a beat and someone else will create the next one.

If you want a user-friendly introduction, I recommend Ben Edgington's book (WIP): https://eth2book.info/altair/

Or straight from the source: https://github.com/ethereum/consensus-specs


In proof of work the leader election (the thing that decides who gets to write the next page/block of the ledger) is based on how much money you can put in computing power or how big your mining pool is. The more computing power, the more lottery tickets you get to write the next block.

In proof of stake, the more crypto tokens you have, the more lottery tickets you get to win the right to write the next block.

In PoW the lottery is solving puzzles, that’s the energy inefficiency, in PoS the lottery is decided like a real lottery: with a random number generator (a distributed one at that, so that people agree that it was generated honestly)
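The ticket analogy can be sketched as a stake-weighted draw (illustrative only: Ethereum actually splits stake into fixed 32-ETH validators and draws uniformly among them, which amounts to the same weighting; the names and seed here are made up):

```python
import random

def pick_proposer(stakes: dict, seed: int) -> str:
    """Pick the next block proposer with probability proportional
    to stake. `seed` stands in for the shared on-chain randomness,
    so every node derives the same winner independently."""
    rng = random.Random(seed)
    names = sorted(stakes)                 # deterministic ordering
    weights = [stakes[n] for n in names]
    return rng.choices(names, weights=weights, k=1)[0]

stakes = {"alice": 96.0, "bob": 32.0, "carol": 32.0}
proposer = pick_proposer(stakes, seed=12345)  # same seed -> same pick on every node
```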


How do you make an honest distributed random number generator, then?


A bunch of nodes generate random numbers and publish their hashes. After all hashes have been published, all nodes reveal their random numbers, and everyone verifies that each number actually corresponds to its hash. Xor all the numbers together, and as long as at least one node was honest and generated a random number, the result will be a random number. I don't recall if this is exactly what proof of work does but I think it's pretty solid.
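That commit-reveal idea, sketched (simplified: there's no handling for a participant who commits but refuses to reveal, which is the scheme's main weakness):

```python
import hashlib
import secrets

def commitment(value: bytes) -> str:
    """Publish only the hash of your secret value (the 'commit')."""
    return hashlib.sha256(value).hexdigest()

# Phase 1: each node publishes a commitment to its secret number.
secrets_by_node = {name: secrets.token_bytes(32) for name in ("a", "b", "c")}
commits = {name: commitment(v) for name, v in secrets_by_node.items()}

# Phase 2: everyone reveals; others verify against the commitments,
# then XOR all values together.
result = 0
for name, value in secrets_by_node.items():
    assert commitment(value) == commits[name], f"{name} cheated"
    result ^= int.from_bytes(value, "big")

# `result` is uniformly random as long as at least one input was.
```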


PoW as used in bitcoin and ethereum is a bit different. It's designed to reduce the communication needed between nodes.

Each block is published with a hash which is based on all the previous blocks plus its own data. All participants are racing to solve a puzzle: basically a big guessing game where the first to guess the winning number gets to make the next block. The game is to find a number which, when combined with the previous hash and put through an expensive hash function, produces a result with a specific number of leading zero bits (i.e., a result below a target value). Since hashes are one-way functions, the only way to solve the problem is for everyone to guess until the solution is found.

The first participant to find the solution gets to create the new block. They also check the last few blocks to see if anyone's been cheating by adding fraudulent transactions. If fraud is detected they just ignore those nefarious blocks like they never happened.

The algorithm scales with the number of participants by automatically making the "difficulty" higher by changing the size of the pattern that needs to be matched.
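The guessing game above, as a toy (the difficulty here is tiny so it finishes instantly; real networks require astronomically more leading zero bits, and Bitcoin compares against a full target value rather than counting bits):

```python
import hashlib

def mine(prev_hash: str, data: str, difficulty_bits: int) -> int:
    """Search for a nonce whose block hash falls below the target,
    i.e. has `difficulty_bits` leading zero bits. Toy version of
    the PoW lottery: more guesses per second = more tickets."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256((prev_hash + data + str(nonce)).encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

# ~2**12 = 4096 expected guesses at 12 bits of difficulty
nonce = mine("prevblockhash", "some transactions", difficulty_bits=12)
```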


Yeah I meant proof of stake, not proof of work


Sure but I think the question is about the random number generation used to decide in PoS who gets to do the next block.


You should check out drand (https://drand.love), which is based on threshold cryptography; the group running the network is composed of 16 independent organisations, none of which can manipulate or predict the next beacon. Beacons can be consumed from the public HTTP APIs (https://drand.love/developer/http-api/).

Lots of exciting features to land very soon that include Timelock Encryption, among several others.

Technical explainer can be found here: https://research.protocol.ai/tutorials/resnetlab-on-tour/dra... and documentation here: https://drand.love/docs/


Wondering the same thing myself. The entire problem with proof of stake is that the only thing in the digital realm that can be relied on as a source of objective truth is computation, hence proof of work.

How do you get verified (true) randomness without an oracle and without a connection to the analog world (proof of work)?

It would be amusing if the answer was randomness that came from a proof of work algorithm.


An oracle is a totally sensible approach for certain use cases; see the VRF support in Chainlink as an example:

https://blog.chain.link/verifiable-random-function-vrf/


I don't know how they do it but it seems easy in practice. You hash the concatenation of chunks of data submitted by participants. Adding more data to the hash can only make it more random, not less. As long as at least one participant wants the outcome to be random and therefore they submitted actual random bits, the result will be random no matter how unrandom all the other participants' data chunks were.


You use a VRF (verifiable random function); check out what Mina, Dfinity, Celo, and pretty much all the PoS coins do.


    def totally_random():
        return 42
In seriousness, Solana's proof of history is a super novel take on top of proof of stake to solve this problem.

https://solana.com/news/proof-of-history


I'm not convinced proof-of-stake works at all: https://yanmaani.github.io/proof-of-stake-is-a-scam-and-the-...

It's almost certainly better for the environment / our continued existence than proof-of-work, if we must suffer cryptocurrency, but I don't trust people who were willing to continue to engage with proof-of-work for years to create an ethical (or even functional) alternative.


For anyone who might be concerned by the claim here; it’s really a political argument and not a technical one. Proof of stake as implemented by Ethereum makes a tradeoff which requires some small amount of trust (to identify the correct chain more quickly) which does not create a centralization risk.

But this is always the way; leveraging trust where appropriate can allow great increases in efficiency (this is how society works), so you have a political spectrum where Ethereum is "responsibly pro-social" and Bitcoin is still a kind of super-trustless money, with enormous energy costs to enable this somewhat superficial security property.


Thanks for this. Can you further explain the part about forking being easier from point 5?


For sure. In this comment I'll be using "validators" to mean "nodes participating in consensus".

The "true" chain according to the new protocol is the one that has more signatories. Validators sign blocks as they're produced, and when a block gets more than 2/3rds of the signatures, it's finalized on the chain and won't be rewritten. (Well, it's much more complicated than that, but that's the general intuition.) When you sign a block, you get a reward on that chain, to incentivize people to participate in consensus.

So the question is, when there's a fork, what stops people from just signing blocks on both forks so they get the signing reward no matter which fork "wins"?

The protocol disincentivizes this by requiring validators to put up a stake of 32 eth, which gets "slashed" if you do that. Put another way, if there's a fork and you sign blocks on both sides of the fork, you lose part of your stake.

But that threat only works if you actually have a stake to lose. If it's been ${withdrawal_delay} months and you no longer have any eth staked, you can start signing blocks from a long time ago and the protocol can't slash you as punishment. So a group of 2/3rds of the former-validators could freely start a fork from ${withdrawal_delay} months ago, sign a bunch of blocks on it, and if their fork loses they'd face no penalty. Someone currently participating in consensus can see this is happening and won't be confused, but someone just joining the network will be.

The solution is pretty easy, if inelegant. When entering the network for the first time, if there are multiple competing forks, you'll just find someone you trust IRL and ask them which chain is the real one. If you've been offline for less than ${withdrawal_delay} months, you won't have to worry about this, but it is a problem for fresh users.

If you set ${withdrawal_delay} to infinity, this isn't a problem, but probably you don't want to do that. Once withdrawals are enabled, there'll be some tooling to make it easy to figure out which chain is the real one, but I don't expect there'll be many forks that are remotely convincing.
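The slashing idea above (signing both sides of a fork) can be sketched as a double-sign detector (a simplified structure of my own; real Casper FFG slashing conditions cover double votes and surround votes on attestations, not just block signatures):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SignedBlock:
    validator: str
    slot: int        # position in the chain
    block_hash: str

def find_double_signers(signatures: list) -> set:
    """A validator that signs two *different* blocks at the same slot
    is provably equivocating; anyone can submit the pair of signatures
    as evidence, and the protocol slashes part of that stake."""
    seen = {}       # (validator, slot) -> first block hash signed
    slashed = set()
    for sig in signatures:
        key = (sig.validator, sig.slot)
        if key in seen and seen[key] != sig.block_hash:
            slashed.add(sig.validator)
        seen.setdefault(key, sig.block_hash)
    return slashed

sigs = [
    SignedBlock("v1", 100, "aaa"),
    SignedBlock("v2", 100, "aaa"),
    SignedBlock("v1", 100, "bbb"),  # v1 signed both sides of a fork
]
# find_double_signers(sigs) == {"v1"}
```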


Thanks!


In theory, a very large staker could run a large share of the network, exit (either normally or get slashed), then rewind the chain prior to the exit, run it on their own independently, and then pop up after a while with a competing fork. This makes syncing from the beacon chain's genesis risky because you could end up following the attacker's fork.

To make up for this risk, Ethereum then relies on distributing recent 'weak subjectivity' snapshots (through other known nodes, block explorers, baked into client releases, etc.) to make sure new folks can join the legitimate network and ignore an attacker's. Those snapshots basically rely on social legitimacy to help folks get going with the 'legitimate' chain. It is a trust assumption.

In contrast (at least in theory), in a proof of work network you can 'objectively' determine which of competing forks of a chain is the legitimate one by a simple metric, the 'longest chain with the highest difficulty'. The presumption made here though is that you will have an open internet and honest client software that will not censor the legitimate chain. A trust assumption is still made that those that introduce you to the p2p network aren't hiding a longer chain from you.


Something to keep in mind is that you already mostly need such trust.

You are using that "social legitimacy" to know you are downloading the legitimate software, or viewing the legitimate source code, or documentation.

Your computer can't calculate if ethereum.org is the legitimate "Ethereum", or it's ethereum.io (just making it up).


1. Transactions won't change noticeably since the gas costs won't change and the network won't have more capacity.

2. The 'demand' from mining for GPUs is over-rated. It was a brief problem years ago. You are correct that energy usage for validation will go down.

3. One difference between bitcoin and eth... ethash is a memory-hard algo, so it doesn't require the fastest GPU. 4-5 year old GPUs are more ROI-efficient. Everything is bound by the speed of the memory controller.

4. To be seen.

5. 'Staking' today is just depositing ETH into an ETH1 contract that doesn't have a withdrawal function. It will require forks to add that functionality.

6. Correct.

7. Correct. Although this likely won't have an impact on price like people think.

8. A fork could always change things.


2) The latest earnings from NVIDIA and AMD showed that about 30% of their GPU sales went to crypto miners, so no, the shortage was not an exaggeration, and certainly not from many years ago.


The shortage was due to demand from people beyond mining wanting GPUs (including a new problem... scalpers) and there not being enough production. In effect, it was a manufactured shortage. There are a ton of articles that talk about all the aspects that went into the shortage and miners are definitely not the only reason. Easy scapegoat though...

https://www.pcmag.com/news/inside-the-gpu-shortage-why-you-s...

https://gamerant.com/nvidia-gpu-shortage-not-due-crypto-mini...

https://www.pcgamer.com/why-crypto-mining-wasnt-the-only-cul...


Links #1 and #3 mention crypto as one of the causes of the shortage.

Link #2 is a statement by Nvidia, who has a vested interest in lying, and has lied about the exact same issue so blatantly that they got fined by the SEC for it.

https://www.theverge.com/2022/5/6/23059930/nvidia-sec-charge...

Scalpers only come in when there's a shortage already, they don't cause shortages except at launch, they just delay items getting to the real users.

There's image and video evidence of dozens to hundreds of GPU mining rigs in a single room if you care to look.

Eg. https://cdn.discordapp.com/attachments/788512140322406473/90...

https://old.reddit.com/r/pcmasterrace/comments/r39ph3/found_...

https://old.reddit.com/r/pcmasterrace/comments/wganva/posted...


I just gave some random links I googled... feel free to keep digging, there are more that back me up.

You're wrong about scalpers too. They know the market... of course they know to prey on every opportunity.

Your 'evidence' is really nothing. That's a few thousand gpus... not that many, honestly. They are also not top of the line gpus... which gamers want to buy. older rx470 polaris class gpus are more than enough for large mining farms.

full disclosure: i'm a huge gpu miner, so i definitely have some experience in this area.


Then you have a vested interest in playing down the effect this waste-of-energy bubble had (and still has).

> They are also not top of the line gpus... which gamers want to buy. older rx470 polaris class gpus are more than enough for large mining farms.

Way off the mark there: gamers not in the 1% don't buy top-of-the-line GPUs (as reflected in the Steam survey), they buy the budget to mid-range price. When miners started buying GPUs left and right in pallets, NVIDIA and AMD saw it and shifted manufacturing to the top-of-the-line GPUs, where they have the highest margins even if the wafer would yield a lot less, because miners would buy them ASAP anyway. This reduced the amount of lower-tier GPUs on the market.

Funny you mention the RX470, a GPU famously never in stock because of this dumb pyramid scheme.


> Then you have a vested interest in playing down the effect that this waste of energy bubble had/is.

Nope, not at all. My business model doesn't depend on mining forever. I'm not playing down anything. I'm just saying that mining isn't the only reason (and at most 30% of the reason) that GPUs were hard to find.

> Funny you mention the RX470, a GPU famously never in stock because of this dumb pyramid scheme.

It came out 5 years ago and newer models replaced it.


What are you going to do with your gpus once the merge happens?


Scalpers cannot create supply problems. Scalpers can only exist when demand is higher than supply.

Scalpers are a symptom of the supply/demand problem. Not a cause.


> Scalpers cannot create supply problems.

Scalpers buy up GPUs. There is less supply. That creates a problem.


>Scalpers buy up GPUs. There is less supply. That creates a problem.

Those GPUs are then sold immediately to real users. Scalpers do not increase demand.

Scalpers provide a service. They have automation or a way to buy stock fast at MSRP or closer to MSRP. Then they sell it to people who are willing to pay more in order to have the product now or avoid spending time to search for stock.

Scalpers cannot exist unless demand is greater than supply.


> Scalpers do not increase demand.

That changes what you said previously and isn't what I responded to. We are talking about supply, not demand.


Scalpers do not decrease supply.

Scalpers do not increase demand.

Scalpers offer a service to users to buy hard-to-find-products now without waiting or searching.


> Scalpers do not decrease supply.

We are going to have to agree to disagree on this one. In the second link, it is in reference to tickets. At the end of the day, tickets are all the same. If you can't get the front row of the show, you'll accept something back a bit... because you want to see the show.

GPUs are different. Miners want specific brands and models and won't / can't accept anything else.

[1] https://www.reddit.com/r/buildapc/comments/pqcrg8/are_there_...

[2] https://news.ycombinator.com/item?id=4195671


How do you explain the very strong correlation of Ethereum profitability and GPU price?

https://www.3dcenter.org/news/news-des-3031-juli-2022


Scalpers.


I took it as the environmental stance not supply when I read it but can see both.


Still the same: the more GPUs are added to the network, the harder it is to mine; the harder it is to mine, the more GPUs are added, until it is no longer profitable, all while drawing more power.


When profitability goes down, miners shut down.


If you have a good understanding of ETH PoS perhaps I can ask this question.

The only issue I’m still confused about is the incentive to add hardware to the network. Obviously the network needs to be resilient to attack on consensus but it also needs networking and physical hardware resilience. How is that incentivized in the PoS?


The stakers have to stay online continuously to perform consensus and block-building actions, or they are penalised, losing value from their stake over time rather than gaining it. The penalty is reasonable, so a small downtime during updates and such isn't a problem.

This incentivizes them to have reliable networking and hardware, running 24x7.

However they are heavily penalised if they are seen to do things like double-voting by accident, so they can't just put up duplicate systems and forget about it. High availability failover is something they can only do carefully.

This is different from the current proof-of-work miners. If a miner stops or screws up, they won't gain mining fees during the stop, but they don't lose anything either.


I understand all that, but what I don't understand is what encourages more physical nodes with high-quality bandwidth. It seems like if you and I both want to stake for PoS, we'd be better off sharing the same hardware and buying higher-quality hardware than we could individually; that's a sub-linear incentive, which I'd think would result in the physical hardware footprint significantly shrinking. Am I missing something?

Now you might say, well, isn't that a good thing, using fewer resources? But without an incentive to decentralize physically, won't there be a risk of centralization and potentially catastrophic outcomes for the network?


One factor fighting against this is that validator inactivity penalties spike to be larger if a lot of validators are offline at once. So if you think all the existing validators are in e.g. AWS us-west, you are taking more risk by putting your validator in AWS us-west too, because your failures are more likely to be correlated with other validator failures.
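As a made-up illustration of that correlation effect (the scaling factor here is invented for the sketch, not the spec's actual penalty formula):

```python
def inactivity_penalty(base_penalty: float, offline_fraction: float) -> float:
    """Illustrative only: the penalty for being offline grows with
    the fraction of all validators offline at the same time, so a
    correlated failure (everyone in one datacenter going down
    together) costs each of them more than an isolated outage."""
    return base_penalty * (1.0 + 3.0 * offline_fraction)  # made-up scaling

# An isolated outage vs. going down with half the network:
solo = inactivity_penalty(1.0, 0.01)
correlated = inactivity_penalty(1.0, 0.50)
```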


> what I don’t understand is what encourages more physical nodes with high quality bandwidth because it seems like if me and you both want to stake for PoS then we can even be better off sharing the same hardware and buying higher quality hardware than we could individually.

For the same reason that there are a number of mining companies too. But you're right, PoS doesn't 'fix' that problem.


A common approach is staking pools, which incentivize decentralization for the greater good of the network.

One of the largest ones for eth and several other chains ($7B TVL) is LIDO, for which I'm an eth mainnet and testnet validator node operator. They distribute stake over many high-quality validators and have very rigorous operational guidelines to ensure only high-quality operators are allowed. I'm not shilling for LIDO, but they are a very professional and well-run DAO that truly cares about what they're doing.

One of the cool things they do is liquid staking. In most PoS networks, your funds are locked during a given time boundary, generally referred to as an Epoch. You start staking at an epoch beginning and can not access funds until the beginning of the next epoch. Validators are rewarded for reliably and securely (no double signing!) performing their task via staking rewards. Staking rewards are quite akin to inflation in more traditional monetary systems.


Realistically no blockchains that I'm aware of have any incentives for that. A lot of the Ethereum ecosystem is running in us-east-1.


How do 7 and 4 work together?

If the reward for staking is fixed at 5%, what keeps the issuance rate from being net positive?


The staking reward is for collateralized validators only (a small % of the network)

Overall issuance can potentially be negative due to the burn effects of EIP-1559, where the base transaction fee of an eth transaction is deleted from the network forever (akin to using oil/gas/petrol: once it's used, it's gone)

If the burn (2) exceeds the staking issuance (1), then net issuance is negative
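The arithmetic is just issuance minus burn; the figures below are hypothetical, not real network data:

```python
def net_issuance(staking_rewards: float, burned_base_fees: float) -> float:
    """Net change in ETH supply over some period:
    new ETH issued to validators minus base fees burned by EIP-1559."""
    return staking_rewards - burned_base_fees

# Hypothetical daily figures: if the burn exceeds issuance, supply shrinks.
print(net_issuance(1700.0, 2500.0))  # -800.0
```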


Pretty cool visualization site: https://ultrasound.money/


"cool" maybe, but completely uninformative.


Ok, I see that you're now just making repeated negative comments towards me.

It is your opinion that it is uninformative, for you, but for myself and many others, it is quite informative.


Exactly, it's my opinion. What's wrong with sharing my opinion? The whole purpose of this site is sharing opinions and points of view with others. Do you think I'm looking for comments written by you and posting "negative" replies? You're crazy.


5. Staking today also means you're running a validator on the beacon chain, and accumulating a balance that will be respected by the future withdrawal function.


True... although there is no absolute guarantee of that.


I thought the network would have 64 shards, increasing capacity by 64x, with finality being periodic and a bit later.

What's changed in that plan?


They decided to do proof-of-stake first, putting shards off until later.

Also it started looking like zkrollups could do everything without loss of security or convenience, and with way better scaling than they ever hoped to get with the original sharding plan. So they removed execution from sharding, using it instead as just data for rollups.

Then someone came up with "danksharding" which is more like a RAID system than actual sharding.

The plan now is described in some detail here: https://members.delphidigital.io/reports/the-hitchhikers-gui...


Regarding unstaking: it will be capped by a queue of around 900 validators per day, similar to what is used for staking. There won't be an arbitrary delay.
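A sketch of the difference between a fixed delay and a rate-limited queue, assuming the ~900/day figure above (the real limit varies with the total validator count):

```python
from collections import deque

EXITS_PER_DAY = 900  # rough cap from the comment above; illustrative only

def drain_exit_queue(queue: deque, days: int) -> list:
    """Toy rate-limited exit queue: at most EXITS_PER_DAY validators
    leave per day. Your wait depends on queue position, not on a
    fixed lock-up period."""
    exited = []
    for _ in range(days):
        for _ in range(min(EXITS_PER_DAY, len(queue))):
            exited.append(queue.popleft())
    return exited

queue = deque(range(2000))       # 2000 validators waiting to exit
day1 = drain_exit_queue(queue, 1)
assert len(day1) == 900          # first 900 exit on day one
assert len(queue) == 1100        # the rest wait their turn
```

So with a short queue you can exit almost immediately, while a mass exit event stretches the wait out proportionally.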


Is the staking reward locked together with the stake as described in 5.?


So, ~$60,000 to participate. That's a bit more than a $200 video card. I guess it's time to price out the poor.


Numerous pools exist where you can combine your eth with others', so you can stake 0.1 eth if you'd like.

Also, this is only a 5% return, which could end up net negative if crypto crashes relative to USD.


That's exactly the problem. As someone who does not have access to 32 Eth, you need to trust a pool operator with your money.

This is a direct parallel of issues like home and car ownership. If you're not rich enough, you have to trust a 3rd party, and also get less for your money.


Vote with your money. There are plenty of other blockchains where this is not the case as delegation is part of the protocol.


You don't have to trust a third party. You can run a pool yourself and be the one who needs to be trusted.


If you don't have enough eth to stake, how will you persuade anyone to lend you their eth?


Have friends? People who trust you?


Obviously I need to have friends who already want to invest a significant amount into Eth, otherwise I'm just "that guy" shilling some scheme to their family and friends.


Of course. So what's the problem with that?


Under what possible scenario do I have a few k$ spare, but not 10k$ spare, to invest in a speculative asset, and also have a large number of friends and family already in exactly the same position?

The only one I can see would make me the "cryptobro", to use the currently-popular phrase.

It's also quite ironic that you suggest the basis of Eth consensus be based on the strength of inter-personal trust. If that's how it works, then why bother with the complexities of Eth?


I'm not saying the basis of ETH consensus is based on inter-personal trust. The conversation started by a complaint of 32 ETH required to run a validator being out of reach of normal people, so I provided multiple other options for how a normal person can take part in this.


You gave one suggestion, which was based on inter-personal trust. This removes a major benefit of taking part in the Ethereum ecosystem. If it became a widespread way for people to take part in Ethereum's staking mechanism then yes, Ethereum's consensus would effectively be based on interpersonal trust (which is something that the developers have gone to great pains to avoid).


I gave two suggestions - join a pool, and run a pool.


Both of which (essentially the same thing - "be in a pool") require trust, which destroys a key advantage of Ethereum (and cryptocurrencies in general).


Another stage of the grift where all the scammers get double the coins and cash out even more. The miners aren't going to stop mining. There will be another "classic" fork situation. Maybe we're near the end after all?



