Segwit2x Bugs Explained (bitcointechtalk.com)
194 points by jwildeboer on Nov 25, 2017 | 90 comments



It's impossible to explain the Segwit2x bugs while ignoring the social aspect.

The planned fork was supported by only part of the community, which would not have been a problem if it weren't for the lack of replay protection, which would have cost many people money had the fork been activated. The rationale for not having replay protection was to take over the Bitcoin network as a whole, with the lack of differentiation from the 'legacy' Bitcoin network being seen as a feature. In other words, rich people with a lot of mining power were planning on forcing the network into accepting their own rules (with most people in the network, running SPV wallets, doing so unknowingly), and if people lost money in the process, well, that's just too bad.

And that's just what touched me the most personally. There were a lot of other things in the Segwit2x process to be appalled by, such as the anti-developer culture, the infamous closed NYA meeting, the general lack of transparency, and more.

Because I was disgusted with that bully approach, I did not invest any serious time reviewing the Segwit2x code changes, but instead spent time studying, reviewing and contributing (my very small bit) to the Bitcoin Core code. I guess that other developers similarly had no drive to contribute to the Segwit2x codebase. I also think it's probable that some people found the bug(s) and did not report them, so as not to help the bullies get their way.

In conclusion, the lack of review and testing, leading to the bugs, is not just a technical issue. Doing serious review and testing of open source code relies on support from the community, which was minimal, because very few competent developers in the space who understood what was going on wanted to help.


Segwit2x was only "anti-developer" if you consider just the "bitcoin core" developers. There were many developers quite interested in a larger block size for many years, since before the February 2016 "Bitcoin Roundtable" in Hong Kong. Segwit2x had over 90% of the hashpower of the bitcoin world voting for it at one point, a couple of months before it was to activate. Hashpower is heavily tilted towards large mining pools, many of them in China, but it is still the best measure of consensus and support the network has. That's a major tenet of Bitcoin in the first place. (Though I wouldn't say "only miners matter" nor "only users matter"; the network really needs both to work.)

I think that the survival and modest success of Bitcoin Cash eroded the initial overwhelming support for Segwit2X. Though if you mostly read /r/bitcoin I'd say you have a very warped view of what happened ;)


> There were many developers quite interested in larger block size for many years

Yet, they weren't around when it came time to make it happen.

> Segwit2x had over 90% of the hashpower of the bitcoin world voting for it

Yet, it had a deeply flawed software development process.

> it is still the best measure of consensus and support the network has

Yet the miners stated that they would only mine the fork for 12 hours. Do you know why? Because they would have been mining at a (relative) loss. That really undermines this measure of consensus. Maybe the best measure is the market price of the coin.

> I think that the survival and modest success of Bitcoin Cash eroded the initial overwhelming support for Segwit2X.

Basically, Bitcoin Cash is what Segwit2x should've been: an honest, replay-protected coin for people who think a money's value comes from its base layer payment system's transactions per second.


> Yet, they weren't around when it came time to make it happen.

Bitcoin XT had a mechanism for larger blocks in mid 2015. There were multiple BIPs and implementations for larger blocks besides that in 2016. It didn't matter much because of the stickiness of the original/default bitcoin implementation, which is controlled by developers employed by Blockstream, a company that always intended to profit off of "sidechains" enabled by segwit and necessitated by small block sizes. (For anyone less familiar, keep in mind the first versions of the original implementation were written by Satoshi and then maintained by Gavin Andresen, one of the developers now on the "outside".)

Segwit2X was a compromise, arrived at through much effort at the Hong Kong meeting in early 2016 and again at the New York meeting in early 2017. It does appear that any compromise was futile and hopeless.


What developers were included in the NYA? Wasn't it just a bunch of suits? Like, only businesses and miners?


> Hashpower is heavily tilted towards large mining pools, many of them in China, but it is still the best measure of consensus and support the network has.

I'm not convinced by this argument at all. Consensus in the network is derived by the traders who give Bitcoin its value, not the miners. If anything, the miners are on the opposite end of the equation as they have to sell their mined coin to pay for their mining expenses (since silicon and electricity are typically purchased in fiat currency).

Look at the exchanges for consensus, not the miners. The miners are effectively slaves to the system. I don't understand why it's so common to give them so much weight.


First, miners also invest in the future of the coin when they invest in building up a mining operation for it.

Second, if most miners stop mining a coin, it can kill it. At least the next few blocks will take a very long time to be confirmed. Just as importantly, the security of the system is reduced because a large miner could theoretically overwhelm the remaining hashpower and execute a "51% attack" for some short-term profit. It's unlikely, but the confidence in the coin is partly based on it being impossible.


> Second, if most miners stop mining a coin, it can kill it.

This won't happen though. Miners are out to make a profit, full stop. They mine whatever currency is most profitable, so hash power follows price. Bitcoin has over half of the total cryptocurrency market cap. So long as Bitcoin itself is highly valuable, the miners will naturally follow along as well. And as some miners temporarily defect (thus incurring massive hourly losses in opportunity costs from mining less valuable currencies), transaction fees on Bitcoin simply go up higher, making each block worth significantly more reward, thus tempting the defectors with ever-increasing profits to defect back to mining Bitcoin. There is no mining cartel.


> but it is still the best measure of consensus and support the network has

is it? another measure is futures trading. 2x futures traded at ~15% of the value of btc immediately before the cancellation.


That's a measure of what people expect, which is a very different kind of "consensus" from what people want.


Hashpower doesn't matter. Miners follow price which follows economic demand. Mining is at the very end of the chain of causality, not at the beginning or middle of it.

The failure of the Segwit2X fork exemplifies this perfectly. If hashpower truly were as important as you say it is, then Segwit2X would have gone ahead with >90% hashpower support (software bugs notwithstanding). Instead, it was called off for lack of consensus within the community, and the miners stuck around afterwards anyway. That 90% figure isn't nearly as important as you're making it out to be.


I would say it's a circle of causality with no beginning or end.


"Essentially, even one or two weak reviews in a chain of reviews can break the entire consensus system with a catastrophic bug. Hopefully, this can be an object lesson in making sure critical changes are reviewed very thoroughly. Stay safe and go thank the developers that do the hard work of not just coding, but reviewing."

Amen to that. Doing a proper review for any software is so hard, so not fun and often misunderstood and unappreciated (by management).

And then when shtf you also get the blame for your "weak review".


Also, please, keep your PRs small and fixed in scope. This particular change snuck in on a large PR, to which it was added only fairly late in the review process.


There is a very simple solution to that: large pull requests -> automatic fail.

On one project I worked on, some guy presented me with his masterpiece: a ridiculously large refactoring of the entire code base and a clean-up of old code that was no longer used. He was really pissed off when I categorically rejected the request and asked him to break it up into manageable pieces.

Quite sad, because he had obviously put a lot of effort into it, but he did not keep in mind that it's one thing to make a change like that and quite another, close to an impossibility, to actually review it.

When coding keep in mind the job of the people coming after you: review, QA, eventual long term maintainers and so on.


This! Large refactoring can and should be done, but the reviewer should be warned in advance and the goals of such a rewrite should be very clear. After the rewrite, the code should be treated as alpha and properly tested. And rewrites should be rare events.

Anything else should be done incrementally, in small manageable commits.


You just sound lazy and apathetic. If you actually cared you would have worked with him. You could have done an in-person review where he briefs you on the bigger changes and then you can spin off and review the dead code removal and benign refactor pieces yourself. So what if it takes you an hour or two or four; it probably took him a few weeks. Let me guess, the PR was about 3-5k lines?

You can take as much time as you need in order to give a thorough review and express your frustration so that it's clear next time he should strive for something more manageable, but categorically rejecting a large code change is just silly and juvenile.

This is why I hate agile. It's conditioning people to believe the only thing you can do to a codebase is incrementally add features. Sometimes the most meaningful or beneficial work comes in the form of broader systemic changes to a codebase.


> You just sound lazy and apathetic.

And it sounds like you are quick to rush to judgment.

You're essentially accusing me of doing exactly what you are doing. You could have instead asked more about the context within which this all happened (which was a pretty bad situation to begin with) and if I tried to work with the person submitting the pull request.

Instead, you make me out to be lazy (which is not something anybody who knows even a little bit about me would say) and apathetic (which also doesn't fit the bill; rather the contrary).

You can take as much time as you need to do a review if you have that time. The middle of a house fire is not the right moment to push through your pet project, and if I point-blank refuse a pull like that, it doesn't mean I had not thought it through.


I said you _sound_ lazy and apathetic... if there was more context then maybe you should include it in your argument in the first place (which is all I'm responding to, it's not personal). All I have to go off of is a post where you describe a coworker submitting his "masterpiece", you admitting it was clear a lot of hard work went into it, and you rejecting it because it's too big. That, alone, is rather annoying IMO. As someone who will both review and submit bigger PRs that try to move the needle in ways that aren't possible with a 200 line diff, I would be extremely frustrated with your stance of "too big can't merge" with no other context. We've all been there under pressure faced with the decision of whether we can responsibly merge something or not. I'm sure you made the right call for your situation. But you suggested _categorically_ rejecting large PRs. I'm not the only one that took issue with that and it's much different than the scenario you've now described (lots of problematic, hard to parse, code with a tight deadline).


How are you supposed to do a real refactor then?


When a refactor cannot be done in an iterative fashion — there are some changes that are just too fundamental to be done in small pieces — then one approach is to write a review guide, document what has changed and why, indicate to reviewers where the pain points are.

One argument against gradual refactors is that they are often left half done when enthusiasm runs out. I’ve seen MLOC-sized code bases with multiple layers of half-done refactors and migrations.


That might be an argument against gradual refactors, but it is not an argument for doing large refactors in one pass, as it does not solve the underlying problem of running out of enthusiasm (or time, for that matter).


Agree for sure. Especially management enthusiasm


You need an amazing test suite, for starters. If the test suite is good enough then even a large refactor isn't that scary.

This was part of the problem for the Segwit2X code: inadequate testing. The original Bitcoin Core codebase is up to a very high standard, and that high standard seems to be the first thing to go anytime someone goes to fork it. This is part of the reason why the developers of Bitcoin Core have so much influence to set future direction of the currency within the Bitcoin community at large; there aren't that many highly-skilled cryptocurrency developers out there. It's unsafe to launch a fork without them.


No one said it was easy...

I'm putting off a very sizable one at work (due to important functionality going in before FCS), and luckily, although it'll hit nearly every file, I can do them individually.

My concern, actually, is eyes glazing over on the n'th review that's vastly similar to the previous n-1.


Is it actually better to do it one file at a time? Every single change has to be reviewed anyway.


That is fine when it is possible, but many changes cannot be localized to one file.


You mean how do we do large refactorings? There is no such thing as a large refactor.

Refactoring is a bottom-up design tool. You remove accidental complexity and uncover higher order organization in your code, occasionally uncovering or allowing new functionality that was intractable before.

A “large refactor” is a top-down change. Major surgery. You’ve used (maybe misappropriated) the tools of refactoring for a different purpose which is antagonistic to the school of thought that enumerated these tools.

I say this as someone who when I was starting out spent a holiday weekend rewriting hundreds of lines of repetitive code (a bad idiom) to move an O(n²) performance issue into a single function. I removed 500 lines of code in the process. At that point, and for years after, my most productive week.

Enter the Mikado method. With Mikado you start to do your large refactoring. You change A, which forces you to also change B. You change B, but now C has to change too. So you change C and, you guessed it, now D doesn’t work. Man this is getting out of control. D leads to E, and E leads to F.

Now you’re looking at making change F. You are worried, or rather, you should be. How are you going to verify all these changes are complete? How do you know there isn’t a G, H and I waiting for you after F?

Now the hardest thing for any developer to do (besides admitting they gave bad advice) needs to happen. You need to throw away code you just spent hours of time sweating over. Revert to master, and start over.

Just implement change F, by itself. Sure enough, F leads to G and H, but those three changes are still reasonably sized for a code review. Send it off and start working backward up the list.

The reason we do bottom up is that we uncover the difference between our intuition about the code and the situation on the ground. When you go to do E or D you might find you didn’t need to make the change quite as broadly as you thought. And you might be able to skip A and B entirely.

When I do Mikado, I find that only about 60-70% of the code I originally wrote survives the process. The changes are more contained, which means they impact fewer of your coworkers, less often. They are easier to reason about, even self-explanatory. Of course you would do it this way; it just makes sense.

The interesting thing to me about Mikado is that some people independently discover the trick themselves. In fact these large refactors I had to do, I complained, reminded me of playing pick-up sticks. I can’t move this because of that. Can’t move that because of the other thing. I got overwhelmed by the size of a couple of these changes, and threw them out. Then started over either immediately or at a later date.


This is also why I'm super into rewriting local branch history - when you get into these situations, you reorder the changes into dependency order (F -> E -> D -> ...) and then it becomes trivial to create a PR for a suitably small prefix of the dependency path.
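
Roughly, that workflow might look something like the following with stock git (the branch and commit names here are just placeholders):

    # keep the exploratory work around, just in case
    git branch wip-big-refactor

    # reorder and squash the commits into dependency order on top of master
    git rebase -i master

    # publish only the first, self-contained piece as its own branch/PR
    git checkout -b change-f master
    git cherry-pick <sha-of-change-F>

Interactive rebase lets you move the F/G/H commits to the front; the cherry-pick then lifts just that small prefix onto a clean branch for review.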


That helps, but I really believe that it’s an important part of the Mikado Method that some of your changes go away during the process. It took a long time for me to believe that some changes are just distractions and don’t move the code forward in all of the ways that are important.

Because part of refactoring is realizing that certain problems can be fixed at any time you deem it necessary to do so, that means you don’t have to fix it just because you can. A lot of my code ends up prepped for a refactor I never do. Someone else does it for me, when they have a requirement that makes the change make sense. It’s my 90-10 compromise for the tension between YAGNI and broken windows. I don’t make the change, I just make the change easy.


Smaller changes tend to be safer and should break less. The less you change at one time, the better: fewer bugs, and the ones that do slip in are easier to spot and fix.

But if safety does not matter, or it's pure crap anyway, then refactor away.

Proper review of core components that are used and proven is hard already; refactoring such components - don't do it if avoidable - needs a very good reason. And convenience ain't one - you'll probably get some duplication of code and double maintenance during the transition period.

Making small PRs is just a way to be safe and smart; one branch with individual, gradual commits that can be used separately is also a way (if they can be reviewed separately).


In small, easy to understand steps. It helps to do moves and renames individually because they are mechanical in nature and easy to review.


I'm not sure why so many people consider "refactoring" to be a synonym for variable renaming. Refactoring is a non-functional change which makes the code base more manageable [1] . Renaming is a kind of refactoring, but not a terribly important type of refactoring.

[1] https://en.wikipedia.org/wiki/Code_refactoring


That is true, but it does not follow that it is possible to have large refactorings that cannot be accomplished in a series of smaller, independently-verifiable steps, which is what your original question implies.

Maybe that would occur if you had to replace a central algorithm with a significantly different one (because the original one did not scale, for example). That is more of a rewrite than a refactor, and should be done as a separate product, developed incrementally, of course.


My personal methodology is to alternate between iterative development and large refactors. I do very little testing during the large refactor, and test as thoroughly as possible afterwards.


Why would incremental development be unsuitable for refactoring, specifically? It is pretty much universally accepted that incremental methods are the best way to tackle software development in general.

If 'refactoring' did only refer to simple, mechanical changes like renaming, I could see it, but you have just argued against that interpretation.


There is a fallacy which says "if I understand the steps which got me here, then I understand where I am", or "if I understand each change that has been made to a system, then I understand the system." This is simply not true. A myopic understanding of each change does not equal an understanding of the system. Human memory is simply not good enough to remember all of the previous bits of code and comprehend whether a change won't happen to interact in a bad way.

With lots of testing, yes, you can incrementally build a working system that you don't understand. But by myopically staring at diffs all day, you'll only get farther from the truth of how your system works.

The major refactors that I do, are because I've come to the realization that my data model was not correct. A realization, that often happens multiple times during a project. My utility functions are still useful, but I have to rewire everything. There is no meaningful way to do that incrementally. Why would you even try to do that incrementally? It's like, if you have a salamander and you want to turn it into a horse incrementally. Of course, most of the RNA is the same. But testing and code reviewing the steps between salamander and horse seems to me like a waste of time.


Why does that not apply to development in general? Or perhaps you find it does?

While "if I understand the steps which got me here, than I understand where I am" may not be guaranteed (especially if you don't know where you started from), it is almost always among the more effective ways to proceed (though it is not particularly helpful if you don't know where you are going.) It can help avoid getting lost, which is one of the reasons for doing development incrementally.

There is no way in which I can see that turning a salamander into a horse has anything to do with refactoring.


In dynamic languages such as JavaScript or Python, renaming is definitely a kind of refactoring, as it can easily break things (I've seen it happen).

In static languages such as Java, renaming a variable is something your IDE can do for you with all the benefits of static analysis. There's almost no risk associated, and reviews can be a rubber-stamp.
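
A contrived Python sketch of the dynamic-language failure mode (the class and field names are made up for illustration):

    class Order:
        def __init__(self, total):
            self.total = total  # field renamed from 'amount'

    def report(order, field_name="amount"):
        # string-based attribute access silently misses the rename
        return getattr(order, field_name)

    report(Order(100))  # AttributeError: 'Order' object has no attribute 'amount'

No rename tool can reliably chase the string "amount" through the codebase, which is why even "mechanical" renames deserve real review in dynamic languages.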


Maybe we need something better than Git, where a refactor of one variable doesn't crap all over the entire repository but is instead listed as a single "change".


You don't. Your software doesn't ever need big changes because they're basically impossible to review. All changes must be small and trivially reviewable.

/s


If I disregard the "/s", I actually agree, with the caveat that large refactoring is still OK if you treat the refactored code as a totally new project - not battle tested, that is. So you absolutely can do large refactorings, but the resulting code should be thoroughly tested before deployment.


For a change of this magnitude you would expect some tests that actually verify these assumptions, no? Or am I missing something in this PR? [1]

[1] https://github.com/btc1/bitcoin/pull/11/files


The 2x fork never really had any legitimacy. The New York agreement was supposedly signed by only a small number of people, which is not the way Bitcoin is governed. The resistance from the community and subsequent fallout shows the resilience of Bitcoin itself, and how well it resists manipulation and FUD.


Well the biggest snag in the plan was that the hardcore big blockers had already moved over to Bitcoin Cash, so we lost our biggest support group early on.

Now, after Segwit2X has failed and shown that it isn't likely to succeed any time in the next X years, we Segwit2X moderates have joined the Bitcoin Cash people (and some have moved to Ethereum).

That community is much stronger and we are able to survive because it was a hardfork.


Why not go to LTC? (Seems) like a friendlier community.


Litecoin?

Litecoin has zero community of note. It is mostly just speculators and people who suck up to the Bitcoin Core team.

There frankly isn't even a litecoin vision. Like, what do litecoin supporters even believe? What even IS a litecoin supporter?

I go to a bunch of in-person cryptocurrency meetups and talk to people. And I can't think of a single time someone has said "oh, I am really excited about LTC" or similar.

Nobody is going out to coffee shops and saying "hey, do you accept litecoin?". Nobody is hosting litecoin meetups or tech talks or anything at all.

My biggest problem with litecoin, though, is that it is an unthreatening, PC coin that has no intention of shaking the status quo, pushing the needle, or doing anything groundbreaking that might upset the powers that be in the crypto space.

If you aren't making anyone mad, then you probably aren't doing anything interesting or important. And I can tell you that nobody is 'mad' at litecoin.


All it took was waiting over an hour with a large fee to make me realize there’s a huge problem in the bitcoin community.

The block size is fundamentally too small to support the current volume of transactions in a meaningful way, and the weak (and suspicious!) arguments in favor of letting bitcoin’s transaction time spiral into hell don’t hold water.


They should start using valgrind! I just launched a few of the binaries and got a lot of output like:

"Use of uninitialised value of size 8"

"Conditional jump or move depends on uninitialised value(s)"

"Syscall param writev(vector[...]) points to uninitialised byte(s)"

It also doesn't free some memory. Valgrind is not a silver bullet, but it helps a lot. Bitcoin's market capitalization is around $140 billion, and these kinds of bugs should not be there.
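
For anyone who wants to reproduce it, a typical run looks something like this (the binary name and its flag are illustrative; --leak-check and --track-origins are standard valgrind options):

    valgrind --leak-check=full --track-origins=yes ./bitcoind -regtest

--track-origins=yes makes the "uninitialised value" reports point back to where the memory was allocated, which makes it much easier to tell real bugs from noise.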


Bitcoin developers do use valgrind. I don't know about B2X, but the Bitcoin Core guys do.


That kind of thing is common when you use openssl < 1.1.0 without building it with -DPURIFY, because it uses some uninitialized memory in its PRNG, and this then "infects" other memory reads/writes and branches as far as valgrind is concerned.

https://github.com/openssl/openssl/commit/d8ca44ba4158a9dafe...


This isn't bitcoin you're looking at, this is an attack on bitcoin.


Why wasn't this caught in testnet? Did they not do any tests?


Their testnet didn't fork properly either, but for some other reason I believe. It was thought to be because they had more mining power than anticipated in their testnet, which was discussed on the mailing list as an attack on testing.

One reason why testing wasn't thorough enough was that the project was underspecified. There was no public discussion on how this was supposed to work. The specification was whatever the maintainer decided to merge, which changed several times during the software's lifetime.

Most further discussion or counter-proposals, including perhaps the most controversial one, replay protection, were met with "that wasn't what the signees agreed to" and the position that anything going beyond it was off the table. But what the signees agreed to was always very unclear, as the agreement only described that a "fork" was to be activated and that something in this fork was supposed to be 2 MB. What was supposed to be 2 MB was not specified, let alone how deployment was supposed to work, nor the protocol for signalling, fork activation and block weight. And the latter things are probably what reviewers would have concerned themselves with had this ever been a real proposal.


What "2MB was supposed to be" was absolutely obvious to everyone who actually signed the document.

It was supposed to be twice as much as segwit's, done via a base blocksize hardfork (thus the name Segwit2X).

The only people who were "confused" about what it was were the small-block trolls who didn't even sign or support the document in the first place; therefore their opinion doesn't matter. The only opinions that mattered were those of the people who actually supported the NYA and Segwit2X.

Some other details, such as replay protection, were obvious sabotage efforts that were also proposed by the small-blocker trolls. None of the NYA people were arguing for that, and once again, theirs were the only opinions that mattered.

The other details, such as deployment date, signaling method, etc., were not specified, though, and I agree that this came back to bite everyone in the ass, as proven by the fact that it failed.


No, far from it, as evidenced by the opinions of the undersigners on where to take the project from the start. And while labelling dissenting opinions as trolling may make for convenient discourse, it does not help with understanding the underlying engineering.

Note that the most common reason among the former undersigners for withdrawing support was the lack of replay protection (which only underscores the lack of consensus around some fundamental design decisions among the signers).

To anyone familiar with post-segregated-witness Bitcoin, an increase in base block size could be done in a number of ways. The weights for data structures could be changed, but so could the weight limit. What would be reasonable, if backwards compatibility is no concern, is to use the same multiplier for all types of signatures, thereby removing the so-called segwit discount, but this turned out not to be the design chosen.

Like most people interested in Bitcoin, I only followed the NYA at a distance and have no more information than anyone else in this space, but I do know some people working with a company that took part in the agreement. I do not think there was some secret cabal behind the agreement, which some people seem to believe, but I also think that there were conflicting goals involved.


> Only one person seems to have approved the changes (opetruzel)

... and that's opetruzel's (almost) only contribution to anything on GitHub in his/her two-year history.

Isn't that odd?


Wow, good catch. There are plenty of malicious attackers in the cryptocurrency space, so you should definitely take reviews from not-yet-established contributors with a huge grain of salt. A change of this magnitude should have required review from several well-known contributors.

We use a better pattern at work. On our project, we require approval from members within the team for a commit to go in. If there's one person with the most experience in any given area, or even just a strong opinion, then their approval is required as well. There's a big anti-pattern that I catch myself thinking about occasionally: it's tempting to send out a commit for review by more junior/inexperienced team members, bypassing the more experienced reviewer who you know will have lots of input you might not necessarily want. That's what Jeff Garzik did here; by the end of it he didn't actually want a review, just a rubber stamp, and so any stamp would do. Thus the entire point of doing reviews was bypassed. It's particularly egregious that he never even bothered to write tests as requested by another reviewer.


From the article (and my main reason for posting):

“Reviewing and testing consensus changes is really, really hard. [...] Essentially, even one or two weak reviews in a chain of reviews can break the entire consensus system with a catastrophic bug.”


Amusing. My post makes it to the top, but my comment explaining why I posted gets downvoted. :)


Not surprising! You didn't elaborate on it, you just quoted it. People already read the article because you posted it. Rereading your favorite sentence isn't particularly helpful at that point.


Code review is hard to do right.

Use multiple reviewers for critical code (treat it like proofreading an email you are about to send to millions of customers).

Write automated tests if you can, including for regressions and edge cases, but also to assert your code works the way you’re saying it will.

Finally, check the code out locally if you’re a reviewer and verify the changes yourself, don’t just read the diffs.
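
On GitHub-hosted projects, for example, pulling the change down for a hands-on look is a one-liner (the PR number here is a placeholder):

    git fetch origin pull/123/head:review-pr-123
    git checkout review-pr-123

From there you can build it, run the test suite, and poke at the behaviour instead of trusting the diff.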


If you want to hear this article read out loud, this podcast has this article along with some commentary:

https://soundcloud.com/noded-bitcoin-podcast/noded-020


Fascinating. I am not super familiar with the verification of consensus systems like this one, but I have done some work testing the resilience and consistency guarantees of distributed data stores. Would something like Jepsen [1] be useful here? Could it be adapted to verify contracts like those of BTC, with modifications similar to those needed to adapt it to, say, a new distributed database?

[1] https://github.com/jepsen-io/jepsen


And this will be cited for years as a case in point for why financial institutions should be skeptical of treating cryptocurrencies as a true piece of payment or currency infrastructure.

At the current level of use, this issue is a side note. At the level of use envisioned by crypto promoters, it would be a catastrophic, economy-crippling disaster with impacts far beyond a single institution. As infrastructure, this sort of thing is must-work, cannot-fail technology on the order of nuclear reactors.

I'm not saying cryptocurrencies aren't in our future at that scale. I don't know if they are or not. I'm saying we have a much, much longer way to go, on the order of a decade, before they have time to mature and then prove that maturity. Until then, they're a speculator side show. Heck, with so much mining capacity in China, they exist in significant part on the sufferance of the Chinese government not feeling too threatened by the level of social disruption they might cause. If that threshold is crossed, significant mining capacity, and the transaction speed that goes with it, is a few great-firewall configs away from oblivion. Yeah, mining difficulty will adjust to alleviate the pressure, but only after disaster-level, life-savings-wipeout levels of financial fallout ensue.


This is a bug in a fork of bitcoin.

It shows how bad things can be if you don't use development practices that are appropriate for the project, but it doesn't show anything at all about how actual bitcoin code is developed. This is about a fork (that never ended up happening).


actual bitcoin is based on network consensus. And Segwit2x had 80-95% consensus.


No it didn't. It had 80-95% consensus among miners, maybe, but that's not the same.


There's a little room for difference, but if something stays at 95% of miner support then everyone else is almost certainly going to be dragged along for the ride.


I think the opposite is true: if 95% of the community wants to use something, miners will be dragged along as mining anything else would be unprofitable.


There is no way to determine what 95% of the community want for the future of Bitcoin. There is no community voting in Bitcoin. Only miner signalling. People can stop using Bitcoin after a change but there is no way to get their opinion in advance.


> There is no community voting in Bitcoin. Only miner signalling.

no, there is the open market. 2x futures traded well below btc's value.


there seems to be a lot of room for difference, since the segwit2x cancellation notice explicitly states that consensus was weak (certainly not 95%):

from https://lists.linuxfoundation.org/pipermail/bitcoin-segwit2x... :

"Unfortunately, it is clear that we have not built sufficient consensus for a clean blocksize upgrade at this time."


That comment is more about a theoretical scenario that actually has solid consensus.

The real problem with segwit2x is that the consensus was dropping and was trending toward a very weak level. It had strong consensus for long enough to do a switchover, as far as I understand, but it didn't take the opportunity and then people eventually started to change their minds.


Not at all. Miners choose to mine the cryptocurrency that is most popular; they don't choose which cryptocurrency will be the most popular. They are at the end of the chain of causation. Hashpower follows price.

If 95% of miners wanted something, but nobody else did, then every single block they mined would be ignored as invalid by the rest of the users in the ecosystem. They'd be foregoing huge profits by the minute, and inevitably would start defecting in ever-increasing numbers.

The only way a hard-fork ever has a prayer of taking over the main chain is if it gets implemented in the most popular software programs used on the network (e.g. Bitcoin Core). This is what killed Segwit2X, despite seemingly having massive miner support. Note that the miners immediately capitulated; such easy capitulation does not lend much weight to the importance of the 90% figure. They're full of hot air.


These bugs can be tested. You launch a testnet, and test them.

There was no reason why one wouldn't run a testnet and see how the fork would go after block XYZ.

I'd rather have a global ledger with consensus than allow a few companies to stop all banking on weekends so that all banks can arrive at a consensus.


This repo is the financial institutions and "CEOs" pushing for an arbitrary change over the objections of the engineers, and in many cases their very own CTOs, and the travesty of how terribly poorly it was written and how little it was tested shows that bitcoin should never again entertain this kind of nonsense.

This was never about scalability-- the actual on-chain capacity more than quadrupled in August, and Lightning adds a nearly infinite increase in theoretical capacity.

This was always about the self-important CEOs wanting to take over the project and bend the engineers to their will.


There's a lot of money being put into startups for bitcoin based businesses. Perhaps this may incentivise some of the larger ones to hire a few staff to do reviews. It would be embarrassing to have their investment go up in a puff of smoke.


When you see one cockroach, there are probably more.


"The pull request has 221 comments, most of which are arguing over the definition of 2MB blocks. "

For me, as a young adult, the most important part of growing up has been learning to not engage in bike shedding. It is hard not to bike shed. When design questions come up, and I see one thing as being WRONG, it is really hard not to become myopic about it. But this is a great example of how bike shedding can be harmful.


It obviously depends on who you ask - this is an incredibly politicized issue - but core developers would argue that blocksize is not bike shedding, but rather an important defense against centralization and censorship. I think the stat I heard is that at 1mb, and with some interesting distribution topology, the current speed of block propagation across the majority of the network is 250ms. So this is obviously where things get tricky: one person’s bike shedding is another’s critical requirement.


And where did this 250ms come from? Seems arbitrary.


I believe it came from this talk:

https://www.youtube.com/watch?v=nSRoEeqYtJA

And I think this is what Greg's referring to:

http://bitcoinfibre.org/


So all scaling progress is stopped because of a youtube video? Yikes.

There should be a law of bike shedding, where you don't realize you're bike shedding because omg it's the most important thing in the world.


Remember that at this time the block size had already been increased, up to a maximum of about 3.7mb physical blocks. This is a large part of why there's discussion about what 2X means-- people commonly think that the bitcoin block size is only 1mb, but it's a 1mb "weight".

The 2x project would have doubled this to a 2mb weight, with a theoretical max physical block of about 7.4mb.
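
For reference, the segwit accounting (BIP141) behind those numbers is a limit of 4,000,000 weight units, where non-witness bytes count four times as much as witness bytes; a rough sketch of the arithmetic (sizes in bytes, example numbers are illustrative):

    MAX_WEIGHT = 4_000_000  # BIP141 consensus limit ("1mb weight" in the loose sense above)

    def block_weight(base_size, total_size):
        # base (non-witness) bytes effectively count 4x, witness bytes count 1x
        return base_size * 3 + total_size

    print(block_weight(1_000_000, 1_000_000))  # 4,000,000: a legacy-style 1mb block sits at the limit
    print(block_weight(100_000, 3_700_000))    # 4,000,000: a witness-heavy block can reach ~3.7mb on disk

Doubling the weight limit, as 2x proposed, doubles that physical ceiling as well, which is where the ~7.4mb figure comes from.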


While your observation about bike shedding is a timeless one, what this tells us is how important it is to share a common goal. If you disagree what it is you want to accomplish, the discussions around the definitions and terms with which the project leader described the goal will be endless. Those discussions are not bike shedding, those are the basics of getting everyone involved on to the same page.


There is a test in the BTC1 repository to play with different blockchain scenarios here: https://github.com/btc1/bitcoin/blob/segwit2x/qa/rpc-tests/b...


The way I always understood bikeshedding is that people who have fallen behind in their ability to meaningfully contribute to a project try to stay relevant by picking a controversial topic around which they can build a ground to stand on. The alternative is to fall into irrelevancy and lose status.


>...bike shedding

For those unaware, like me, this refers to the Law of Triviality.

https://en.wikipedia.org/wiki/Law_of_triviality


Time for Hashgraph: https://hashgraph.com/



