I know in this situation the dev made a simple mistake, but I wonder if Ethereum is safe as a system, assuming developers will make many mistakes, and that those mistakes will happen more often as more complicated contracts are written.
At some point I wonder if the idea of smart contracts is better than a traditional system, since they seem prone to developer error due to complexity.
> I know in this situation the dev made a simple mistake, but I wonder if Ethereum is safe as a system, assuming developers will make many mistakes, and that those mistakes will happen more often as more complicated contracts are written.
This will probably come off as assholish, but I honestly don't mean it to be: you should read the rest of the article. It actually covers this topic in depth and has good thoughts on this fundamental issue with smart contracts.
- Humans make mistakes, the tools should have been better
- The technology to make Ethereum safe doesn't really exist
- We should work to make that technology exist
- We should be happy this happened since it raises awareness about the problem
> In the end, attacks like this are good for the community. They call you to your senses and force you to keep your eyes open.
What?!
Listen, just because Valgrind exists doesn't mean you can't accidentally write a C program with memory errors. Just because Rust protects against data races doesn't mean you can't have bugs in your threading code. We're not even close to a world where tools can offer amazing protection.
> The developer here was Gavin Wood, one of the co-creators of Ethereum, and the inventor of Solidity, the smart contract programming language. The code was also reviewed by other Parity contributors. This is basically the highest standard of programming that exists in the Ethereum ecosystem.
This was an error made by the best of the best, reviewed by the best of the best. The tools that will save us from this madness don't exist. What does this mean for Joe Developer? It means he's not going to build something that he keeps anything of value on. The author's cheery "we'll be stronger for this!" attitude completely neglects the obvious chilling effect that this is having. If the guy who invented the damn thing can't protect his $31 million, why would I even try to build something that could manage even hundreds of dollars?
And to that end, is it really better than a traditional system?
When I was a physics undergrad, I remember talking to engineering students taking statics. In general physics we were allowed to make simple numerical errors; the emphasis was on figuring out the solution and understanding the physics. My friends, on the other hand, complained that after acing gen. physics they would get mercilessly docked points for minor arithmetic and calculation errors in statics.
They'd complain to the professor, and he'd reply, "That mistake you made means your bridge will collapse! It doesn't matter if it's minor!" The point is that to become a certified engineer, you need a certain level of rigor because of the gravity of what you deal with, and when that rigor fails, engineers have been held legally liable.
The article said it itself, and I really couldn't believe how blatant it was:
>Most of the programmers who are getting into this space, myself included, come from a web development background, and the blockchain toolchain is designed to be familiar for web developers.[...]The problem is, blockchain programming is fundamentally different from web development.
My mouth was hanging open reading this: people with the discipline of web developers are the ones developing Ethereum? And this is the promised land that will one day be the future, and I'm supposed to believe it?
Sorry, for some aspects of life, brightness and having a great community are not enough, especially when you want to do something that could lead to loss of life or limb.
That means there is an entrepreneurial prospect for you to enter the space with a team that has a more serious background, right? Eventually people will generally understand that contracts require serious correct engineering, and you'll be able to capitalize on expertise.
Smart contracts have only been generally available for like a couple of years. Most people are clueless about how to do them properly. That means it's an interesting thing to work on! And hopefully we'll steer the ship towards safety, correctness, and simplicity.
>Eventually people will generally understand that contracts require serious correct engineering, and you'll be able to capitalize on expertise.
No one gets paid for exploiting a minor hole in a bridge and causing it to collapse. They will get paid for finding holes in smart contracts, as we've just seen.
Bug bounty programs do not exist to provide alternative means of financial remuneration for black hat hackers. They exist to provide white/grey hat hackers an opportunity to get paid for doing the right thing. And yet black hat hackers still sell zero-days to rogue governments and operatives.
Why do you think this bug bounty model of securing smart contracts will be any different? Capitalizing on expertise against smart contracts has its own reward with a shorter feedback cycle than selling zero-days: You directly profit from finding security breaches and stealing money.
I think we'll be able to just prove the correctness of our contracts and then use them safely. Of course you can't do that with Microsoft Windows because it's too complicated, but you can do it with small contract bytecodes.
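To make the "prove small contract bytecodes correct" idea concrete, here is a toy Python sketch (all names are hypothetical, and this is exhaustive checking over a bounded domain, not a real theorem prover over EVM semantics): a tiny stack-machine "bytecode" program is checked against a functional specification. The feasibility argument is the same one the comment makes, that contract bytecodes are small enough for this kind of analysis, where something like Windows is not.

```python
# Toy illustration (not a real verifier): exhaustively check that a tiny
# stack-machine program matches its spec over a bounded input domain.
# Real contract verification would use a theorem prover over the actual
# EVM semantics; the machine and opcodes here are invented for the sketch.

def run(program, x):
    """Interpret a tiny stack machine: ('push', n) | ('arg',) | ('add',) | ('mul',)."""
    stack = []
    for op in program:
        if op[0] == 'push':
            stack.append(op[1])
        elif op[0] == 'arg':
            stack.append(x)
        elif op[0] == 'add':
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op[0] == 'mul':
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack[-1]

# The "contract": computes 3*x + 1
program = [('arg',), ('push', 3), ('mul',), ('push', 1), ('add',)]

def spec(x):
    return 3 * x + 1

# Bounded exhaustive check -- feasible because the program is tiny.
assert all(run(program, x) == spec(x) for x in range(-1000, 1000))
print("program matches spec on the checked domain")
```

A real proof would cover all inputs, not a bounded range, but the scale argument carries over: the smaller and simpler the bytecode, the more tractable the verification.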
The kind of person who's risk-averse enough to pay extra for a better contract engineer is probably not the kind of person who's using smart contracts at all. And buying contracts is a lemon market, which is only really solvable via regulation - but the technology's biggest fans are regulation-averse people.
I don't understand this comparison. The underlying concepts involved in flying a plane are far better understood, tested, and regulated than writing software is. If the choice were between dying in a plane crash and dying due to a software bug/exploit, I'd choose the plane every time.
Traditional financial systems have established administrative bodies and practices, risk-monitoring systems, fraud insurance, and a "slowness" that actually helps in cases like these (e.g. you can very often undo transactions by freezing funds somewhere across the world and resolving the issue later).
IMO, cases like these will be crucial to the adoption of cryptocurrencies and smart contracts. And if there are no (administrative) safeguards against cases like these, it might well be the end for cryptocurrencies, simply because of a lack of general trust in them.
Hmm; my concern is that it results in a polarisation of the skill levels of the developers writing smart contracts.
There seems to be a lot of hype and demand around blockchain tech at the moment, and hence a lot of demand for developers to pick up the tech. For the skilled and experienced developers who work with blockchain tech these kinds of events will make them think more about what they're doing and probably take more care. For those of us with lots of programming experience but no knowledge of blockchain tech, they serve as a warning that there are serious issues with the platform that need deep experience (and hence time investment) to address properly.
That leaves a tranche of less experienced developers who maybe don't realise that these issues mean they should take more care (or use better tooling, or whatever other panacea is called for to make development 'safe'). The platform may finish up with a small number of experienced developers writing 'safe' code and a larger number of inexperienced developers writing 'unsafe' code, with the result that the platform as a whole finishes up being unsafe. As the article says:
> Most of the programmers who are getting into this space, myself included, come from a web development background, and the blockchain toolchain is designed to be familiar for web developers. [...] In a way, this may end up being its downfall.
> The problem is, blockchain programming is fundamentally different from web development.
> We're not even close to a world where tools can offer amazing protection.
Actually, we're reasonably close--the tools aren't quite there yet for mass consumption (many still feel quite researchy), but given that the trend of (research -> industry) usually takes 10-25 years, I'd expect more and more critical systems to be formally verified in 10 years. Even now, companies like Amazon are using some formal methods (modelling with TLA+) to validate that specifications will behave as expected. (https://cacm.acm.org/magazines/2015/4/184701-how-amazon-web-...)
Formal methods & dependent types allow for some very cool tricks on top of that--basically, you can encode in the type system a proof that the program implementation matches a specification. CompCert is a mostly-formally verified C compiler--since it was released, iirc, no bugs have been found in the verified portion of the compiler. (CompCert page: http://compcert.inria.fr/man/manual001.html)
You can also prove that the specification has particular properties (in a distributed system, things like liveness and partition tolerance). Consider the Verdi framework (allows formal verification of distributed protocols) and their formal verification of RAFT. (Code here: https://github.com/uwplse/verdi-raft)
The catch is that it currently takes a lot more work in terms of person-hours to do the development. But formal methods are getting used in more and more places, and they do make a difference in practice. (Wired has a not particularly deep, but straightforward, article that shows another use: https://www.wired.com/2016/09/computer-scientists-close-perf... )
Anecdotally, as far as effort--I'm an industry programmer who writes mostly C#. I had to learn Coq (a proof assistant) for a class--it took me a couple of simple assignments to get the idea of how it worked. Even after a few months, hacking together a formally verified interpreter for a very simple language (functions, while loops, etc.--simple, but not trivial) took about 3-4 times longer than it would have taken me to do normally.
I think the problem is that the language for smart contracts that backs Ether hasn't been written with formal verification in mind. You could apply formal methods to anything after the fact but it doesn't look like they've made this easy here. You could claim that a C program could be made bug free eventually by applying formal methods for example but it would require a huge amount of work.
Is there a good reason they didn't use a battle tested pure functional language with a strong and expressive type system?
I'd agree, and the article basically says the same thing--the language is too expressive, and therefore hard to analyze. I don't know that much about the Ether devs, but I'd expect it just didn't occur to them--even among PL folks, knowledge of and experience with something like Coq is somewhat rare unless you went to the right undergrad or grad school. Especially if you weren't keeping up with the latest research in the last ~5-10 years or so.
Something like Coq with a few primitives to represent interacting with the Ether network, and an optimizing, verified cross compiler, would have been a perfect fit for this sort of thing, in my opinion. It's a shame they didn't go that route; the extra dev time to get it right probably wouldn't have been equivalent to 200+ dev-years of cost (~$31 million).
> I don't know that much about the Ether devs, but I'd expect it just didn't occur to them--even among PL folks, knowledge & experience with something like coq is somewhat rare unless you went to the right undergrad or grad school. Especially if you weren't keeping up with the latest research in the last ~5-10 years or so.
Hmm, you'd think they'd know about languages like Haskell though.
I don't think it would be practical to expect developers to write verified code anyway; it's still far too challenging in general. Having a language where you could optionally formally verify the code would have been useful though.
> Something like Coq with a few primitives to represent interacting with the Ether network, and an optimizing, verified cross compiler would have been a perfect fit for this sort of thing, in my opinion
You wouldn't think something like Coq with a few primitives would make some contracts difficult to write?
Late response, but: Coq has the optional verification that you're proposing. :)
In my experience, Coq is not significantly harder than OCaml to write unverified code in. It's missing some nice shorthand syntax, but other than that it feels pretty similar.
With Coq, the main challenge is expressing things that are meant to "run forever" (any truly unbounded recursion)--but my understanding is that a contract would be forced to halt when it ran out of ether (so it's bounded anyway, and if you needed to, you could probably explicitly hand that fact to the compiler)--and I think you probably don't want something to loop indefinitely if each iteration costs ether.
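That bounding argument can be sketched in a few lines of Python (hypothetical names, not real EVM semantics): if every iteration charges against a finite gas budget, even a syntactically infinite loop is guaranteed to halt.

```python
# Sketch of the gas intuition (hypothetical names, not real EVM code):
# every step costs gas, so even an "infinite" loop is forced to halt.

class OutOfGas(Exception):
    pass

def metered_loop(gas_limit, cost_per_iteration=1):
    gas = gas_limit
    iterations = 0
    while True:  # an unbounded loop as written...
        if gas < cost_per_iteration:
            # ...but execution is bounded by the gas budget.
            raise OutOfGas(f"halted after {iterations} iterations")
        gas -= cost_per_iteration
        iterations += 1

try:
    metered_loop(gas_limit=10)
except OutOfGas as e:
    print(e)  # the loop cannot run forever
```

This is exactly the fact a termination checker could in principle be handed: every execution is bounded by the initial gas.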
The main issues with Coq that I think you'd run up against are things that a good optimizer/compiler could probably help with (e.g. the fact that integers are naively represented as cons cells, so addition is O(n) and multiplication is worse). However, I haven't written any smart contracts, so it's definitely possible I'm missing something obvious.
Honestly I read the comments first, I enjoy and learn from HN comments often more than the original article. Thanks for pointing out I was missing context.
>At some point I wonder if the idea of smart contracts is better than a traditional system, since they seem prone to developer error due to complexity.
The idea is incredible. Groundbreaking. Honestly super interesting to me.
The execution of them as law is... well, going about as expected. This is why the law doesn't actually work this way.
I think Solidity, and hence Ethereum as a smart contracts platform, really took off because Solidity is easy to get started with. More formal languages make it harder to shoot yourself in the foot, but ... devs don't use them.
The trouble with appealing to the middling devs, of course, is that you end up with middling code, and middling smart contract code is how to shoot yourself in the foot.
One of the key and powerful features of the Ethereum Virtual Machine, or EVM, is the ability to delegate execution to external libraries. You can think of this much the same way you think of installing third-party libraries in your favorite programming language.
In the EVM you can write a "library" which performs some common functionality such as manipulation of date-time objects much like the `datetime` library in Python. Any other contract may then make use of this code simply by delegating execution to the deployed library address. This has inherent risks that every Ethereum developer should understand fully, but with that risk comes some incredible power and potential.
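A toy Python model can make the delegation semantics (and the risk) concrete. This is not real EVM or Solidity code, and the names are invented for the sketch; the essential point it models is that with delegated execution, the library's code runs against the calling contract's storage.

```python
# Toy model of EVM-style delegated execution (hypothetical names, not real
# EVM code): the *library's* code runs against the *caller's* storage.

# A "library" deployed at some address: pure code, no storage of its own.
def library_increment(storage):
    storage["counter"] = storage.get("counter", 0) + 1

DEPLOYED = {"0xlib": library_increment}  # address -> code

class Contract:
    def __init__(self):
        self.storage = {}  # each contract keeps its own storage

    def delegatecall(self, address):
        # The delegated code mutates *this* contract's storage.
        # That's the power (shared, reusable code) and the risk
        # (the library can do anything to your state).
        DEPLOYED[address](self.storage)

c = Contract()
c.delegatecall("0xlib")
c.delegatecall("0xlib")
print(c.storage)  # {'counter': 2} -- state lives in the caller, not the library
```

The Parity bug was a failure mode of exactly this shape: trusted external code operating on a contract's own state.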
There currently is no "Standard Library" for the EVM, but it looks very likely that it will be composed of this type of contract. Slowly, over time, these library contracts will be written and deployed to the network. There is work being done on using theorem provers to mathematically prove that a contract satisfies certain properties, which opens the door to a "provably correct" standard library.
I know of no other computing environment or packaging system that has these properties. For me, it has been an enlightening subject and I feel like we've only just scratched the surface.
A serious question: are the benefits really worth it? How much overhead would it be to just include the library code directly into the contracts instead (so that the whole thing can be verified as a single black box)?
I think the answer to that question is extremely contextual and going to be very different for different applications.
I'd like to point out that the idea of including the library code within the contract is untenable at a certain level, as contracts currently have an upper limit on size, determined by the block gas limit. It also doesn't make things any safer: it's functionally the same as executing external code, because it's the same code being executed.
It's not really worth it. Very overcomplicated. Really, they just need to do code/data deduplication on chain somehow and forget about using contracts as libraries in this way.
I mean.... I guess. It's a feature of Ethereum, if we're going to weasel around.