
The Kelly criterion is to financial math as the Fibonacci sequence is to mathematics. Yes it's neat, no it's not special, please stop bringing it up all the time.



When I was a young guy who happened to be a partner in a hedge fund, I used to ask candidates about this.

I kinda regret it now. I'm not sure what I was trying to find out from people. I guess in some way it's a cultural question masquerading as a technical question, because it reveals whether you've heard of it and whether you have heard of the standard stuff that's said about it:

- Don't do full Kelly, because if you're on the wrong side (bets too big) that's definitely bad.

- Depends on the probabilities being static.

- You can find a continuous form of it. You can also find the implied leverage from the Sharpe ratio.

I wonder if my memory is even correct on these. But the point remains that it's not terribly useful as a thing to evaluate people with. I guess the question "How do you size your positions?" needs to start somewhere.
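
For what it's worth, the continuous form usually quoted is the Merton-style fraction f* = (mu - r) / sigma^2, which you can also read as the Sharpe ratio divided by volatility. A minimal sketch, with made-up parameter values:

  # Continuous-time Kelly / Merton fraction, assuming i.i.d. lognormal returns.
  # Parameter values below are purely illustrative.
  mu_excess = 0.08                       # expected annual return above the risk-free rate
  sigma = 0.20                           # annual volatility
  kelly_leverage = mu_excess / sigma**2  # 2.0x leverage
  sharpe = mu_excess / sigma             # 0.4
  assert abs(kelly_leverage - sharpe / sigma) < 1e-12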


> I guess the question "How do you size your positions?" needs to start somewhere.

Maybe the question itself is a good place to start. If someone asked me about Kelly, my mind would immediately drift from the formula itself to "statistical models and dynamic programming".

The real engineering problem is threefold: (1) how do I model my return generating process, (2) what is my utility function, and (3) do the first two steps yield a tractable Bellman equation?

If the problem is framed right, most people will have something to say about 1 and 2. The third part is tricky. A real world solution will involve finite element methods. But in an interview, people may be hesitant to bring up approaches that don't yield closed form solutions.

That's what's special about Kelly. Its assumptions for steps 1 and 2 produce a closed-form solution in step 3.
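
To make the closed form concrete: with i.i.d. binary bets and log utility, step 3 collapses to f* = p - q/b. A rough numerical sanity check (my own sketch, not part of the comment above):

  import numpy as np

  p, b = 0.55, 1.0                                       # win probability, net odds (1:1)
  q = 1 - p
  fs = np.linspace(0.0, 0.99, 10_000)                    # candidate bet fractions
  growth = p * np.log(1 + b * fs) + q * np.log(1 - fs)   # expected log-growth per bet
  print(fs[np.argmax(growth)])                           # ~0.10, found numerically
  print(p - q / b)                                       # 0.10, Kelly's closed form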


This so much. Like anything that is 'optimal' it is optimal with respect to some criterion. For the Kelly Criterion it is to maximize the logarithm of the weighted sum of the expected value across all outcomes.

This is probably not what you actually want in almost any situation.

The one time it actually makes sense to bring it in is if you are forced to make a certain number of wagers in some game AND you have good-enough knowledge of the odds of the game AND your payout is proportional to how much money you have left at the end of the game, AND the wager size does not affect the outcome of the game in any way. This scenario never happens.

Even when you meet many of the necessary prerequisites to use Kelly, it still doesn't make sense at all. For example, Blackjack tournaments. You are given a set amount of starting money, you play for a set number of hands (or time), and you know the odds perfectly. However, the payout structure isn't proportional to your final amount of money, so using Kelly has more or less no chance of winning any money in the tournament. They usually pay for the top N results, which means you have to go with a very high variance strategy AND win consistently to place.

Poker: Nope, not even close, bet size is a direct input into the dynamics of the game. Predictable betting of any kind is a maximally bad strategy.

Stock Market: utility of money isn't logarithmic, so it is not worth maximizing, even if you knew probabilities and were forced to make wagers, which you don't and aren't. If you could even approximate probabilities you could use that power to basically print money on derivatives, so even if Kelly applied, the prerequisites are too strong and would make far far better strategies available.

The only valid use case for bringing in the Kelly Criterion is for gamblers to feel better about burning money at the tables by improperly applying it.


The first sentence of your post, while technically true, misses the point. This misunderstanding undermines many of your other points.

The Kelly criterion happens to be optimal with respect to log wealth, but that's not the main reason why it's interesting. Many explanations, including the original post, make this mistake. Maybe because 'maximizing expected utility' is a more common idea.

The first sentence of the wikipedia article:

"In probability theory and intertemporal portfolio choice, the Kelly criterion (or Kelly strategy or Kelly bet), also known as the scientific gambling method, is a formula for bet sizing that leads almost surely (under the assumption of known expected returns) to higher wealth compared to any other strategy in the long run".

In other words, pick a strategy. I'll pick the Kelly strategy. There will be some point in time, after which I will have more money than you, and you will never overtake me. No logarithms involved. This is something you can easily check by simulation, but requires some heavier math to formulate precisely and prove.
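
A toy simulation of that claim (my own sketch, assuming repeated even-money bets with a 55% win probability): compare the Kelly fraction against some other fixed fraction and see how often Kelly ends up ahead.

  import numpy as np

  rng = np.random.default_rng(0)
  p, b = 0.55, 1.0
  kelly = p - (1 - p) / b                  # 0.10
  other = 0.25                             # an over-aggressive fixed fraction
  wins = rng.random((1_000, 5_000)) < p    # 1,000 paths of 5,000 bets each

  def log_wealth(f):
      # wealth multiplies by (1 + b*f) on a win, (1 - f) on a loss
      return np.log1p(np.where(wins, b * f, -f)).sum(axis=1)

  print((log_wealth(kelly) > log_wealth(other)).mean())  # -> essentially 1.0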

See also posts by spekcular and rssoconnor elsewhere in this thread.


If it's a mistake, it's a charitable one. Picking log utility is at least somewhat principled, trying to lead almost surely to higher wealth is way more arbitrary.


What exactly do you find arbitrary about this? When gambling or investing, more wealth is better than less wealth.

'Almost surely' is a technical term which means 'with probability 1'.


At no point in time does Kelly give you more wealth with probability 1, and there is no reason to care about "more wealth with high probability", that's not even a transitive comparison function.


> At no point in time does Kelly give you more wealth with probability 1,

Either you're claiming that the theorem I attempted to describe is false, or you're misunderstanding the theorem I am trying to describe.

I never wrote anything about 'high probability' either so I don't know why you introduced that notion.


> For the Kelly Criterion it is to maximize the logarithm of the weighted sum of the expected value across all outcomes.

It's actually to maximise the expected logarithm of the monetary amount, and it's a pretty good heuristic (in most circumstances) given that most opportunities are exponential. (It usually takes a given amount of effort to double your money.)
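
The distinction matters. A tiny example (mine, assuming a 50/50 even-money bet of 10% of wealth): the log of expected wealth says the bet is neutral, while the expected log, which Kelly maximizes, says it costs you growth.

  import math

  p, f = 0.5, 0.10
  e_log = p * math.log(1 + f) + (1 - p) * math.log(1 - f)  # ~ -0.0050
  log_e = math.log(p * (1 + f) + (1 - p) * (1 - f))        # exactly 0.0
  print(e_log, log_e)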


Can you or anyone expand on "utility of money isn't logarithmic"? It seems a whole bunch of economic theory is based on the wish ("because it has nice properties") that losing 10% of your money generally hurts the same amount no matter how much money you have. Even accounting for a non-elastic living wage on the bottom. I know there's a lot of political opinion here, but has there ever been an effort to empirically determine what the utility of money actually is?


You obviously wouldn't use Kelly criterion during a poker game because the assumptions don't fit. But on a larger scale it can be used for 'bankroll management' - what proportion of your wealth should you use on a tournament entry fee. Of course you don't have the exact parameter p but you can use an estimate to make sure you are not making a grossly over/under-sized bet.


No, you can't use it for bankroll management, because you can't estimate the probabilities necessary. A rule like "never bet more than 15% of your bankroll on one thing" would work just as well and it doesn't require you to do a bunch of math to get to the same answer.

Here, let's do some examples to see how dumb it is in practice:

Let's say I want to enter a tournament where I guess I have a 5% chance of winning, and I have $1000. Here is how much the KC says I should be willing to pay to enter, based on the payout odds:

  10:1  -> Don't enter (duh)
  15:1  -> Don't enter (duh again)
  19:1  (breakeven) -> $0.00  (duh)
  20:1 -> $2.50 
  30:1 -> $18.33
  100:1 -> $40.50
  1000:1 -> $49.05
  10000:1 -> $49.95
  10000000000000000000000000000000000:1  -> $50.00
Oh, so this fantastic system tells me to never bet more than 5% of my money if I have a 5% chance of winning. So insightful!

OK, let's say we have a 95% chance of winning the tournament:

  1:1 -> $900
  2:1 -> $925
  50:1  -> $949
  1000000000000000:1 -> $950
So if I'm a sure thing, I should bet a bunch of money. Again, there's no way someone would do this without maximizing log expected value and doing a bunch of math.

Maybe it gets more interesting if it's around 50/50:

  1:1 -> $0 (ok, makes sense)
  2:1 -> $250
  3:1 -> $333
  4:1 -> $375
  100000000000:1  -> $500
Again, Kelly gives us terrible advice. If you have a trillion to one payout on a coin flip, you want to bet less, not more! Why would you risk half your money, and have a 1/4 chance of losing all your money, when you can bet 1 cent at a time and just wait to win one time so you can buy half of the stock market with your winnings?

So I still contend that whatever it is that Kelly maximizes, it's a dumb thing to maximize outside of contrived situations where you are forced to bet and know exact odds, and where the expected value is positive (if you have negative expected value you should never play, and Kelly tells you that).

Finally, it is very sensitive to your probability estimates. Going back to my first example, where you have a 5% chance of winning a tournament, let's fix the payout at 30:1 and look at what Kelly tells us if the probability of winning isn't exactly what we thought it was:

  5% (same as first example): $18.33
  4%: $8.00
  3%: Don't enter
  6%: $28.67
  7%: $39.00
So I have to guess my probability of winning the tournament to within 1% of the actual probability, or Kelly is going to tell me drastically wrong amounts. Nobody can set odds on something like a tournament precisely enough for this to be useful. Just like with stocks, if you have the ability to estimate probabilities so well that Kelly stops telling you to do the wrong thing, you can make far more money just directly using your magical probability estimating powers and betting on derivatives. If I could estimate my odds of winning the tournament to within 1%, I could just go to the sports book, bet on who is going to win the tournament, and make far more money than I would in the tournament itself. It's like a system to sell a cake for 15% more profit, and it starts with "first, use your laser vision to preheat the cake pan".
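
For reference, the numbers above come from the standard Kelly fraction f = (bp - q)/b on a $1000 bankroll; a quick script to reproduce a few of them:

  def kelly_stake(p, b, bankroll=1000):
      """Kelly stake for win probability p and net payout odds b:1."""
      f = (b * p - (1 - p)) / b
      return max(0.0, f * bankroll)   # never bet when the edge is negative

  print(kelly_stake(0.05, 30))   # 18.33  (the 30:1 row above)
  print(kelly_stake(0.95, 2))    # 925.0
  print(kelly_stake(0.04, 30))   # 8.00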


> Again, Kelly gives us terrible advice. If you have a trillion to one payout on a coin flip, you want to bet less, not more! Why would you risk half your money, and have a 1/4 chance of losing all your money, when you can bet 1 cent at a time and just wait to win one time so you can buy half of the stock market with your winnings?

So you're just going to throw away the criterion because you think the results are unintuitive? That's the argument you're making here.

To take your reasoning seriously, the reason why you might not want to bet 1 cent at a time is because the Kelly bet is guaranteed to eventually overtake your 1-cent-bet-strategy. Furthermore, it is completely incorrect to say that the Kelly bet has a 1/4 chance of losing all your money in the given situation. If you lose your first bet, the Kelly criterion tells you not to bet the whole house on the next bet.

Nothing you have written so far suggests that you actually understand the sense in which the Kelly criterion is optimal, which I attempted to explain in my other reply to you. You keep writing as though it only maximizes the expectation of log-utility. In fact it's not clear that you even understand what the Kelly criterion is telling you to do.


Kelly is not optimal if you can't estimate the probabilities with great precision, which you can't outside of contrived examples or casinos. In casinos you have negative expected value on every bet, and Kelly tells you to not play at all. Contrived examples don't matter.

You aren't addressing what I said, you are cherry picking things you find easy to rebut. You are right that I messed up and that you can't lose all your money with two 50% bets. However you ignore the stronger argument, which I opened with and repeatedly pointed out, which is that you can't estimate probabilities well enough to use it, and that if you can estimate probabilities well enough to benefit from Kelly, that ability to estimate probabilities itself almost always unlocks strategies that strictly dominate any benefit Kelly gives you. It's useless in practice.


I don't really care about winning any global arguments. I see bad logic and I try to point it out. If you didn't like that some of your points were easily rebutted then you shouldn't have written them. Leaving them unchallenged makes your position seem artificially strong.

I couldn't resist bringing up the poker bankroll example because I think your in-game-poker example was poorly chosen. To me, it looked like you came up with a situation where the criterion obviously had no hope of being applicable and then used it to argue that the criterion is useless. E.g. I could find a whole list of things for which calculus is not applicable, but that would not be a good argument for 'calculus is useless'. The example I gave is at least closer to the assumptions of the Kelly Criterion.

I think the main thing I wanted to do was to correct the misconception that Kelly is only maximizing expected log utility, because it is a shame if someone (including other readers) thinks that the Kelly Criterion is just a fancy name we gave for the argmax of E f(S) where f happens to be the logarithm.

After all this, you (and other readers) might still conclude that the criterion is useless. But the set of justifications, and maybe the degree of certainty, behind that position should change.


You don't have a 1/4 chance of losing all your money if you repeatedly bet half of it on the trillion-to-one coin flip.


Hi there, you seem to know a good deal about the intricacies of where the Kelly Criterion lies within the range of possible options to calculate outcomes.

Could you point me to any good resource you know of to learn more about the range itself and other options?


Kelly must be the name of the wife in the WSB "wife's boyfriend" memes.

More seriously, great analysis. Couldn't have said it better myself.


I believe it's come into fashion again with the rise of legal sportsbooks in the US (DraftKings, etc.). Particularly with the fascination over long-shot "parlays", or chain bets, that can achieve very high payouts with very little outlay. Better odds than buying a lottery ticket by far, or so it is perceived.

I think you have to be a few jokers short of a full deck to get into sports gaming and expect any outcome other than ruin. But the parlays are interesting. And the simplicity with which one could devise a winning strategy is enticing.

Consider a typical NBA season: 120 game nights, about 8 games per night. Let's say you constructed a parlay strategy in which you pick the under to hit on every game played that night. If the payout is large, say $1000 for a $3 bet, that's $360 of risk for the season, and a high probability that eventually it'll cash ;)


If the odds of an under happening are 50%, then that is indeed a great bet! The odds of winning would be (0.5)^8, or 1 in 256. So the EV of the bet would be

E(x) = (1/256 * 1000) - (255/256 * 3)

E(x) = .91796875

So you are expected to make nearly 92 cents on every $3 bet. Over a 30% return! Any bookie offering these odds would quickly go broke.
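
A quick check of that arithmetic (assuming each leg is an independent 50/50, as above):

  p_parlay = 0.5 ** 8                             # 1/256: all eight unders hit
  payout, stake = 1000, 3
  ev = p_parlay * payout - (1 - p_parlay) * stake
  print(ev)                                       # 0.91796875, over 30% of the $3 stake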


In practice they'd simply flag you as a sharp and ban your account before you made them broke


The Fibonacci sequence is not special because it is just one element of the family of linear recurrences. Is there a larger family to which the Kelly criterion belongs? (Not being snide — I’m genuinely asking)


Yes, any utility function will give you a Kelly-like criterion. Kelly is log utility.


Kelly is special though, it's not just log utility. In fact, viewing it as maximizing log utility is ahistorical; the original derivation was in terms of information theory. (The paper's title is "A new interpretation of information rate" [0].) Further, and most importantly, Kelly betting has certain favorable asymptotic properties that betting strategies motivated by other utility functions don't have. See Breiman's "Optimal gambling systems for favorable games" [1].

This point is well known in the literature, for instance see [2]:

> Perhaps one reason is that maximizing E log S suggests that the investor has a logarithmic utility for money. However, the criticism of the choice of utility functions ignores the fact that maximizing E log S is a consequence of the goals represented by properties P1 and P2, and has nothing to do with utility theory.

[0] https://www.princeton.edu/~wbialek/rome/refs/kelly_56.pdf

[1] http://www-stat.wharton.upenn.edu/~steele/Resources/FTSResou...

[2] https://pubsonline.informs.org/doi/abs/10.1287/moor.5.2.161


Exactly. The Kelly strategy is the strategy that, with probability 1, eventually and permanently beats any other strategy. This isn't true for any other strategy and this criterion has nothing to do with utility.


Paul Samuelson wrote an article in 1979 on this. It contains only one-syllable words, except for the last word, which is "syllable". Title: "Why We Should Not Make Mean Log of Wealth Big though Years to Act Are Long."

His 1971 paper, "The 'Fallacy' of Maximizing the Geometric Mean in Long Sequences of Investing or Gambling" is more readable.


That’s wrong. The assumptions behind the Kelly criterion lead to the log utility only in a particular case.

What if each week you can bet $1 on a gamble that in 50% of cases triples your bet and in 50% of cases pays $0? According to log utility, whether you should bet depends on your current wealth. According to the assumptions behind the Kelly criterion, you should take the bet every week.


Technically the Kelly criterion suggests you should take this bet if your wealth is greater than $1.23, so it does depend on your current wealth.

However, this might still look dumb, and it's because my calculation assumes you could pick a smaller stake for the same bet, which isn't necessarily the case. If you can't, the calculation gets a little bit more complicated, but sure, the Kelly criterion can deal with that too.


According to the assumptions behind the Kelly criterion (maximizing the long-term growth rate) you should always take the bet. The Kelly criterion can't be applied here because we have a different setup (you can't choose your bet size); that's my entire point!


the entire field of martingale pricing


I can hear your sigh in answering this all the way from here :D


Bloom filters are the equivalent in computer science. Just beyond the basics, but ubiquitous enough to be annoying when it’s on the front page every other week.



