Hacker News

Do you have experience interviewing candidates?

Many candidates will answer this question correctly and yet be totally unable to do anything when they're confronted with a non-textbook case. To be clear, the brain teasers I ask are mathematical problems, not the type of brain teasers used in consulting interviews. For instance:

We play a game where we each draw a secret random number uniformly between 0 and 1. We each may re-throw if dissatisfied with our first throw, or we may keep it. We do not know whether or not the other has chosen to re-throw. We then compare our results and whoever holds the larger number wins $1. What is the best strategy to follow?

That's the type of brainteaser I'd ask. It's accessible to a good high school student. I interviewed a PhD candidate in applied mathematics from a top Ivy League university who:

- wouldn't believe that maximizing the expected value of the number obtained wasn't optimal until shown an explicit counterexample

- was unable to write the equations properly or model the problem

- was unable to solve the equations after I handed them to him

He was however able to talk about his thesis work. Your questions wouldn't have caught that at all. His thesis work was in game theory.




I haven't read up on game theory in a long time, does "best strategy to follow" mean "the strategy that maximizes my expected payoff for all the strategies my opponent could employ"?

So, an explicit counterexample would be my opponent picking a strategy of only re-throwing above .25. His expectation is then .25 * (0 + .25)/2 + .75 * (0 + 1)/2 ~= .40, so I should not re-throw if I get above .40 and below .50, even though re-throwing would raise my own expectation.

Am I thinking about that correctly?
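For what it's worth, the arithmetic in that counterexample is easy to check with a quick simulation. A Python sketch (the function name and cutoff parameter are mine; the strategy is the one described above, keep only first throws at or below .25):

```python
import random

def play_keep_low(rng, cutoff=0.25):
    """One round of the strategy above: keep the first throw only
    if it is <= cutoff, otherwise re-throw."""
    first = rng.random()
    return first if first <= cutoff else rng.random()

rng = random.Random(42)
n = 200_000
estimate = sum(play_keep_low(rng) for _ in range(n)) / n

# Exact value: .25 * (0 + .25)/2 + .75 * (0 + 1)/2 = 0.40625
print(round(estimate, 3))
```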


Don't focus on expectation, focus on probability of winning.


Ok, I'm stumped. If I've parsed your description correctly, we get no info about our opponent's actions or the results until the end. Absent any ability to observe their strategy, it seems like you do want to maximize for expected value of your own actions, and I'm curious about the counterexample.

How do we maximize EV? A single throw's pdf is 1, for x in [0,1], so its EV is 0.5. The question is how to improve on a single throw by deciding to re-throw. A re-throw is independent and gives the same EV. We want a strategy that gives us higher cumulative EV. Say our strategy is that we have a threshold A, where we re-throw any result below A. Because x is uniform, the probability that we re-throw is also A. The cumulative EV of the strategy is A * EV(second_throw) + (1-A) * EV(keep_first_throw). Since we only keep the first throw for results in [A,1], the EV for that event (integrating x * pdf from A to 1) is (1+A)/2. So EV of the whole strategy is A/2 + (1-A) * (1+A)/2. It has max EV when A is 0.5, giving EV of 5/8.

So how do you do better?
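That closed form is easy to sanity-check by brute force. A Python sketch of the derivation above (the function name is mine):

```python
# EV of the "re-throw below threshold A" strategy, from the derivation above:
# re-throw with prob A (EV 1/2), else keep with conditional EV (1+A)/2.
def strategy_ev(a):
    return a * 0.5 + (1 - a) * (1 + a) / 2

# Scan thresholds and find the maximizer.
grid = [i / 1000 for i in range(1001)]
best = max(grid, key=strategy_ev)

print(best, strategy_ev(best))  # maximum at A = 0.5, EV = 0.625 = 5/8
```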


I don't really get it either, but I think the idea is that you're supposed to somehow optimize on the assumption that your opponent is equally likely to be using any strategy. (To my mind this makes the problem very confusing because it's such an unrealistic assumption.) If you assume that the only possible strategies are "rethrow if my first throw is <= x", then you can define a function f(x,y) giving the EV where x is the threshold for your rethrow and y is the threshold for your opponent's. I guess you then integrate with respect to y (to get the EV for any given x assuming a uniform distribution for the opponent's choice of y) and then differentiate the result with respect to x to find the maximum? If the solution is along those lines that would explain the problem's supposed accessibility to good high school students.
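For the curious, that calculation can be carried out numerically under that (nonstandard) uniform-opponent assumption. The sketch below is my own construction, not from the thread: win_prob is the exact probability that a threshold-x player's final number beats a threshold-y player's, obtained by integrating the x-player's density against the y-player's CDF, and avg_win then averages over a uniformly random opponent threshold y.

```python
def win_prob(x, y):
    """P(threshold-x player's final number beats threshold-y player's).

    A threshold-a player's final number has CDF F_a(t) = a*t + max(0, t-a)
    and density a + (1 if t >= a else 0); integrating the x-density against
    F_y piecewise over [0, 1] gives this closed form.
    """
    m = max(x, y)
    return (x * y / 2 + x * (1 - y) ** 2 / 2 + y * (1 - x ** 2) / 2
            + (1 - m ** 2) / 2 - y * (1 - m))

def avg_win(x, steps=2000):
    """Average win probability when the opponent's threshold y is itself
    drawn uniformly from [0, 1] (midpoint-rule integration over y)."""
    return sum(win_prob(x, (i + 0.5) / steps) for i in range(steps)) / steps

grid = [i / 1000 for i in range(1001)]
best = max(grid, key=avg_win)
print(best)  # ~0.54 by this estimate -- not the same as the Nash threshold
```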


The reason maximizing EV isn't the answer is that you care about having a larger number than your opponent, not about having the largest number possible.

Imagine we play a die game where whoever rolls the largest number wins. Which die would you rather play with?

1-1-1-1-1-10^250 or 2-2-2-2-2-2

The first die has an EV of about 1.67e249, the second die an EV of 2. Yet, the second die will win 5 out of 6 games against the other one.
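Concretely, the claim can be checked by enumerating all 36 equally likely face pairs (a Python sketch; variable names are mine):

```python
from fractions import Fraction
from itertools import product

die_a = [1, 1, 1, 1, 1, 10**250]  # huge EV, usually loses
die_b = [2, 2, 2, 2, 2, 2]        # EV of 2, usually wins

# Enumerate all 36 equally likely outcomes; no ties are possible here.
wins_b = sum(b > a for a, b in product(die_a, die_b))
print(Fraction(wins_b, 36))  # 5/6
```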

To solve the problem, you must find the Nash equilibrium of the game. That is, you must find a strategy which the opponent cannot exploit, no matter what he does.


And no, you don't assume that every strategy is equally likely. You assume that the opponent plays optimally with full knowledge of your own strategy, which means you're looking for a Nash equilibrium http://en.wikipedia.org/wiki/Nash_equilibrium


That's fine if you already know game theory, but then the problem is not quite as accessible as you made out. You can't expect someone to just invent the concept of a Nash equilibrium on the spot from the words "best strategy". (Although I understand that you may expect your interviewees to already be familiar with game theory.)


In this case, the candidate's thesis work explicitly dealt with game theory. In addition, this doesn't require much knowledge of game theory... if you think about the problem for a bit, you can re-derive the concept fairly easily.

Nash's brilliance was in proving that under reasonable conditions, a mixed equilibrium always exists, which is far less obvious.


The issue isn't the difficulty of the concept but the ambiguity of "best strategy".


Fine, best strategy given a perfectly rational opponent. There, ambiguity removed.


Well not quite, you also need to specify that the opponent assumes that you are perfectly rational, and that he assumes that you assume that he is, and that he assumes that you assume that he assumes that you are, and so on. A perfectly rational opponent would not assume that you are perfectly rational in the absence of any relevant information. Rather, he would assume that his opponent was equally likely to be using any strategy.


Makes sense. I didn't realize the problem description was for a game that would be played repeatedly with the same opponent -- i.e. one where you can gather information. If we were doing this in person instead of through asynchronous posts and responses on an internet forum, we'd probably iron that out quickly. Thanks for elaborating.


There are two ways of interpreting that result, though. Maybe it shows that mathematical brain teasers aren't a good way of testing for mathematical ability.


I wouldn't describe this as a brainteaser. It's a well-defined math problem, not some BS question about manhole covers or barbers in Chicago.


I suppose it depends on the industry then... These are what we tend to call brainteasers in quant finance, and they're the type of problems you'll find in the book "Heard on the Street" recommended above.


hey murbard, is the answer to this the golden ratio minus 1? (.618...)

to get this I computed the function f(x,y), which is my expected value if I re-roll when my roll <= x and my opponent re-rolls when his roll <= y. let p be the optimal value. by symmetry f(p,p) = 0.5, so the equilibrium must make f stationary in x at that point. solving by taking a one-sided derivative gives me (sqrt(5)-1)/2


Indeed it is, isn't it a nice result? :)

(to be thorough, you also need to show that re-rolling based on a single threshold dominates other strategies, but it's fairly intuitive and not too hard to prove)
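For anyone who wants to check the equilibrium claim without redoing the calculus, here is a numerical sketch (my own, not from the thread; the names p and win_prob are mine). win_prob is the exact probability that a threshold-x player's final number beats a threshold-y player's, obtained by integrating the x-player's density against the y-player's CDF:

```python
p = (5 ** 0.5 - 1) / 2   # the claimed equilibrium threshold, ~0.618

def win_prob(x, y):
    """P(threshold-x player's final number beats threshold-y player's).

    A threshold-a player's final number has CDF F_a(t) = a*t + max(0, t-a)
    and density a + (1 if t >= a else 0); integrating the x-density against
    F_y piecewise over [0, 1] gives this closed form.
    """
    m = max(x, y)
    return (x * y / 2 + x * (1 - y) ** 2 / 2 + y * (1 - x ** 2) / 2
            + (1 - m ** 2) / 2 - y * (1 - m))

# p is the root of p^2 + p - 1 = 0, i.e. the indifference condition
# "P(win holding p) = P(win after a re-roll)" against an opponent playing p.
assert abs(p * p + p - 1) < 1e-12

# Equilibrium check: no unilateral deviation from p helps.
grid = [i / 1000 for i in range(1001)]
best = max(grid, key=lambda x: win_prob(x, p))
print(best, win_prob(best, p))  # best response to p is p itself, value ~0.5
```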



