> They're a continuation of endless, open, 'possible', unfounded allegations, questions, and speculation.
I'm not entirely sure what this means. My intention was to better understand your mental model, which appears a bit sloppy, or at least more concerned with its conclusion than with its integrity as a rational process.
I guess what I'd say (constructively) is that if it is too much work to articulate your argument as a tree of probabilistic scenarios, with each player's strategic moves themselves having probabilistic outcomes, then I think that might be a clue that you don't actually believe your own argument.
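To make the tree idea concrete, here's a minimal sketch of the kind of structure I mean. The moves, probabilities, and payoffs below are invented placeholders for illustration, not claims about any real actor or scenario:

```python
# Each strategic move maps to its possible outcomes, each given as
# (probability, payoff). All numbers here are made up for illustration.
tree = {
    "escalate": [
        (0.7, -10),  # likely costly failure
        (0.3, +20),  # less likely big win
    ],
    "negotiate": [
        (0.8, +5),   # likely modest gain
        (0.2, -2),   # small chance of a minor loss
    ],
}

def expected_value(outcomes):
    """Probability-weighted average payoff of a move."""
    return sum(p * payoff for p, payoff in outcomes)

for move, outcomes in tree.items():
    print(f"{move}: EV = {expected_value(outcomes):+.1f}")
# escalate: EV = -1.0
# negotiate: EV = +3.6
```

The point is that once the argument is laid out this way, each probability and payoff becomes a discrete thing we can dispute, rather than one opaque conclusion.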
One example I'd offer about how I think we can understand ourselves better by using probabilistic reasoning is this:
We routinely make important decisions based on an imperfect understanding of our own motives and preferences. This is why the technique of flipping a coin to settle a difficult A vs B decision is so powerful. The decision is difficult precisely because we expect to be roughly equally happy with either option, so letting the coin toss make the decision preserves our utility maximization (to the best of our knowledge). Yet sometimes, when the coin lands, we feel regret, which indicates that we actually preferred the other option all along. The practice is very illustrative of how opaque our own preferences and uncertainty can be to us.
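To spell out the arithmetic behind "preserves our utility maximization" (my notation, with U for utility): if U(A) = U(B) = u, then the flip's expected utility is ½·U(A) + ½·U(B) = u, so delegating the choice to the coin costs us nothing in expectation; the regret we sometimes feel at the result is evidence that the equality never really held.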
So when considering the decision-making of people we can't ask directly for details, we should assume that there was nearly always a fair amount of uncertainty behind every decision. The more external evidence there is that the person is deeply rational (as world leaders nearly always are), the more confident we can be that the decision was not guided by a deluded sense of the outcome probabilities.
Thus, for actions that involve many steps taken blind (without feedback on the outcome), we must assume either that the actor is indifferent to the outcome, or that outcomes other than the most desired one carry some benefit of their own and that the potential costs are well understood.
We don't need to know everything about the actor's decisions or expected probabilities to reason about his actions; we can learn a great deal by outlining the things we feel confident about and checking whether the remaining pieces of our theory fit. That was the intention behind the questions I posed: to help us both scrutinize your view more thoroughly, since if you are right, I'd very much like to agree with you.