
Can you elaborate on that?



I'm reading that as: in a world of scarce resources, there is always going to be something higher priority to deal with than extremely low-risk catastrophic/existential events.


risk = probability * impact

- climate change: very high probability * very high impact

- terrorism: low/med probability * low/med impact

- thermonuclear war: low-ish(?) probability * extremely high impact

- superhuman AGI: ??? probability * ??? impact (likely very high impact, but + or - is uncertain)

In a standard rational decision-making framework, if: cost of action < risk, then: take action.

[Edit: Of course this is all a gross simplification. See more here https://en.m.wikipedia.org/wiki/Optimal_decision]
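A minimal sketch of that decision rule in Python, with invented placeholder numbers rather than estimates of any real risk:

    # Hedged sketch: risk as expected loss; act when acting is cheaper than the risk.
    # All numbers are illustrative placeholders, not real-world estimates.

    def risk(probability: float, impact: float) -> float:
        """Expected loss: probability of the event times its impact (arbitrary units)."""
        return probability * impact

    def should_act(cost_of_action: float, probability: float, impact: float) -> bool:
        """Take action when the cost of acting is less than the expected loss of inaction."""
        return cost_of_action < risk(probability, impact)

    print(should_act(cost_of_action=10, probability=0.8, impact=100))   # True: 10 < 80
    print(should_act(cost_of_action=10, probability=0.05, impact=50))   # False: 10 > 2.5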


Well said, but I believe the “thinking” problem arises because we are not generally wired to think statistically. We can assess severity fairly well, but we have all kinds of cognitive biases that get in the way of accurate probability assessments. [1]

[1] https://en.m.wikipedia.org/wiki/List_of_cognitive_biases


That’s a frequently used formula for security, and it’s dumb there too, manipulated to explain away problems or get money.

Impact is more like “perceived impact to me” and risk is over a timeline that matters to the assessor of risk.

War hawks see nuclear war as medium probability, low impact. Most people see climate change as something impactful in a timeframe they don’t give a shit about, and terrorism is medium or high impact with medium or high probability in a timeframe that matters to them.


Sorry you feel that it’s dumb, because it’s not so much a formula as the basic concept (in words) of an expected value, which is perhaps the most fundamental concept in statistics, ML, decision theory, game theory, economics, medical decision-making, and more. As I mentioned, the application of this concept gets much spicier (and more useful) once you start accounting for random variables, priors, model assumptions, etc. There are centuries of academic and practical thought devoted to these topics.

The “definition” of risk that I gave is a useful heuristic, even for my cat. He seems to calculate the probability of being caught on the kitchen counters, conditional on the number of people home and the time of day. He knows that he may get spritzed for violations and can balance this against the reward (simply chillin’ = stern warning; eviscerating a briefly undefended loaf of fresh challah = direct barrage from the water bottle). Clearly, this shrewd exploitation of Lady Luck is a learned behavior. (Whether it is consciously strategic behavior or just serendipitously adaptive cat programming is actually immaterial in the game-theoretic sense—mixed-strategy Nash equilibria that result from evolutionary dynamics are equivalent to those that arise from strategic deduction.)
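For fun, here is a toy version of the cat's arithmetic, with invented numbers in arbitrary "cat utility" units; nothing here is measured:

    # Toy sketch of a conditional expected value. p_caught is conditioned on how many
    # people are home; rewards and penalties are invented placeholder values.

    def p_caught(people_home: int) -> float:
        # Assumption: each person at home independently has a 60% chance of noticing.
        return 1 - 0.4 ** people_home if people_home > 0 else 0.0

    def counter_raid_value(people_home: int, reward: float, penalty: float) -> float:
        """Expected value of a counter raid: reward if unseen, penalty if caught."""
        p = p_caught(people_home)
        return (1 - p) * reward - p * penalty

    print(counter_raid_value(people_home=2, reward=1, penalty=3))    # negative: stay off
    print(counter_raid_value(people_home=0, reward=10, penalty=3))   # positive: challah time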

Anyways, you’re right that the original motivation for probability theory and expected value was to “get money” (via gambling): https://en.m.wikipedia.org/wiki/Problem_of_points


My comment came off more snarky than intended. I usually see that concept presented as if it were science, but the details of the inputs are very much opinion.

Your cat is honest.

Humans take great leaps to dial in the answer they want!


> terrorism is medium or high impact

This implies that the best solution to terrorism is to abolish national news, in order to diminish its impact.


Perhaps abolishing cable news or meaningful adoption of the public mandate.

Think about how OJ Simpson ushered in our era of shit news.


> In a standard rational decision-making framework, if: cost of action < risk, then: take action.

Yes, but cost of action includes opportunity cost of not doing all the alternative actions for which you now suddenly don’t have the time.

Which makes rational decision making incredibly hard!

Example: focus all the effort on preventing climate change by reducing CO2 right now, or work on improving GDP for the entire world so we have enough money and tech to mitigate the downsides?


The first sentence of the link I posted accounts for opportunity cost: “An optimal decision is a decision that leads to at least as good a known or expected outcome as all other available decision options.”

And there is much, much more than opportunity cost that makes decision theory difficult. However, the basic definition of expected value (e.g. both risk and reward)—and decision making on the basis of it (e.g. cost-benefit analysis)—is so universally useful that I believe every living creature relies on this concept in some way or another (even if things are decided for them via evolutionary fitness—which we probably shouldn’t let happen to us as a civilization).
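To make that quoted definition concrete, here is a hedged sketch: pick the option whose expected outcome is at least as good as every alternative's, which folds opportunity cost in automatically. The options and numbers are invented for illustration:

    # Sketch of an optimal decision as the argmax of expected value over all options.
    # Each option maps to a list of (probability, payoff) outcomes; values are made up.

    options = {
        "do_nothing":   [(1.0, 0.0)],
        "mitigate_now": [(0.7, 50.0), (0.3, -10.0)],
        "grow_gdp":     [(0.5, 40.0), (0.5, -5.0)],
    }

    def expected_value(outcomes):
        return sum(p * payoff for p, payoff in outcomes)

    best = max(options, key=lambda name: expected_value(options[name]))
    print(best, expected_value(options[best]))   # picking anything else incurs an opportunity cost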

As for your question, I reject the premise that there exists a binary choice between “focus all the effort on...reducing CO2 right now” vs. having “enough money and tech to mitigate the downsides” (an idea I’d also reject as frankly absurd—how will money and tech address a 50% species extinction rate?). Speaking of which, there are many classes of things that defy the concept of market value or utility—like the existence of species—which is where this framework can fall down. We can’t recognize their value as infinite without reaching absurdities (like I dunno, destroying everything else in the universe to preserve toucans or something). But preserving the future inhabitability of the planet for humans and other living beings is clearly of such high aggregate value that it is rational to devote significant effort towards mitigation (beginning with the actions that have the highest benefit/cost ratio).
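As a sketch of "beginning with the actions that have the highest benefit/cost ratio" (the actions, costs, and benefits below are placeholders, not real estimates):

    # Greedy prioritization: fund actions in order of benefit/cost ratio while budget remains.
    # All entries are invented placeholder values.

    actions = [
        {"name": "efficiency_retrofits", "cost": 10, "benefit": 60},
        {"name": "grid_decarbonization", "cost": 40, "benefit": 120},
        {"name": "reforestation",        "cost": 15, "benefit": 30},
    ]

    budget = 50
    chosen = []
    for action in sorted(actions, key=lambda a: a["benefit"] / a["cost"], reverse=True):
        if action["cost"] <= budget:
            chosen.append(action["name"])
            budget -= action["cost"]

    print(chosen)   # ['efficiency_retrofits', 'grid_decarbonization']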

By the way, I highly recommend the book “Beyond Growth” by Herman Daly as an introduction to ecological economics and the failure of GDP as a measure of progress.


> very high probability of climate change.

There is a 100% probability. It has been happening since there was a climate. The question is how much we can really affect it, or even whether we should.


Rephrase it as "very high probability of human-induced climate change" and the probability stays the same, but the impact of taking action changes completely.

If the change is human-induced and occurred on a very short timeframe (i.e. since the industrial revolution), then action on a similar scale should be possible to mitigate the impact.


I think something else is being said, along the lines of: our priorities and concerns can be hijacked as a means to a different end. Which is almost certainly happening.


Money is effectively priority, or at least control over the priorities of others. How do we ensure a greater return on investment for doing things that are actually in the public interest, so that people with money choose to use it to get others to take actions that prioritize that interest?




