
>As far as a pandemic or nuclear war, though, I'd probably put it on more of the level of a K-T extinction event. Humans are doing some work on asteroid redirection, but I don't think it is a global priority.

I think it's better to frame AI risk in terms of probability. I think the really bad case for humans is full extinction or something worse. What you should be doing is putting a probability distribution over that possibility instead of trying to guess how bad it could be; it's safe to assume it would be maximally bad.




More appropriate is an expected value approach.

That is, despite it being a very low probability event, it may still be worth remediating due to the outsized negative value if the event does occur (see the sketch below).

For example, many engineering disciplines incorporate safety factors to mitigate rare but catastrophic events.

If something is maximally bad, then it necessitates some deliberation on ways to avoid it, irrespective of how unlikely it may seem.
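
A minimal sketch of the expected-value framing described above, assuming a catastrophic event with probability p and cost V (both placeholders, not estimates from this thread):

\[
\mathbb{E}[\text{loss}] = p \cdot V
\]

Even for very small p, the expected loss stays large when V is large enough, which is the same logic behind safety factors for rare but catastrophic failures.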


Exactly. Taken to the limit, if you extrapolate how many future human lives could be extinguished by a runaway AI, you get extremely unsettling answers. The expected value of a 0.01% chance of extinction from AI might be trillions of quality human lives. (This could in fact be on the very, very conservative side; e.g., Nick Bostrom has speculated that there could be 10^35 human lives to be lived in the far future, which is itself a conservative estimate.) With these numbers, even setting AI risk absurdly low, say 1/10^20, we might still expect to lose on the order of 10^15 lives (arithmetic sketched below). (I'd argue even the most optimistic person in the world couldn't assign a probability that low.) So the stakes are extraordinarily high.

https://globalprioritiesinstitute.org/wp-content/uploads/Tob...
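
As a quick check of the arithmetic above, assuming a 0.01% extinction probability applied to an illustrative 10^16 future lives (a figure not given in the comment, chosen only so the product lands in the trillions), and the 1/10^20 probability applied to Bostrom's 10^35:

\[
10^{-4} \times 10^{16} = 10^{12} \quad \text{(trillions of lives)}, \qquad
10^{-20} \times 10^{35} = 10^{15} \ \text{lives}.
\]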


Actual people are vastly outnumbered by entropy fluctuations in the long run, according to any number of valid cosmological models. Their total population is greater than 10^35 by far, and does not depend on whether superintelligence is possible or likely to destroy the actual human race. That question makes no difference to almost every human who will ever have a thought or feeling.

You could say, well, that's just a theory -- and it is a dubious one, indeed. But it's far more established by statistics and physical law than the numbers Bostrom throws out.


The last gas I passed had 10^23 entropy fluctuations in it. So you're saying those are more important than the 100 billion human lives that came before them?


I'm saying that it's unscientific nonsense. Bostrom has predicted that the number of people potentially in the future is between approximately 0 and 100,000,000,000,000,000,000,000,000,000,000,000. This isn't an estimate that can or should factor into any decision at all. 100 billion people are going to live in the 21st century. Worry about them.


I think discounting future lives because you can't precisely count them is immoral at best and actively evil at worst.



