
And, yet, humanity is still around with more people than ever on earth.



If they'd wiped us out, we wouldn't be here to argue about it.

We can look at the small mistakes that only kill a few, and pass rules to prevent them; we can look at close calls for bigger disasters (there were a lot of near misses in the Cold War); we can look at how frequency scales with impact, and calculate an estimated instantaneous risk for X-risks; but one thing we can't do is forecast the risk of tech that has yet to be invented.
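(To make the "frequency scales with impact" step concrete: one rough approach, with entirely made-up numbers, is to fit a power law to historical frequency-severity data and extrapolate the tail out to extinction-scale events. Nothing in this thread specifies any data or method; the snippet below is only a sketch of the kind of extrapolation being gestured at.)

    # Toy power-law extrapolation: all numbers are invented for illustration.
    import numpy as np

    # Hypothetical record: disaster severity (deaths) vs. observed events per century
    severity = np.array([1e3, 1e4, 1e5, 1e6, 1e7])
    rate = np.array([50.0, 12.0, 3.0, 0.7, 0.15])

    # Straight-line fit in log-log space: log10(rate) = a + b * log10(severity)
    b, a = np.polyfit(np.log10(severity), np.log10(rate), 1)

    # Extrapolate to an extinction-scale severity (~8e9 deaths)
    est = 10 ** (a + b * np.log10(8e9))
    print(f"extrapolated rate: {est:.2e} events per century")

The point above, of course, is that a fit like this only covers hazards we already have a track record for, not tech that has yet to be invented.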

We can't know how many (or even which specific) safety measures are needed to prevent extinction by a paperclip maximiser unless we get to play god with a toy universe where the experiment can be run many times. That doesn't mean "it will definitely go wrong"; it could equally well mean our wild guess about what safety looks like contains the one weird trick that makes all AI safe, but we never recognise that trick and instead pile 500 completely useless requirements on top of it that do absolutely nothing.

We don't know, we're not smart enough to know.


Exactly. Wasting large efforts on de-risking purely hypothetical technology isn't what got us to where we are now.


The people working on the Manhattan Project did more than zero de-risking about nukes while turning them from hypothetical to real.


At that time there was nothing hypothetical about them anymore. They were known to be feasible and practical, not even requiring a test for the Uranium version.


How is it not a double standard to simultaneously treat a then-nonexistent nuclear bomb as "not hypothetical", while looking at the AI that already exists, and what it already does, and saying "it's much too early to try to make this safe"?


There was nothing hypothetical about a nuclear weapon at that time - it "simply" hadn't been built yet, but it was very clear that it could be built within a rather finite time. There are a lot of hypotheticals about creating AGI and about existential risk from A(G)I. If we are talking about the plethora of other risks from AI, then, yes, those are not all hypothetical.


I gave a long list of things humans do that blow up in their faces, some of which were A-no-G-needed-I. The G means "general", which is poorly defined and means everything and nothing in group conversation, so any specific, concrete meaning can land anywhere on a scale. At the low end are relatively low-generality but definitely existing issues: "huh, LLMs can do a decent job as fully personalised propaganda agents", or "can we, like, not give people usable instructions for making chemical weapons at home?". In the middle is the stuff we're actively trying to develop (simply increasing automation), with risks that pattern-match to what has already gone wrong: what happens if you take all the environmental issues we're already seeing in the course of industrial development, but deploy and scale them at machine speeds rather than human speeds? At the far end is stuff like "is there such a thing as a safe von Neumann probe?", where we absolutely do know they can be built, because we are von Neumann replicators ourselves, but we don't know how hard it is, how far we are from it, or how different a synthetic one might be from an organic one.


Some risks in that list are worth more mitigation effort than others. Focusing on the far-out ones would need more than stacked hypotheticals to justify diverting resources to them.

At the low end, chemical weapons from LLMs would, for example, not be on my list of relevant risks; at the high end, some notions of gray goo would also not make the list.


What sorts of de-risking are you referring to?



