
It’s pointless. They aren’t rational. Any argument you come up with that contradicts their personal desires will be successfully “reasoned” away by them because they want it to be. Your mistake was ever thinking they had a rational thought to begin with; they think they are infallible.





"Widespread robots that make their own decisions autonomously will probably be very bad for humans if they make decisions that aren't in our interest" isn't really that much of a stretch is it?

If we were going slower, maybe it would seem more theoretical. But there are multiple ongoing efforts, at Manhattan-Project scale or (often much) larger, to explicitly create software and robotic hardware that makes decisions and takes actions without any human in the loop.

We don't need some kind of 10000 IQ god intelligence if a glitch token causes the majority of the labor force to suddenly and collectively engage in sabotage.


None of those projects are even heading in the direction of "AGI". The state of AI today is something akin to what science fiction would have called an "oracle", a device that can answer questions intelligently or seemingly intelligently but otherwise isn't intelligent at all: it can't learn, has no agency, and does nothing new. Even if that can be scaled up indefinitely, there's no reason to believe that it will ever become an AGI.

If any of this can make decisions in a way that a human can, then I would start to question what human decision-making really amounts to.


For what it's worth, as the person at the top of this thread, I actually do take AI risk pretty seriously. Not in a singularitarian sense, but in the sense that I would be quite surprised if AI weren't capable of this stuff in ten years.

Even the oracle version is already really dangerous in the wrong hands. A totalitarian government doesn't need to assign someone to listen in on a few specific dissidents if it can have an AI transcriber write down every word of every phone conversation in the country, for example. And while it's certainly not error-proof, asking an LLM to do something like "go through this list of conversations and flag anything that sounds like anti-government sentiment" is going to get plenty of hits, too.
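To be concrete, here's roughly what that looks like in practice. This is a minimal sketch, assuming the OpenAI Python SDK; the model name and prompt wording are placeholders, and any chat-completion API would do:

    # Hypothetical sketch: mass-flagging transcripts with an off-the-shelf LLM API.
    from openai import OpenAI

    client = OpenAI()  # assumes an API key is set in the environment

    def flag_transcript(transcript: str) -> str:
        # Ask the model to label a single transcript; the prompt is a placeholder.
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system",
                 "content": "Flag anything that sounds like anti-government "
                            "sentiment. Reply FLAG or CLEAR with a one-line reason."},
                {"role": "user", "content": transcript},
            ],
        )
        return response.choices[0].message.content

    # Loop this over millions of transcripts and no human ever has to listen in.

The point isn't that this particular snippet is what anyone would deploy; it's that the whole pipeline is a few dozen lines of glue code around an off-the-shelf model.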


> "Widespread robots that make their own decisions autonomously will probably be very bad for humans if they make decisions that aren't in our interest" isn't really that much of a stretch is it?

We already have widespread humans autonomously making their own decisions that aren’t in the best interest of humans, and we’re all still here.


Last time I checked, robots need lots of energy, batteries suck, and energy infrastructure is fragile and non-redundant. I think we'll be fine.


