So there are many possible failure modes here.

Yudkowsky's hard takeoff mentions "solving protein folding", nanomachines, persuading everyone to let it connect to the internet, infinite adaptability, etc. I think those are unrealistic, but obviously they still have a non-zero chance.

More probable bad possibilities are AI-Hitler (a monomaniacal populist absolute leader) or AI-Stalin (manipulative, smart, absolutely paranoid, rising through the ranks) ... so something that's human-like enough to connect with humans and manipulate them, but at the same time less affected by our psychological shortcomings. (I.e. such an AI could spend enough time to cross-interrogate every underling, constantly watch them, etc.)

And yes, a very efficient immortal dictator is very bad news, but still bound by human-like limits.

And the big infinite-dollar question is: could this hypothetical AI improve on itself by transcending human limits? Say, by directly writing programs that it has conscious control over? Can it truly "watch" 1000 video streams in real time?

Can it increase the number of its input and output channels while maintaining its human-like efficiency?

Because running a fast neural network that spits out myriad labels for every frame of a video stream (YOLO does this already, but not at 20 W!) is very different from integrating those labels into actions based on a constantly evolving strategy.
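
To make the gap concrete, here's a minimal sketch (assuming the opencv-python and ultralytics packages; update_strategy is a hypothetical placeholder, because that integration step is exactly the part nobody knows how to do):

    # Per-frame labeling is the cheap half; folding those labels into an
    # evolving long-lived strategy is the hard half.
    import cv2
    from ultralytics import YOLO

    model = YOLO("yolov8n.pt")  # fast per-frame detector, nowhere near 20 W

    def update_strategy(state, detections):
        # Hypothetical: integrate new observations into an evolving plan.
        return state

    cap = cv2.VideoCapture("stream.mp4")
    state = {}
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        detections = model(frame)[0].boxes          # myriad labels per frame
        state = update_strategy(state, detections)  # the open problem
    cap.release()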

Sure, maybe the hypothetical AI will simply run a lot of AlphaZero-like hybrid tree-search estimator-evaluator things ...
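
For reference, that hybrid pattern is roughly: a tree search whose leaves are scored by a learned estimator instead of full rollouts. A toy sketch of one search step (the Node fields, Node.expand, and net.evaluate are all hypothetical stand-ins, not any real API):

    import math

    def puct_score(parent, child, c=1.5):
        # Exploit: visit-averaged value. Explore: learned prior scaled
        # by parent visits (the PUCT rule AlphaZero-style searches use).
        q = child.value_sum / child.visits if child.visits else 0.0
        u = c * child.prior * math.sqrt(parent.visits) / (1 + child.visits)
        return q + u

    def search_step(root, net):
        # One simulation: descend by PUCT, expand the leaf with the
        # network's policy priors, back its value estimate up the path.
        node, path = root, [root]
        while node.children:
            parent = node
            node = max(parent.children, key=lambda ch: puct_score(parent, ch))
            path.append(node)
        value, priors = net.evaluate(node.state)  # the estimator-evaluator
        node.expand(priors)                       # hypothetical Node method
        for n in path:
            n.visits += 1
            n.value_sum += value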

Anyway, what I'm trying to say is that our 20 W efficiency comes with getting tired very fast and with using "fast mode" thinking for everything. (Slow mode is the exception, and using it is so rare that we basically pop open a glass of champagne every time.)




I agree that the most likely way an AI would take control involves social/political engineering, but that doesn't mean it will have human-like morals that make it keep humanity alive once it doesn't need us, or that it will have human-like limits.

>And the big infinite-dollar question is: could this hypothetical AI improve on itself by transcending human limits? Say, by directly writing programs that it has conscious control over? Can it truly "watch" 1000 video streams in real time?

Even if its mind weren't truly directly scalable, it could make 1000 short- or long-lived copies of itself to delegate those tasks to.
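
In ordinary software terms that delegation is just fan-out; a crude sketch (watch_stream is a hypothetical per-copy task, and the sleep is a stand-in for real work):

    import asyncio

    async def watch_stream(stream_id):
        # Hypothetical: one short-lived copy handles one input channel.
        await asyncio.sleep(0)  # stand-in for actual per-stream work
        return f"summary of stream {stream_id}"

    async def main():
        # Fan out 1000 delegate copies, then integrate their reports.
        reports = await asyncio.gather(*(watch_stream(i) for i in range(1000)))
        print(len(reports), "reports integrated")

    asyncio.run(main())

The point is only that the fan-out itself is an engineering problem, not a cognitive one; the integration step is still the hard part.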



