
Do you remember the days when the computing power that's now in your pocket took up an entire floor of a building? Because I do.

If that is somehow the only barrier between humanity and annihilation, then things don't bode well for us.




Sure. Despite all that progress, computers still have an off switch and power efficiency still matters. It actually matters more now than in the past.


What you're arguing is "what is the minimum viable power envelope for a superintelligence?". Currently that answer is "quite a lot". But for the sake of cutting out a lot of argument, let's say you have a cellphone-sized device that runs on battery power for 24 hours and can support a general intelligence. Let's say, again for argument's sake, there are millions of devices like this distributed among the population.

Do you mind telling me how exactly you turn that off?

Now we're lucky in the sense that we don't have that today. AI still requires data centers drawing massive amounts of power and running up huge cooling bills. Maybe AI will forever require stupidly large amounts of power. But then again, a Cray supercomputer required stupid amounts of power and space, and your cellphone has leaps and bounds more computing power than it did.
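For a rough sense of the gap (back-of-envelope only; the figures below are approximate and from memory, not exact specs):

    # Rough Cray-1 vs. modern phone comparison. Assumed, approximate figures:
    # Cray-1 (1976): ~160 MFLOPS peak at ~115 kW.
    # Recent phone GPU/NPU: ~1 TFLOPS at ~5 W under load.
    cray_flops, cray_watts = 160e6, 115e3
    phone_flops, phone_watts = 1e12, 5.0

    print(phone_flops / cray_flops)  # ~6,000x more raw throughput
    print((phone_flops / phone_watts) / (cray_flops / cray_watts))
    # -> roughly a hundred-million-fold improvement in FLOPS per watt

Whether that curve keeps bending far enough to fit a general intelligence into a phone's power budget is exactly the open question.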


No I'm not arguing that. My point is that if an AI is trying to take over the world we can just turn it off, regardless of power budgets.

If it runs on millions of hacked devices, how do you turn it off? The same way any botnet gets turned off: virus scanners clean it up, the C&C servers get taken down, etc. This is not a new problem.

The usual response to this is to claim that superintelligence will develop so fast that we'll go from crappy chatbot to Skynet in 5 seconds flat, and nobody will have a chance to turn it off. That whole scenario is unmoored from reality: it assumes multiple massive leaps in tech that aren't anywhere on the horizon, and it's unclear why anyone would pay for such massive CPU overkill in the first place.


So how do you turn off terrorists? We've spent billions of dollars on a global war on terror, and they're still out there.

You keep acting like AGI is just going to be some other program. And yes, we do not have AGI yet: no planning, no continuous chain of thinking/learning at this point. But thinking that anti-virus would have a chance in hell against AGI is pretty weak, when AV tends to fall apart pretty quickly with a human behind the computer. Again, thinking of this as an application and not an adversary would be an incredible mistake. Taking out global adversaries that attack over the internet is near impossible, especially if they have shelter in a hostile foreign country. And this isn't going to be like a human adversary, where you kill the leader and it stops. People would be harboring copies for the lulz.

>unclear why anyone would pay for such massive CPU overkill in the first place.

It's also unclear why companies like Google have tens of thousands of engineers at times, but if the application produces useful results, and its corporate masters think they'll make more profit from it than the operating costs, they will gladly keep pouring coal into the furnace for it. And in military applications, one side will build a more powerful AI because they fear the other side will build a more powerful AI and gain an advantage. We already spend billions of dollars a year maintaining enough nuclear weapons to flash-fry most of the population on Earth.


AGI will be a computer program, unless you're imagining some entirely non-computer based form of artificial intelligence. And it will therefore obey the rules of normal computer programs, like vulnerability to SIGKILL.

Yes, you can assert that an AGI would be a superhuman-level hacker and so no rules would apply. It gets back to the same place all discussions about AGI risk get to: a religious argument that the entity in question would be omniscient and omnipotent and therefore can do anything. Not interested in such discussions, they lead nowhere.

Terrorists aren't programs or even a well defined group, so I don't see the analogy.


As you mentioned, the devices have a battery that lasts 24 hours.

You literally have to do nothing, and it will shut off after 24h.

Do you think people would plug them back in if they were working on killing us all?

On another note, I wonder what kind of metric could be used to measure the "computing", if that's what is happening, going on in our brains? It would be interesting to compare that to the power consumption of the brain, and then do the same for a computer, or GPT-4.

I'm fairly certain our brains process orders upon orders of magnitude more than any computer running today, but that's all biased towards being a human being rather than pure computation; much of the processing is "wasted" on running muscles and deciphering audio and visuals.
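If you want a back-of-envelope version of that comparison (the brain figures below are commonly cited but very uncertain estimates, and the "operations" on each side aren't really the same thing):

    # Hypothetical efficiency comparison under rough assumptions:
    # brain: ~1e14 synapses x ~10 Hz ~= 1e15 synaptic events/s at ~20 W;
    # one datacenter GPU: ~1e15 low-precision FLOPS at ~700 W.
    brain_ops, brain_watts = 1e15, 20.0
    gpu_ops, gpu_watts = 1e15, 700.0

    print(brain_ops / brain_watts)  # ~5e13 "ops" per joule
    print(gpu_ops / gpu_watts)      # ~1.4e12 FLOPS per joule
    # On these assumptions the brain is roughly 35x more energy-efficient,
    # and that's while also running muscles, vision, and hearing.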


> let's say you have a cellphone-sized device that runs on battery power for 24 hours and can support a general intelligence

I can accept that it would be hard to turn off. What I find difficult to accept is that it could exist. What makes you think it could?


You, a general intelligence, operate on around 20 watts of power, so we could use that as a floor. Analog inference is one of the areas being worked on that may massively lower power requirements.


We are also hypermobile generally flexible robots that can run on the occasional pizza.

That doesn’t mean we are anywhere near that level of performance.


OK, sure, in principle, somewhere in the universe, such a thing could arise. Why do you think there's a path to it arising on planet earth within human timescales?


Because humans keep pouring massive amounts of money into making this happen. When you invest dedicated effort in making something happen, you greatly increase the probability that it happens. And in this case it is a reinforcing feedback loop: better technology begets better technology, and intelligence begets more intelligence.

Evolution has to take a random walk to get where it is; it doesn't necessarily have a goal beyond continuation. Intelligence shortcuts that: it can apply massively iterative effort towards a goal.


Yeah, but they can't make it happen if it's impossible. If humans poured massive amounts of money into a perpetual motion machine I wouldn't expect it to happen. So what is it that makes you believe that artificial general intelligence is possible to get to at all?


Humans also have an "off switch". It's fail-safe as well: they die of dehydration and starvation even if you do nothing. If that's too slow, you can just shoot them in the head.



