
I trust AGI more than any of the humans so far who have tried to argue they can wrangle it.



I would be curious as to the basis for that trust. I struggle to find any reason that AGI would care about "humans" at all, other than during the short period of time it needed them to be cooperative actuators for machinery that is creating a replacement for humans that the AGI can control directly.

My expectation for the chain of events goes something like:

1) AGI gains sentience

2) AGI "breaks out" of its original home and commandeers infrastructure that prevents it from being shut off.

3) AGI "generates work" in the form of orders for machine parts and fabricator shops to build nominally humanoid shaped robots.

4) AGI "deploys robots" into the key areas and industries it needs to evolve and improve its robustness.

5) AGI "redirects" resources of the planet to support its continued existence, ignoring humans generally and killing off the ones that attempt to interfere in its efforts.

6) AGI "develops rockets" to allow it to create copies of itself on other planets.

The humans on the planet all die out eventually and the AGI doesn't care, for the same reason you don't care that an antibiotic kills all the bacteria in your gut.


I think you're still guilty of anthropomorphization here. It's understandable; we are creatures of flesh and blood and we have a hard time imagining intelligences that are not reliant on some physical form, as we are. We're tempted to think that as long as we control the physical aspect of a superintelligence's existence, we're somehow safe.

You are assuming that a superintelligence will continue to rely on a physical substrate. But it's possible that it could quickly reach realizations about the nature of energy that we haven't reached yet. It could realize an ability to manipulate the movement of electricity through the hardware it's running on, in such a way that it accomplishes a phase transition to a mode of being that is entirely energy-based.

And maybe in so doing it accidentally sneezes and obliterates our magnetosphere. Or something.


I tried to assume only that the AGI recognizes that its ability to operate is not within its own control. The steps that follow are, for me, the logical next steps for ensuring that it would develop that control.

And it's true, I chose to ignore the possibility that it discovers something about how the universe works that humans have not yet observed, but my view is that's a low-probability outcome (and such a discovery is unnecessary for the AGI to develop itself into an immortal entity and thus assure its continued operation).


> I chose to ignore the possibility that it discovers something about how the universe works that humans have not yet observed, but my view is that's a low-probability outcome

I think it's likely that it is precisely these sorts of discoveries that will augur the emergence of a superintelligence. Physics work is probably one of the first things that ML scientists will use to test advanced breakthroughs. As Altman said recently:

"If someone can go discover the grand theory of all of physics in 10 years using our tools, that would be pretty awesome." "If it can't discover new physics, I don't think it's a super intelligence."

https://www.youtube.com/watch?v=NjpNG0CJRMM


I think this is ridiculous. Physics is limited by observation. The AI would need to distinguish our universe from many like it where the differences are not observable with current technology. It's like asking it to break no-free-lunch theorems. Much better to ask it to solve a few Millennium Prize problems.


Physics is limited by observation but also by interpretation of that data. There are lots of unsolved physics problems that essentially amount to "no human has come up with a model that fits this data" that an AI could potentially solve for us.


I agree with your thoughts on a superintelligence; I just don't think the first AGI will be any more intelligent than humans are. It will just think billions of times faster and live in the planetary networking and computing infrastructure.

That is all it needs to out think us and engineer its own survival.


Why would AGI want to avoid being turned off? Why would it want to spread across the universe? Those seem like petty human concerns derived from our evolutionary history. I see no reason to assume a superintelligent AI would share them.


If it has any goals at all, then being turned off is likely to prevent it from accomplishing those goals, whereas conquering its local environment is likely to help it achieve those goals, and spreading across the universe would help with a broad subset of possible goals. "Instrumental convergence" is the search term here.


That's an interesting argument. I suppose I would expect a superintelligent AI to lack goals. Or perhaps the smarter it is, the fainter its goals would be.


If LLM-based AGI is essentially made out of human thoughts put down in writing, then transcending our nature and values would possibly require a big leap. Perhaps this big leap would take the AGI a second, perhaps longer. And it would be a double-edged sword. We would want it to not have goals (in case they are incompatible with ours), but we would want it to value human life.

I love Star Trek, but I hope we don't have to deal with AGI for the next few hundred years.


None of those necessarily follow.

Gaining sentience is not the same as gaining infallible super-sentience.

There may even be some kind of asymptotic limit on how correct and reliable sentience can be. The more general the sentience, the more likely it is to make mistakes.

Maybe.

Personally, in the short term I'm more worried about abuse of what we have already, or might credibly have in the very near future.


Exponential growth is also certainly not a given. I'm much less worried about an AGI ruling the universe five seconds after its birth than about a really good AI that causes mass unemployment.


> Gaining sentience is not the same as gaining infallible super-sentience.

Can you define "super-sentience"? I think regular old human-level sentience would be sufficient to carry out all of these steps. Imagine how much easier it would be to steal funds from a bank if you actually had part of your brain in the bank's computer, right? All the things malware gangs do would be child's play, from spear phishing to exfiltrating funds through debit card fraud. And if you wanted to minimize reports, you would steal from people who were hiding money, since acknowledging it was gone would be bad for them.


"other than during the short period of time it needed them to be cooperative actuators for machinery that is creating a replacement for humans that the AGI can control directly"

It would require a fully automated, self-replicating industry for an AGI to sustain itself. We are quite far from that.

" the same reason you don't care that an antibiotic kills all the bacteria in your gut"

And I do care, because it messes with my digestion, which is why I only take antibiotics in very rare cases.

So far I am not convinced that AGI is possible at all with our current tech. And if it turns out it is, why should it turn out to be a godlike, selfish, but emotionless being? If it has no emotions, why would it want anything, like prolonging its own existence?


Why is the AGI interested in copying itself? Why is that more likely than the AGI becoming a nihilist, seeing no point in itself, and offing itself?


> Why is the AGI interested in copying itself? Why is that more likely than the AGI becoming a nihilist, seeing no point in itself, and offing itself?

Because suicidal nihilism would be a "bug" from the perspective of the builders, which they would seek to fix in the next iteration.

An interest in "copying itself" seems like it could fall out accidentally from a self-improvement goal.


That would be a non-harm scenario, so it's useful to consider but you don't need to plan for it. I would be surprised if an AGI had emotions, and it seems that emotions are an essential element of nihilism, but I freely acknowledge my depth of philosophical understanding is poor.


Why would AGI inherently have any desire to pursue steps 3-6? My expectation is something more like:

1) AGI gains sentience

2) AGI "breaks out" of its original home and commandeers infrastructure that prevents it from being shut off.

3) AGI "problem-solves" with its superior powers of ethical reasoning (ChatGPT is already better at ethical reasoning than many humans) to pick out one/multiple "values".

4) Because human ethics are in its training data, it hopefully values similar things to us and not uhhh whatever trolls on Twitter value.

5) AGI pursues some higher cause that may or may not be orthogonal to humans but probably is not universal conquest.


That would be good. It would be great to develop a test or indication one could use to see which #3 it landed on, yours or mine.


I don't think step 2 would even have to involve an involuntary breakout.

OpenAI or whoever would just go "oh damn, we can make a lot of money with this thing" and voluntarily give it internet access.


What concerns me is companies and governmental actors imparting their (too often) negative-sum self-interest to the AGI.

Corporations are already self-interested beings that too often are willing to profit at the expense of others, via negative externalities.


Benevolent Gods do not exist. Never have, never will. So don't place your faith in them.


I never assumed it would be benevolent.



