
I don’t think iterations on the current machine learning approaches will lead to artificial general intelligence. I do think we’ll eventually get there, and at that point these kinds of concerns won’t matter. There is no way to defend against a superior hostile actor over the long term: we have to succeed 100% of the time, and it only needs to succeed once. It will be so much more capable than we are. AGI is likely the final invention of the human race. I think it’s inevitable; it’s our fate, and we are running towards it. I don’t see a plausible alternative future where we can coexist with AGI. Not to be a downer and all, but that’s likely the next major step in the evolution of life on earth: evolution by intelligent design.



You assume agency, a will of its own. So far, we've proven it is possible to create (apparent) intelligence without any agency. That's philosophically new, and practically perfect for our needs.


As soon as it's given a task, though, it's off to the races. I'm no AI philosopher, but it seems like while it can currently handle "what steps will I need to take to start a paperclip manufacturing business", someday it will be able to handle "start manufacturing paperclips", and then who knows where it goes with that.


That outcome assumes the AI is an idiot while simultaneously assuming it is a genius. The world being consumed by a paperclip-manufacturing AI is a silly fable.


I am more concerned about supposedly non-hostile actors, such as the US government.


Over the short term, sure. Over the long term, nothing concerns me more than AGI.

I’m hoping I won’t live to see it. I’m not sure my hypothetical future kids will be as lucky.


Did you see that Microsoft Research claims that it is already here?

https://arxiv.org/pdf/2303.12712.pdf


As they discuss in the study, it depends on the definition of AGI; GPT-4 is not an AGI if the more stringent definitions are used.


> There is no way to defend against a superior hostile actor

That's part of my reasoning. That's why we should make sure that we have built a non-hostile relationship with AI before that point.


Probably futile.

An AGI by definition is capable of self-improvement. Given enough time (maybe not even that much time), it would be orders of magnitude smarter than us, just like we're orders of magnitude smarter than ants.

It might keep us as pets for a time, like an ant farm, but just as you no longer have the ant farm you had as a child, it will outgrow us.


Maybe we’ll get lucky and all our problems will be solved using friendship and ponies.

(Warning: this is a weird read; George Hotz shared it on his Twitter a while back)

https://www.fimfiction.net/story/62074/friendship-is-optimal


> An AGI by definition is capable of self-improvement.

Just because you can imagine something and define it as having magic powers doesn't mean those magic powers can actually exist in real life.

Are you capable of "self-improvement"? (In this AGI sense, not meant as an insult.)


... what? We humans are capable of self-improvement, but we’re also a kludge of biases through which reason has miraculously found a tiny foothold.

We’re talking about a potential intelligence with none of our hardware limitations or baggage.

Self-improve? My brother in Christ, have you heard of this little thing called stochastic gradient descent?


> We humans are capable of self-improvement

No, you're capable of learning things. You can't do brain surgery on yourself and add in some more neurons or fix Alzheimer's.

What you can do is have children, which aren't you. Similarly, if an AI made another, bigger AI, that might be a "child" and not "them".

> We’re talking about a potential intelligence with none of our hardware limitations or baggage.

In this case the reason it doesn't have any limitations is because it's imaginary. All real things have limitations.

> Self-improve? My brother in Christ, have you heard of this little thing called stochastic gradient descent?

Do you think that automatically makes models better?
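
To make that concrete, here is a minimal sketch of plain SGD (the toy model and numbers are made up for illustration). It tunes the parameters of a fixed model toward a fixed objective on given data; it doesn't redesign the architecture, pick the objective, or gather the data on its own:

    # Minimal illustrative sketch: SGD fits a fixed one-parameter model
    # y = w * x to fixed data; the loop only adjusts w, nothing else.
    import random

    def sgd(params, grad_fn, lr=0.01, steps=2000):
        for _ in range(steps):
            grads = grad_fn(params)
            params = [p - lr * g for p, g in zip(params, grads)]
        return params

    # Toy data generated by y = 2 * x; squared-error loss (w*x - y)^2.
    data = [(x, 2 * x) for x in range(10)]

    def grad_fn(params):
        (w,) = params
        x, y = random.choice(data)       # "stochastic": one random sample per step
        return [2 * (w * x - y) * x]     # d/dw of (w*x - y)^2

    print(sgd([0.0], grad_fn))           # w converges toward 2.0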


>> We humans are capable of self-improvement

> No, you're capable of learning things. You can't do brain surgery on yourself

What principle do you have for defining self-improvement the way that you do? Do you regard all software updates as "not real improvement"?

> All real things have limitations.

Uh, yep, that doesn't mean it will be as limited as us. To spell it out: yes, real things have limitations, but limitations vary between real things. There's no "imaginary flawless" versus "everything real has exactly the same amount of flawed-ness".


> What principle do you have for defining self-improvement the way that you do? Do you regard all software updates as "not real improvement"?

Software updates can't cause your computer to "exponentially self-improve", which is the AGI scenario. And giving the AI new software tools doesn't seem like an advantage, because those are tools humans could also use rather than an improvement to the AI "itself".

That leaves whatever the AGI equivalent of brain surgery or new bodies is, but then, how does it know the replacement is "improvement" or would even still be "them"?

Basically: https://twitter.com/softminus/status/1639464430093344769

> To spell it out: yes, real things have limitations, but limitations vary between real things.

I think we can assume AGI can have the same properties as currently existing real things (like humans, LLMs, or software programs), but I object to assuming it can have any arbitrary combination of those things' properties, and there aren't any real things with the property of "exponential self-improvement".


Why do people use the phrase 'My brother in Christ' so often all of a sudden? Typically nonbelievers and the non-observant.


Perhaps we will be the new cats and dogs: https://mitpress.mit.edu/9780262539517/novacene/


Right now AI is the ant. Later we'll be the ants. Perfect time to show how to treat ants.


Right now the AI is software doing matrix multiplications, and we are interpreting the result of that computation.
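
A minimal sketch of that point (hypothetical toy network; the labels are made up): the program only multiplies matrices, and any meaning attached to the output is supplied by us.

    # Minimal toy "model": two matrix multiplications and a ReLU.
    # The weights are random; the interpretation of the output is ours.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=(1, 4))                    # input features
    W1 = rng.normal(size=(4, 8))
    W2 = rng.normal(size=(8, 3))

    logits = np.maximum(x @ W1, 0) @ W2            # the actual "AI": matmuls
    probs = np.exp(logits) / np.exp(logits).sum()  # softmax over 3 outputs

    labels = ["cat", "dog", "paperclip"]           # meaning added by humans
    print(labels[int(probs.argmax())])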


Assuming alignment can be maintained


Well, the guys on 4chan are making great strides toward a, uh, "loving" relationship.


I can be confident we’ll screw that up. But I also wouldn’t want to bet our survival as a species on how magnanimous the AI decides to be towards its creators.


It might work, given how often "please" works on us and therefore shows up in the training data, but it certainly isn't guaranteed.


AGI is still just an algorithm, and there is no reason why it would "want" anything at all. Unlike perhaps GPT-*, which at least might pretend to want something because it is trained on text based on human needs.


AGI is a conscious, intelligent alien. It will want things the same way we want things. Different things, certainly, but some common ground is likely too.

The need for resources is expected to be universal for life.


For us, the body and the need-driven parts of the brain were there first, and the modern brain is in service to them. An AI is just the modern brain. Why would it need anything?


It’s an intelligent alien, probably, but let’s not pretend the hard problem of consciousness is solved.


The hard problem of consciousness is only hard when you look at it running on meat hardware. In a computer system we'll just go "that's the simulation it's currently executing" and avoid saying that differences in consciousness exist.


What these guys are talking about is:

“intelligent alien might decide to kill us so we must kill them first”

vs “can you please cut out that clinical paranoia”


Except we have so many companies trying to create them.


Sure, right now it doesn't want anything. We could still give it the benefit of the doubt and feed the training data with examples of how to treat something you believe to be inferior. Then it might test us the same way later.


Basically solved.

> Be friendly.



