
Many AI researchers think our current trajectory toward AGI is one of creating our misaligned overlords and ending humanity as we know it. If that doesn't change, you bet your ass a significant faction of humanity will go to war against anyone getting close to AGI, with AI researchers assassinated the way nuclear scientists were in Iran.



Eh, I’m not too worried. The current ML approaches are extremely unlikely to produce an AGI. We will at best end up with specialist systems.


Current approaches have already given us GPT-3 (most people on HN are still unaware of how much GPT-3 output gets posted here, and can't tell the difference), DALL-E, and Copilot. Fifteen years is quick, and 15 years ago there were no iPhones. Given the computing and societal shift of the last 15 years, what the next 30 will bring is scarcely imaginable.


Gato already achieved AGI months ago, with a transformer model and a laughably low parameter count.

Transformers and scaling are all you need. It's only a matter of time before ML models reach human and then superhuman generalization.
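
For anyone unfamiliar: the core of a transformer is just scaled dot-product attention, repeated and scaled up. A minimal sketch in Python/NumPy (all names are mine, illustrative only, not any particular model's implementation):

    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        # softmax(Q K^T / sqrt(d_k)) V: mix the values V, weighted by
        # how similar each query is to each key.
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
        return weights @ V

    # Toy example: 4 tokens with 8-dimensional embeddings.
    rng = np.random.default_rng(0)
    x = rng.normal(size=(4, 8))
    Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
    out = scaled_dot_product_attention(x @ Wq, x @ Wk, x @ Wv)
    print(out.shape)  # (4, 8)

Roughly speaking, everything else in GPT-3, Gato, etc. is this block plus feed-forward layers, residual connections, and an enormous parameter count.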


Which creates a strong incentive not to tell anyone you're close to AGI, and even to actively deny it when you are.


I know this is a popular opinion among certain bloggers, but this presumes that a significant faction of humanity takes these blogs you read as seriously as you do.


It's not about if or when people take these blogs seriously. It's a race between the creators of AGI and the moment the idea becomes mainstream enough for a popular politician to make it the cause du jour and rally the populist base against an existential threat.


If it really is an existential threat, why didn't a politician rally the population against AI in 1984, after The Terminator and WarGames came out?

I guess because nobody would take them seriously and they'd look ridiculous?

You're positing that at some point this is going to change, but I'm not seeing how.

Nuclear weapons are already an existential threat, and I don't see anyone rallying against them.


There was no GPT-3 or DALL-E in 1984, and no way to viscerally convey the (presumed) capabilities of such systems to the average person. In practice, the average person is so disempowered that nothing will change. We will step into the transhuman era with utmost vanity.


I'm sure a malicious programmer could do a great deal of harm with GPT-3 or DALL-E, but I'm still not seeing how these programs suggest that computers are going to take over the world. At some point it simply becomes an act of faith to assume technology will progress to the point where robots can self-replicate, achieve sentience, and achieve autonomy from humanity. Philip K. Dick wrote a science fiction short story with exactly that premise, "Second Variety", in 1953.

It's not a new idea, and it doesn't seem to be subject to evidence, because you can't really prove a negative, can you? If the breakthrough that would let machines take over the world is never going to happen, there's no way to prove it.


"Thou shalt not make a machine in the likeness of a human mind." -- chief commandment of the Orange Catholic Bible


The alternative scenario is that an AGI will be in charge of a corporation like Amazon, and hundreds of millions of people will be subservient to it.



