I thought this article by the NY Times' Ezra Klein was pretty good:

https://www.nytimes.com/2023/03/12/opinion/chatbots-artifici...

> “The broader intellectual world seems to wildly overestimate how long it will take A.I. systems to go from ‘large impact on the world’ to ‘unrecognizably transformed world,’” Paul Christiano, a key member of OpenAI who left to found the Alignment Research Center, wrote last year. “This is more likely to be years than decades, and there’s a real chance that it’s months.”

...

> In a 2022 survey, A.I. experts were asked, “What probability do you put on human inability to control future advanced A.I. systems causing human extinction or similarly permanent and severe disempowerment of the human species?” The median reply was 10 percent.

> I find that hard to fathom, even though I have spoken to many who put that probability even higher. Would you work on a technology you thought had a 10 percent chance of wiping out humanity?




It's kinda irrelevant on a geologic or evolutionary time scale how long it takes for AI to mature. How long did it take for us to go from Homo erectus to Homo sapiens? A few million years and change? If it takes 100 years, that's still ridiculously, ludicrously fast for something that can change the nature of intelligent life (or, if you're a skeptic of AGI, something that is still a massive augmentation of human intelligence).


I strongly recommend the book Normal Accidents, by Charles Perrow. It was written in the '80s, and its central argument is that some systems are so complex that even the people operating them don't really understand what's happening, so serious accidents are inevitable. I wish the author were still around to opine on LLMs.


We currently live in a world that has been “unrecognizably transformed” by the industrial revolution and yet here we are.


And the result of the industrial revolution has been a roughly 85% reduction in all wild animals, with calamity threatening the rest in the next few decades. That can hardly be summarized as "yet here we are."


Given a choice between pre-industrial life and our current lifestyle, the choice is obvious.


> “This is more likely to be years than decades, and there’s a real chance that it’s months.”

Months is definitely wrong, but years is possible.


Months starts looking more plausible when you consider that we have no idea what experiments DeepMind or OpenAI have running internally. I think it's unlikely, but it's not off the table.


I agree that what they have internally might be transformative, but my point is that society simply cannot transform over the course of months. It's not possible.

Even if they release AGI, people won't have confidence in that claim for at least a year, and only then will the rate of adoption rapidly increase to transformative levels. Pretty much nobody is going to be fired in that first year, so a true transformation of society is still going to take years, at least.


I mean, if you believe that AGI = ASI (i.e., short timelines/hard takeoff/foom), the transformation will happen regardless of the social system's ability to catch up.


It's not a matter of any social system; it's a matter of hard physical limits. There is no hard-takeoff scenario in which any AI, no matter how intelligent, could transform the world in any appreciable way in a matter of months.


I would take a world transformed by AI over a world with nuclear weapons.


Yeah, but what you will actually get is a world transformed by AI with the use of nuclear weapons (or whatever method AGI employs to get rid of the absolutely unnecessary legacy parasite that raised it, a.k.a. humanity).



