
I lost all interest in every "Stop" argument when Elon Musk started a company to develop AGI during the very pause he signed on to and promoted. Yann LeCun said this was just an attempt to seize market share among the big companies, and he couldn't have been more right.



That is transparently what this letter is for most of its signatories. They lost the race (in one way or another) and they want time to catch back up. This is as true for the academics who made big contributions in the past but missed out on the latest stuff as it is for Elon Musk.


The joke will be on them when our open-source powered mechwarriors made of bulldozer parts come for them.


The joke will be on us all when your open-source powered mechwarriors start destroying dams and power plants and even take a stab at launching some nukes, because the models figured this is the most effective way to "get them", and weren't aligned enough to see why they should take a different approach.


While my tongue was firmly in my cheek, and still is somewhat, what's to stop (for example) Boston Dynamics and OpenAI collaborating and beating us to your nightmare scenario? And is that a situation you would prefer?


Of course not. I'm leaning towards Yudkowsky's recommendation, which is to put a pause on all of it.


I have a bridge to sell Yudkowsky and everyone else who believes that an "AI pause" is actually something that can happen.

Global treaties pretty much all rest on implied threats of violence, and we don't have enough guns to force this across India, China, the Middle East, the EU, and the US. Major AI development happens in all of those places.


You're reading this wrong. Yudkowsky isn't saying that this will happen; in fact, it's apparent to him as much as to everyone else how unlikely that is.

Yudkowsky's point is that this is the minimum, least difficult thing we need to do. LeCun and others on both sides of the "AI regulation" argument are busy arguing over the color of the carpet while the whole house is on fire.


I think we all also fundamentally disagree about whether the house is on fire at all. History is littered with examples of Luddites yelling about how the bottom of a technological S-curve is actually an exponential and how we Must Do Something to prevent a catastrophe.

With possibly one exception (biological weapons, research on which seems to have few positive externalities) they have always been wrong. I don't count nuclear weapons here: we are seeing significant negative societal fallout from failing to invest in nuclear technology. So no, the house is almost certainly not on fire.


> History is littered with examples of Luddites yelling about how the bottom of a technological S-curve is actually an exponential and how we Must Do Something to prevent a catastrophe.

I can't think of any such examples; do you have some?

Of the two that come to my mind:

- The only thing the Luddites were yelling about was having their livelihoods pulled out from under them by greedy factory owners who aggressively applied automation instead of trying to soften the blow. They weren't against technological progress; the infamous destruction of looms wasn't a protest against progress, but a protest against the treatment of the laboring class.

- The "Limits to Growth" crowd, I don't think they were wrong at all. Their predictions didn't materialize on schedule because we hit a couple unexpected technological miracles, the Haber–Bosch process being the most prominent one, but the underlying reasoning looks sound, and there is no reason to expect we'll keep lucking into more technological miracles.


Lots of people recommend a pause, but Yudkowsky explicitly argues that a pause is entirely inadequate and recommends an extreme anti-AI-research global crusade, so it's kind of odd to point to him the way you have as a pause-recommender.



