> It's on the AI safety folks to keep up with building good analyses for SoTA. Suspending all future improvements to object detection until we can better understand fault models of existing systems could very well make everyone less safe.

Sure. That's certainly the responsibility of AI safety folks. The only problem is that there are too few of them relative to people trying to make AI more capable. How do we get society to agree to funnel more resources into AI safety?

> Slowing down doesn't address that problem at all. The answer here is regulation and accountability, which may or may not have the side-effect of slowing down deployments.

Despite the article's snappy title, I don't think your viewpoint and the article's (or at least Ding's) are really that far apart. Ding isn't saying that slowing down is intrinsically good, but rather, in his words:

> If you’re a tech company, if you’re a policymaker, if you’re someone who wants your country to benefit the most from AI, investing in safety regulations could lead to less public backlash and a more sustainable long-term development of these technologies

which sounds pretty similar to what you're saying. Yes, a likely outcome of that is slowing down AI development, but slowing down is not the goal per se.

The gist of it is that we have a social problem: how do we coordinate to make sure we develop AI safely? A free-for-all arms race where everyone cares only about the next shiny capability seems genuinely dangerous. Maybe the answer is regulation, but even then there are all sorts of questions about enforcement regimes and the like.




> How do we get society to agree to funnel more resources into AI safety?

Regulation and liability.

> Despite the article's snappy title, I don't think your viewpoint and the article's (or at least Ding's) are really that far apart. Ding isn't saying that slowing down is intrinsically good, but rather, in his words:

Perhaps you're correct. In that case, though, the author's editor nuked the piece's credibility from orbit with the headline, and the article is "defund the police doesn't mean defund the police" levels of PR idiocy.


> Regulation and liability.

How do you coordinate across countries (for AI in particular, between China and the U.S.)? If you lose that coordination, you're back to all the classic problems of the prisoner's dilemma.
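
A toy payoff matrix makes the coordination failure concrete. A minimal sketch in Python, with made-up payoff numbers chosen only to satisfy the dilemma's T > R > P > S ordering (the framing of "invest in safety" vs. "race ahead" is an illustration, not anything from the article):

    # Two-country "AI race" framed as a prisoner's dilemma.
    # Payoff numbers are illustrative only; what matters is the
    # ordering T (5) > R (3) > P (1) > S (0).
    COOPERATE, DEFECT = "invest in safety", "race ahead"

    # payoffs[(my_move, rival_move)] = my payoff
    payoffs = {
        (COOPERATE, COOPERATE): 3,  # R: both develop AI carefully
        (COOPERATE, DEFECT):    0,  # S: I'm careful, rival races ahead
        (DEFECT,    COOPERATE): 5,  # T: I race ahead, rival is careful
        (DEFECT,    DEFECT):    1,  # P: reckless free-for-all
    }

    def best_response(rival_move):
        """My payoff-maximizing move, given the rival's move."""
        return max((COOPERATE, DEFECT), key=lambda m: payoffs[(m, rival_move)])

    # "Race ahead" is the best response to either rival move, so without
    # an enforcement mechanism both sides defect -- even though mutual
    # cooperation (3, 3) beats mutual defection (1, 1).
    for rival_move in (COOPERATE, DEFECT):
        print(f"If rival chooses '{rival_move}', best response: '{best_response(rival_move)}'")

That's the force of the enforcement question: regulation changes the payoffs only for whoever is bound by it, so without cross-border agreement each side's dominant strategy is still to race.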



