
The author claims that we are between the third and fifth points in the following list:

> i. Safety alarmists are proved wrong.

> ii. Clear relationship between AI intelligence and safety/reliability.

> iii. Large and growing industries with vested interests in robotics and machine intelligence.

> iv. A promising new technique in artificial intelligence, which is tremendously exciting to those who have participated in or followed the research.

> v. The enactment of some safety rituals, whatever helps demonstrate that the participants are ethical and responsible (but nothing that significantly impedes the forward charge).

> vi. A careful evaluation of seed AI in a sandbox environment, showing that it is behaving cooperatively and showing good judgment.

Have we really gone past the first point? After decades of R&D, driverless cars are still not as safe as humans in all conditions. We have yet to see the impact of generative AI on the intellectual development of software engineers, or to what extent it will exacerbate the "enshittification" of software. There's compelling evidence that nation states are trusting AI to identify "likely" terrorists who are then indiscriminately bombed.




The abridged summary here elides that point i is about a history of claims of intolerable harm being proved wrong, not a claim that every alarm has already been proved wrong. In this frame, too many people kept raising alarms equivalent to "cars with driving assistance will cause a bloodbath" which then failed to come to pass; it isn't saying there are no safety alarmist claims left about what could be coming next as the technology changes.

Keeping it focused on AI: every release of a text, image, or voice generator has come with PR, delays, news articles, and discussion about how dangerous it is and how we need to hold it back. Three months after release, politics hasn't collapsed under a tenfold increase in fake news, online discussion boards are still as (un)usable as they were before, art is still a thing people do, and so on. That doesn't mean there are no valid safety concerns, just that the alarmist track record isn't particularly compelling to most people while the value of the tools continues to grow.


> Have we really gone past the first point?

I think it will always depend on who you ask, and on whether they're arguing in bad faith:

"Sure, the sentry bot can mistakenly shoot and kill its own owner and/or family, but only if they're carrying a stapler. Like, who even uses a stapler in this day and age?"



