
Worth adding that there is no contradiction in strongly believing both 1+2 are true at the same time.

I.e., evil tech companies are just trying to maintain their control and market dominance and don't actually care or think much about AI safety, but we are nonetheless at an inflection point in history because AI will keep getting scarier.

It is totally plausible that evil tech got wind of AI safety concerns (which have been around for a decade as academic research completely divorced from tech companies) and saw adopting them as a golden win-win: embrace safety as the official mantra while what they actually care about is dominance. Not unlike how politicians will invoke a legitimate threat (e.g. China or Russia) to justify some other, unrelated harmful goal.

The result will be people in camp 2 being hella annoyed and frustrated that evil tech isn't actually doing proper AI safety and that most of it is just posturing. Camp 1, meanwhile, will dismiss anything anyone in camp 2 says, since they associate them with the evil tech companies.

Camp 1 and camp 2 spend all their energy fighting each other while both lose out to a third party. Evil tech, meanwhile, watches from the sidelines, smiling and laughing.




AI safety hasn't been divorced from tech companies, at least not from DeepMind, OpenAI, and Anthropic. They were all founded by people who said explicitly that AGI will probably mean the end of human civilization as we know it.

All three of them have also hired heavily from the academic AI safety researcher pool. Whether they ultimately make costly sacrifices in the name of safety remains to be seen (although Anthropic did this already when they delayed the release of Claude until after ChatGPT came out). But they're not exactly "watching from the sidelines", except for Google and Meta.



