People frequently cause harm by using language. Being cautious about something that could potentially generate orders of magnitude more targeted harmful language seems reasonable.

In general, when people working full-time on a technology think it's dangerous and you don't see why, it's best to assume that they've spent a lot more time thinking of ways it could go wrong than you.




> In general, when people working full-time on a technology think it's dangerous and you don't see why, it's best to assume that they've spent a lot more time thinking of ways it could go wrong than you.

Or they could explain it to us in a way that's understandable. I see no reason to give them the benefit of the doubt when they've already thrown away so much goodwill.


Sometimes this works. It was pretty easy to explain what can go wrong with nuclear weapons in a way the public could understand.

Sometimes it doesn't. People have, for decades, expressed concerns about the risks of gain-of-function research on viruses, and approximately 0% of the public understood it until last year.

Many people now think Facebook is harmful, and a few predicted as much 15 years ago. Facebook went right ahead anyway, and we now know those warnings were valid. Would you have argued with the "Facebook will become harmful" people 15 years ago, demanding they explain exactly what the harms would be? If so, you'd have ended up on the wrong side of history.


Sorry to reply so late to this.

Your points are entirely valid, but in this instance OpenAI - to the best of my knowledge - hasn't actually stated what any of the potential negative consequences are. I realize it's possible that they truly have uncovered some magic mystery, but why would they be the only ones to realize it when others are working on similar problems? And more importantly, why can't anyone articulate what these dangers are? I have heard a few scare-mongering arguments which sound like they were written for clickbait, but nothing substantial.

I realize you used to work/collaborate there, so I'm sure you were exposed to these ideas and I respect your view, but my frustration is that you still aren't stating what the bad consequences are. Is it really so nefarious that even mentioning it would trigger some sort of scourge upon humanity? I just don't buy it, especially considering OpenAI's managerial history (looking from the outside in).


That's like deferring to people who work in the tobacco industry about what's dangerous or not with cigarettes.

Also, many of them are still under the impression they're making the world a better place at Facebook and Twitter. So no, let's not pretend technologists know what's best. And they don't understand language and society better than, say, George Carlin. They only think they do.


> That's like deferring to people who work in the tobacco industry about what's dangerous or not with cigarettes

It's not a symmetric bias. If someone selling you tobacco or ad spam tells you it's safe, you could reasonably be skeptical. If that same person voices specific concerns, those concerns are more notable for coming from them.


Except these aren't specific concerns; it's just a generic "it's dangerous, we need to control it". Considering that they said the same about GPT-2 (and its release ended up doing ... nothing), I think there's good reason to suspect bias, because OpenAI being the gatekeeper is profitable for them.


Tobacco isn't the best example.

Imagine a company that gatekeeps a language model and is staffed with Creationists or Scientologists. Is it notable if they define certain output as dangerous? No. Should you be skeptical? Yes. It's the same if they're staffed with Wokeists. In either case they're defining what's "dangerous" according to their religion or ideology.



