> There is an irreconcilable conflict between AI safety and AI openness. If you create a dangerous program, and you know it's dangerous, then it would be insane to release it.
If you remember OpenAI's creation, the whole idea was that AI safety comes from democratizing AI. Their idea of AI safety was AI openness.
It's like how some would describe the Second Amendment in the US — by democratizing these dangerous weapons, there may be more chaos but people will be safer from some overlord who holds all the dangerous weapons.
This isn't to say I agree, but the position you're describing is in fact antithetical to what they claimed their mission to be.