They basically defined AI safety as "AI shouldn't say bad words or tell people how to do drugs" instead of actually making sure that a sufficiently intelligent AI doesn't go rogue against humanity's interests.
Sure, they might, but what you see in practice in GPT, and what Sam discusses in interviews, is mostly the "AI shouldn't say uncomfortable things" version of AI "safety".
How do you mean? I don't see what OpenAI has in common with Catholicism or motherhood.