You could assume that your commenter didn't read the whole line, or you could try to understand that what they're asking is why you think the lack of ethics enforcement in a text-generating model means the world is ending.
Personally, my take is that the lack of ethics enforcement demonstrates that whatever methods we have for controlling or guiding an LLM break down even at the current level. OA has been grinding on adversarial examples for something like half a year at this point, and jailbreak prompts are still coming out. Whatever they thought they had for safety clearly doesn't work, so why would we expect it to work better as AIs get smarter and more reflective?
I don't think the prompt moralizing these companies are doing right now is in any sense critical to safety. But the fact that they cannot avoid painfully embarrassing themselves, no matter what they try, speaks against the prospect of these methods scaling to bigger models; they can't even control what they have right now.
LLMs right now have a significant "power overhang" relative to our ability to control them, and focusing on bigger, better models will only exacerbate it. That's the safety issue.