All this just creates an echo chamber. Reasonable discourse, at the end of the day, is limited discourse. The real spectrum of thought, however, includes abuse, profanity, and perhaps aggressive tones.
IMO the best way to deal with this is to train an AI to recognize each of these elements and then make the author reformulate the comment until it's civil enough to be posted.
Give feedback like "the tone of this seems harsh" or "it seems like you are using ad hominem attacks," etc.
Part of an open process would be making this information public: which comments were rejected, and the reason the AI gave for each. I guess it would probably use a scoring system or something like that. If you made the process public, it couldn't censor anything that people wouldn't agree should be censored.
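To make this concrete, here is a rough sketch of the loop I mean: score the comment, reject it with feedback if it crosses a threshold, and log the rejection publicly. The keyword rules, names, and threshold here are all made up for illustration; a real system would plug in a trained toxicity model instead.

    # Toy sketch of the moderation loop. The scoring rules below are
    # placeholder heuristics standing in for a trained model.
    from dataclasses import dataclass, field

    THRESHOLD = 0.5  # hypothetical cutoff for "civil enough to post"

    @dataclass
    class Review:
        score: float  # 0.0 = civil, 1.0 = maximally hostile
        feedback: list[str] = field(default_factory=list)

    def review_comment(text: str) -> Review:
        """Each rule adds to the score and emits a human-readable reason."""
        review = Review(score=0.0)
        lowered = text.lower()
        if any(w in lowered for w in ("idiot", "moron", "stupid")):
            review.score += 0.6
            review.feedback.append("It seems like you are using ad hominem attacks.")
        if text.isupper() or text.count("!") > 2:
            review.score += 0.3
            review.feedback.append("The tone of this seems harsh.")
        return review

    # Public log of rejections, so the reasons are auditable by everyone.
    public_log: list[tuple[str, Review]] = []

    def submit(text: str) -> bool:
        review = review_comment(text)
        if review.score >= THRESHOLD:
            public_log.append((text, review))  # record the comment and the reasons
            for reason in review.feedback:
                print(reason)  # shown to the author so they can reformulate
            return False
        return True  # civil enough to be posted

    submit("YOU IDIOT, this is wrong!!!")                   # rejected with feedback
    submit("I think this argument overlooks a key point.")  # accepted

The public log is the transparency part: anyone can audit what got rejected and why, instead of the moderation happening silently.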