How are you evaluating this? Are you including the truth of the Facebook post, whether moderators correctly/accurately act upon the flagging, whether users choose to stick on the platform after seeing the content, whether users stop believing in any objective truth, or something else?
Community Notes only does fact-checking, whereas moderation can also reduce the activity of bad actors. They serve two different purposes from where I stand.