> if social media offered rewards for identifying such abuse
Couldn't this also be gamed for false positives, and create an incentive for groups to form smear-for-profit entities that try to spin small communities as 'bad' to the benefit of others?
When life is a game, everything is gameable. However, humans are good at adapting, and here you are pitting one large mass of people against a much smaller mass of bad players. So whilst you may get gaming of the system, it would still stand out to the others, who could investigate.
But let us not forget that some details useful in identifying such abuse, such as the posting IP and client, may well elude public investigation because they are not publicly visible. It is these niche areas on which Twitter itself could then focus.
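To make that concrete, here is a rough sketch of the kind of internal-only check I mean; the record layout, field values, and cluster threshold are all invented for illustration:

```python
from collections import defaultdict

# Hypothetical account records: (account_id, posting_ip, client_string).
accounts = [
    ("a1", "203.0.113.7", "bot-client/1.0"),
    ("a2", "203.0.113.7", "bot-client/1.0"),
    ("a3", "198.51.100.2", "Twitter for iPhone"),
]

def suspicious_clusters(accounts, min_size=2):
    """Group accounts by (ip, client) fingerprint; unusually large
    clusters are candidates for the internal review that public
    flaggers cannot perform."""
    clusters = defaultdict(list)
    for acct, ip, client in accounts:
        clusters[(ip, client)].append(acct)
    return {k: v for k, v in clusters.items() if len(v) >= min_size}

print(suspicious_clusters(accounts))
# {('203.0.113.7', 'bot-client/1.0'): ['a1', 'a2']}
```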
However, a slightly comparable problem is wiki editing, where editors' credibility is weighted by their track record over time: we may well end up creating a hidden social media hierarchy based upon a small subset of society, with a bias in one direction or another in the judgement of what is abuse.
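A toy weighting scheme along those lines (entirely made up: the smoothing, the names, the numbers) shows both the appeal and the failure mode:

```python
from dataclasses import dataclass

@dataclass
class Reviewer:
    """Hypothetical flagger whose vote weight grows with their track record."""
    name: str
    upheld: int = 0   # past flags later confirmed as abuse
    total: int = 0    # past flags submitted

    @property
    def weight(self) -> float:
        # Laplace-smoothed accuracy: newcomers start near 0.5,
        # long-standing accurate flaggers approach 1.0.
        return (self.upheld + 1) / (self.total + 2)

def abuse_score(votes):
    """Weighted fraction of reviewers who flagged a post as abusive."""
    total = sum(r.weight for r, _ in votes)
    flagged = sum(r.weight for r, says_abuse in votes if says_abuse)
    return flagged / total if total else 0.0

veteran = Reviewer("veteran", upheld=95, total=100)
newbie = Reviewer("newbie")
print(abuse_score([(veteran, True), (newbie, False)]))  # ~0.65: veteran wins
```

The bias worry falls straight out of the arithmetic: whoever has been flagging longest holds most of the weight, so the judgement of what counts as abuse drifts toward that small subset's view.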
Totally, that would work. Letting people impose their world view upon themselves via filters, instead of imposing it upon everybody, would solve so much and tick all the free speech boxes.
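As a minimal sketch of what a purely self-imposed filter could look like (the class and the rules are hypothetical, not any platform's actual API):

```python
import re
from dataclasses import dataclass, field

@dataclass
class PersonalFilter:
    """Hypothetical per-reader filter: the rules live with the reader,
    not the platform, so nobody else's view is affected."""
    blocked_authors: set = field(default_factory=set)
    muted_patterns: list = field(default_factory=list)

    def allows(self, author: str, text: str) -> bool:
        if author in self.blocked_authors:
            return False
        return not any(re.search(p, text, re.I) for p in self.muted_patterns)

timeline = [
    ("spambot42", "BUY CHEAP FOLLOWERS NOW"),
    ("alice", "New blog post on moderation"),
]
mine = PersonalFilter(blocked_authors={"spambot42"},
                      muted_patterns=[r"buy cheap"])
print([(a, t) for a, t in timeline if mine.allows(a, t)])
# [('alice', 'New blog post on moderation')]
```

The free speech box is ticked because `allows` only ever runs against your own timeline; nobody else's view changes.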
But given the volume of spam and botnets that social media churns through, it would place a huge load upon people self-filtering.
...