Why do companies with unbounded resources still have terrible moderation? (moultano.wordpress.com)
172 points by moultano on Oct 4, 2019 | 146 comments



There are two classes of users: normal people who want to use your service and people who actively weaponize it for their own ego. Those latter users have all the time in the world and can defeat any attempt to stop them. This is because, online, time is the only resource that matters. You have a limited amount of this resource. Your company has a limited amount of this resource. Your normal users have a limited amount of this resource.

There is a discussion below about Flat Earthers having every right to speak -- and I agree. And in fact, I doubt anyone would truly care about Flat Earthers having their Flat Earth forum or even adding an occasional Flat Earth comment whenever NASA posts a round Earth photo. That's never been the problem. The problem is that some users don't stop there -- they're on a mission to change the world. They're going to abuse your platform to the extreme to promote their views. Most big platforms right now have a huge mess of people tossing free bytes at each other.

If a small group of dedicated people wanted to ruin Hacker News for the rest of us, I have no doubt in my mind that they could. No amount of moderation could stop them. I've seen it first hand a few times with other services.


>They're going to abuse your platform to the extreme to promote their views.

Facebook and Twitter _encourage_ people to abuse their platform. That's why you can write 100 tweets from one account in a single day and they don't try to stop you at all: they know all that content will show up in their statistics, and it will also make other users reply to those comments and so on. In the end they have every incentive to keep their platform filled with people enraged at each other.

>If a small group of dedicated people wanted to ruin Hacker News for the rest of us, I have no doubt in my mind that they could.

I'm not so sure. Unlike Facebook, HN is not for-profit (Y Combinator is, Hacker News is not), so something as simple as only allowing comments from user accounts created before the attack started would mitigate the problem until a better solution is slowly deployed.

Also, Facebook cannot ban an IP because other non-attacking users may be using that very same IP. A little site like HN loses nothing by banning offending IPs; a site like Facebook would get a gigantic backlash that would hurt the bottom line: their valuation.


IP banning is not effective in an era of disposable VPNs and Tor.

I mean you could try to keep up with Tor exit nodes and VPN gateways but pretty soon you'd be banning all of Azerbaijan or something.


A lot of users also post from mobile devices, which are usually behind carrier-grade NAT, which tends to swap addresses very, very often.


IP banning lost a lot of its effectiveness when ISPs started putting people behind NATs and recycling their IPs (which was a long time ago), and lost even more with the advent of smartphones and mobile Internet connections.


Yeah, I stand corrected


Yeah, and regardless of their effectiveness, there are people who use dynamic IPs (not by choice).


IP banning for new accounts, together with creation-time exceptions, seems pretty good though: almost all existing users would be unaffected. It only costs you growth until you find some extras to put into the account creation process.
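As a rough sketch of what that could look like (all names invented, not any real HN mechanism), an IP ban would only apply to accounts created after the ban was issued:

    ip_bans = {}  # ip -> datetime the ban was issued

    def may_comment(account_created_at, ip):
        """True if a commenter on this IP may post, given any active IP ban."""
        banned_at = ip_bans.get(ip)
        if banned_at is None:
            return True
        # Creation-time exception: accounts older than the ban are unaffected,
        # so only throwaway accounts created during the attack get caught.
        return account_created_at < banned_at

Existing users on a shared or recycled IP never notice, while fresh throwaway accounts from that IP do.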


Which "extras", though?


You charge people 10bux. Something Awful is still chugging away on that model.


For a given value of Chugging Away, sure.


>a little site like HN loses nothing by banning offending IPs

They would gain nothing either, since you get a new IP address every time you reset your router or connect to 4G.


No, something would be gained. It would increase the energy required to continue their campaign. There are many people who won't put forth that energy.


If you haven't experienced it yourself it can be difficult to grasp exactly how much energy people will put towards harassment. Coupled with infinite creativity and no shame even a single person can do a lot of damage.

My original point is the equation is actually the other way around; mitigations on attacks will ultimately require more energy than the attacks themselves. For you it's exhausting, for them it's exhilarating.


The latter type, who gain an outsized dopamine hit from whatever game your platform is, tend to be what keeps your platform going and drives your revenue. It's just that some of them you like. There's a big drive to optimize your platform for those kinds of lowest-common-denominator users until it is ruined and worthless. Just A/B test to maximize some metric and it's sure to happen.

You can fight back by intentionally disadvantaging the top percentiles of an important metric, finding the few most common complaints about what you are and making addressing them core values, and limiting the amount of effort you put into being generally appealing.


They couldn't ruin HN because of the system of voting, flagging, and high karma requirement for downvoting.

And this applies in general: the only moderation approach that works is distributing the effort among users who are interested in the final result.

Ideally big sites would not have one single system for moderation, but multiple lists, so that if two groups keep flagging each other they get separated and stop seeing each other's posts, instead of one group having to leave the site.
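A minimal sketch of that separation idea (hypothetical data model; how users get assigned to groups in the first place, e.g. by clustering their flagging patterns, is the hard part and is left out here):

    from collections import defaultdict

    # (flagging group, flagged group) -> number of flags observed
    flag_counts = defaultdict(int)

    def record_flag(flagger_group, author_group):
        flag_counts[(flagger_group, author_group)] += 1

    def mutually_hostile(a, b, threshold=50):
        return (flag_counts[(a, b)] >= threshold and
                flag_counts[(b, a)] >= threshold)

    def visible_to(author_group, viewer_group):
        # Instead of banning either side, just stop showing the two groups
        # to each other once the flagging is mutual and sustained.
        return not mutually_hostile(author_group, viewer_group)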


You haven't seen how the bots have ruined similar subreddits. They either hijack old accounts with karma, or they create accounts and age them manually, farming karma by reposting content or hopping onto a repost and reposting the top comment from the original post.

Being able to sway opinion is big business for them, and they are always coming up with new ways to do it while seeming genuine.


Good point about bots. I'm happy that HN has separate protection against them and against other methods of self-upvoting.


I don't know.

HN has plenty of misinformation posted on it constantly. Countless posts that are objectively wrong, and in many cases it is done intentionally. I have seen lies supporting eugenics, lies about how quantum computing is a fake conspiracy, etc etc. And yet the mods only jump in when you're impolite by referring to OP with "you" and "you're" because it is a "personal attack". They are optimizing on the wrong problem, IMO. I would rather discuss with people who are snide rather than with liars who have an agenda.

HN has its own "fake news" problems. And this is the society we are living in; you can say anything as long as it doesn't offend or hurt someone's feelings. Even if it is objectively fake horse shit.


In your example, if someone pointed out that quantum computers require some equations to hold with far greater precision than anything yet measured, so there is a small but non-zero probability that quantum computers will turn out not to work, would that be a lie that moderators should delete?

I believe it's not the moderators' job to decide what is a lie and what is not. If you see a lie, write a truthful comment refuting it; that way you can convince people instead of trying to silence them.


> I believe it's not the moderators' job to decide what is a lie and what is not

codesushi42 didn't say it was. They said the system you describe still allows lies, myths, and misinformation to spread. The only mention of moderators was in the context of chastising the poster who calls the misinformation out.

> if you see a lie, write a truthful comment refuting it; that way you can convince people instead of trying to silence them

The tools you referred to are tools to silence, so you are "trying to silence" every time you flag or downvote a post. As for refuting the misinformation, I regularly see posts doing this downvoted while the misinformation is upvoted[1]. This is particularly true in discussions on contentious topics. Thus, I'm not nearly as certain as you are that these tools would protect HN from a sufficiently dedicated group.

[1] At least once, because I downvote it and the comment remains black.


> The only mention of moderators was in the context of chastising the poster who calls the misinformation out.

Obviously we don't chastise users for correcting misinformation. We chastise them for breaking the site guidelines. How could we do otherwise?

There's an important insight here. Correcting misinformation is when it's most critical to follow the site guidelines. That's not so easy, because misinformation can be irritating and even triggering. But if you react to it with name-calling, flamewar, personal attack or things like that, then you discredit your comment and—if your position is correct—you discredit the truth. Then you're not only making HN worse, you're making the world worse.

The thing to try to remember is that while you may not owe the other commenter any better, you owe the community better, and you owe the truth better. The one who knows more has more responsibility.

https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

https://news.ycombinator.com/newsguidelines.html


Exactly this. HN is a cesspool of misinformation. Whether it comes from people pushing an agenda, or just plain ignorance. I take everything said here with a grain of salt, and fact check everything. Because I have seen some very creative and convincing misinformation here, that when researched doesn't hold up under scrutiny.

My point is that people casually browsing this place are going to be infected by the misinformation and spread it to others. All while the mods overfit on whether someone's response wasn't polite enough, which in a lot of cases is biased to begin with.

Stack Exchange is a much, MUCH more reliable source of information than the horse shit I see pranced around here on a daily basis. Their moderation system actually works because it empowers knowledgeable users.


I don't find this to be the case. Usually, when someone is wrong, they are quickly corrected. So when reading a thread, I rarely come away with an incorrect belief.

Maybe some example would help me understand what you mean. Could you link to some clear misinformation on HN that wasn't convincingly refuted elsewhere in the thread?


Plenty of examples here, even with people coming out of the woodwork to support eugenics under false pretenses: https://news.ycombinator.com/item?id=20542738

Look up some accounts (jlawson) and you will find they consistently post right wing BS here.

Or just search HN for eugenics and see all the crazy claims made...

Plenty of other examples come from ignorance, like someone giving a long explanation of the thermodynamics behind the efficiency of steam engines which sounded entirely convincing, but was complete utter horse shit once researched, and no one refuted it. But it was upvoted plenty.


I looked in the places you suggested but none of them seem like cesspools of misinformation.

Political opinions from across the right-left spectrum, if presented in a civil and substantive way and without fake facts, are not against the rules here. Extreme opinions tend to get downvoted pretty quickly, though.


I guess you didn't look hard enough. Because the top comment is a lie:

If you outlaw this, people like me (who think inflicting stupidly and ill-health on their children by withholding these technologies is morally wrong) will just fly to Singapore. Even if you could enforce this, not every country will. 5+ standard deviation increases in many traits is on the table

Nope, 5+ std deviations is not on the table.


You might disapprove of the sentiment, but that does not appear to be a lie. A lie has to be factually incorrect. See https://en.wikipedia.org/wiki/Lie


Claiming 5+ std deviations is a statement, not a sentiment. The author cites a paper that makes no such claim. That counts as a lie.


The citation by Gwern mentions a number of traits which could plausibly be improved 5 SD, like height (which would be +20 inches -- ie, a 7'5" man).


Height has nothing to do with intelligence, which is what the discussion thread is about.


I think pointing to evidence that many traits are susceptible to +5 SD increases by genetic screening is reasonable in the context of a discussion about intelligence, even though a +5 SD change in intelligence hasn't been demonstrated.

+5 SD change in intelligence (ie, average IQ of 175) would be amazing, but it's not obviously impossible. So it's "on the table" in the sense that cities on Mars are "on the table". It's not prohibited by any known laws of nature, and some experts are working on it.

There are lots of actual lies in circulation, so I encourage you to reserve the use of the word for things that are demonstrably false, not just ambitious claims that one might be skeptical of.


That was never shown, it is you who are stretching what can reasonably be claimed with basically no evidence supporting such nonsense. None of what you said was in the original source.

Stretching the truth like this is lying, whether you like it or not.


You seem to have some kind of misconception about eugenics. The thing that everyone agrees is unacceptable is "improving" mankind by forcing some people not to marry or not to have children. The thing talked about in that thread is voluntary modification of one's own DNA. Deleting comments simply for supporting this would be crazy.

HN is not a crowdsourced knowledge base like Stack Exchange or Wikipedia where you'd expect a high percentage of accuracy in all posts; it is a place for conversations, and having people with different viewpoints makes it more interesting. For the thermodynamics example, did you post the correct version after researching it? Or maybe you found it too late, when the conversation had already moved on? Unfortunately that's the way all conversations work in the real world too: the best arguments come afterwards.

I am not saying that HN voting is a perfect system, but it is way better than a system where moderators check whether comments are true or "right wing BS", because then instead of discussions you'd get an echo chamber of everyone agreeing with the moderator, and it's not guaranteed that it will be the left-wing echo chamber you wish for.


We post thousands of moderation comments about things other than personal attacks—though of course personal attacks are not ok on HN. I don't understand this reasoning where, if X is not ok, X must be the only thing moderators care about. Surely it is not so hard to understand that we have more than one rule.

It's not against the rules to be wrong, though. How could this be otherwise? We don't have a truth meter.


> I don't understand this reasoning where, if X is not ok, X must be the only thing moderators care about.

That is your reasoning, not mine.

> It's not against the rules to be wrong, though. How could this be otherwise? Do you think we have a truth detector?

This smacks of the same kind of arrogant obliviousness that is insidious at Facebook and Twitter. Do you really think misinformation is not a problem there either?

You counter the problem with a reputation system that actually works. See Stack Exchange. Also with better moderation effort on your part.

Maybe you should try searching HN for eugenics and see which gems you come across.


> That is your reasoning, not mine

It's what you seemed to be saying with "And yet the mods only jump in when".

While I have you: you have a long history of breaking the HN guidelines by arguing in the flamewar style, directing acerbic swipes and personal insults at people, and so on. We've asked you many times to stop:

https://news.ycombinator.com/item?id=21105692

https://news.ycombinator.com/item?id=20709082

https://news.ycombinator.com/item?id=20559290

https://news.ycombinator.com/item?id=17932924

https://news.ycombinator.com/item?id=13605136

If you keep ignoring our requests instead of fixing this, we're going to ban you, just as we would any user who persisted in abusing HN this way. Other people posting bad things does not make it ok, and there's zero conflict between correcting misinformation—if that's what you're trying to do—and following the guidelines.

https://news.ycombinator.com/newsguidelines.html


This whole post repeats everything I've been saying while also threatening me with the ban hammer. I am saying that you are missing the big picture. There are much bigger problems here than schoolyard insults, and they go unaddressed, as mentioned multiple times.

I would take impoliteness over toxic ideology backed by fake news any time. Listen to George Carlin much?

This whole line of thinking exemplifies why tech bros at places like Facebook and Twitter are so tone deaf to this whole problem.

You draw the line at "LMAO" but allow pro eugenics posts through?

Here's one in 2018:

Eugenics has a bad name, but it is scientifically valid. We now know that genetics play a huge role in life outcomes (e.g. adult IQ appears to be 70-80% heritable) and, for better or for worse, the eugenicists were right, even if their methods were morally wrong. We don't focus enough on positive eugenics: getting smart, non-violent and conscientious people to have large families. Right now we do the opposite: smart folks feel all sorts of pressures to have a small or no family due to careers, the expense of elite schooling, the environment, etc. The opening scene of Idiocracy nailed it.

https://news.ycombinator.com/item?id=17237184

That post is one big fucking lie. Was it flagged? Nope.

I am beginning to really loathe the hypocrisy of techies of your ilk.


You're assuming that we see everything that gets posted here. That's far from the case—there's far too much—so it's a non sequitur to conclude that if a bad comment shows up unmoderated, moderators are somehow allowing it. The likeliest explanation is that we didn't see it. Software can do some things, but I don't know how to write software that determines whether comments are right or wrong and kills the wrong ones. Do you?

On a large, open internet forum like HN, any banned user can create a new account and walk back in through the front door in 30 seconds—and many do. There will always be plenty of bad stuff getting posted. We do as much as we can, but we rely on users to flag comments and/or email us in egregious cases. As far as I can tell, you've never done either of those things, so I'm not really feeling your good faith here.

If the game is to find one shitty unflagged comment and pass it around as proof of moderator malfeasance, that's not a very interesting game. What's more interesting is that you had to go back over a year to find it. If anything, that rather suggests that your example is evidence of how little of that there is on HN, relative to other large open forums.

Meanwhile there are numerous cases of you treating others aggressively and being, frankly, an asshole on this site. You euphemize that as "impoliteness", but it's worse than mere impoliteness. It is community-poisoning. "I would take impoliteness over toxic ideology" is a non-argument—there is no good reason to have either. Using one to justify the other is bogus, and using it to justify your own mistreatment of others is cheap.

If you sincerely want to participate in the community here and do what you can to make it better—by following the guidelines, treating fellow community members respectfully, flagging egregious comments, and so on—that's great. You are welcome. But if you really don't want to do that, please don't post here. In particular, please don't take wrong comments by others—or comments that you feel are wrong—as an excuse to vent aggression.


Your whole post is one long and droning denial. I didn't have to go back a year to find an example. That's just what HN gave me the fastest. There are plenty of examples. Maybe you should try looking harder yourself.

HN has a problem with fake news. So does Facebook. And Twitter, and so forth.

Instead of saying "we can do better", you've taken the route of complete and utter denial. And in a kind of off the cuff, blame shifting way.

Just like so many other tech bros in SV. Completely unprofessional and unhinged when faced with real issues. You're enjoying the club, I guess.


Then add into the mix bad-actor nation states who want to sow strife in your society, to disrupt or destabilize your democracy, and who pay people to weaponize your platforms.


> Most big platforms right now have a huge mess of people tossing free bytes at each other

Which of course suggests the obvious solution of not letting the bytes be free any longer. Make users who want to post on your site pay for it.


So only the people fanatical enough to pay get to express their opinion, and the regular people just don't bother?


> So only the people fanatical enough to pay get to express their opinion, and the regular people just don't bother?

Why do you think that only the fanatical people would pay?


Because your content isn't that good.

I say that with complete certainty. The vast majority of content is such crap I've never even heard of it, so I'm certainly not going to pay for it. Therefore, the only people who will pay for your content are a small group of fans (the fanatics) who both know you exist and are willing to do something about it. Calling it a drop in the bucket is grossly overestimating things.


> Because your content isn't that good.

If the content of the site isn't that good, why is it a problem if the site becomes unusable?

And remember, we're not talking about my personal content. We're talking about sites like Twitter, or HN for that matter. If you don't think the content here on HN is that good, why are you here? Would you be ok with HN becoming unusable because it gets hijacked by people who don't care about adding value to the site but just want to push their personal agendas?


Make the bytes free, and the accounts cost resources.

Resources can be:

- time (come back tomorrow and next week to click this button and the account is yours)

- effort (captcha like, or maybe even better some time at meta-moderation: Is this a good answer?)

- monetary

In any case, if accounts are no longer free then having an account in good standing will be worth something.

Then, next time mods ban anyone it means there is a cost for them before they come back.
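A minimal illustration of the "time" option above (invented field names, nothing more than a sketch): the account only activates after its owner returns on several separate days to confirm it.

    from datetime import date

    REQUIRED_DISTINCT_DAYS = 3  # e.g. sign up, come back tomorrow, and next week

    def record_checkin(checkin_days, today=None):
        checkin_days.add(today or date.today())

    def account_is_active(checkin_days):
        # A banned user has to sit through this waiting period again with
        # every new account, so a ban now carries a real cost.
        return len(checkin_days) >= REQUIRED_DISTINCT_DAYS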


Not just fanatics, but anyone with financial backing - declared or not.


My view on this is that it's the people reading who should be paying. We don't make novelists or musicians pay for their audience.


Yes we do. It's virtually impossible to break a writer/band/act without a significant marketing budget.


At this point I think most people are concerned about hate speech policies. Normal opinions are censored through these rules, and the claims of harm are rather dubious.

Why should companies be trusted to enforce elusive controls on our speech? And is it fair that tech companies act like editors and still get Section 230 liability protections as a tech platform that no editor gets?

The end result of enforcing political correctness is preference falsification, which has lots of very negative effects. This is a good video on it:

https://www.youtube.com/watch?v=xzjqjU2FOwA


This is why moderation should stick to the "clear and present danger" standard established by US courts. There has to be a credible threat of violence directed at a specific individual. "Hate speech" is too hard to distinguish from political speech. Don't try to do it in advance. Prior restraint is bad.

Banning people from being excessively annoying over a period of time is perhaps an option. That's after the fact, not before.


That may not be optimal for the site, though. Many users will leave a site before it reaches the level of violent threats.

The goal of moderators (on most sites) is neither to maximize speech nor to protect people's feelings. The goal is to keep people engaged with the site, and that of necessity means balancing the people who wish to be annoying with those who will leave when their annoyance gets too high. That gets the maximum engagement, which usually goes hand in hand with maximum revenue.

Even on sites whose goal is divorced from revenue, many will find that Gresham's Law of Comments applies: bad speech drives out good. Hate speech tends to drive away people from the topic at hand, because people respond to hate speech as if it were a threat, even if it isn't immediately physical violence. Responding to that dominates the conversation, and then people interested in the original conversation leave.

Good user controls can help, but ultimately a site must keep its eye on the bottom line. You may have a hard time distinguishing hate speech from valid political speech, but users know when they're not enjoying the site, and leave. It's up to the site's designers to pick their target audience and encourage them to stay. And if that target audience is "everybody, including people who want to engage in hate speech", they may find that "nobody, except people who want to engage in hate speech" is what they end up with. Which may well be what they want, but for those who don't, they need to recognize that and take steps.


Why should the bar be so high? These are private forums, after all. If I don't want someone who exhibits antisocial behavior in my community, I shouldn't have to allow it.


Or, you can go the other way, and just ban all political speech on your platform. That’s easy too! And makes perfect sense for some platforms (e.g. ones intended mainly for children.)


US courts also provide other rules, such as private property, which allow people to moderate in fairly creative ways. Whereas in many of these platforms, and Twitter in particular, there's very little.


This is morally defensible, but from a business point of view, the likes of Twitter need to be seen doing something about "hate speech".

Specifically this means suppressing the alt-right, who are the most problematic causes of bad PR. Other groups - Burmese Islamophobes, for example - might get silenced too, but out of a desire to have consistent rules rather than a clear business need or a moral imperative to prevent genocide in Myanmar.


The article concludes with:

1. Effective moderation can only come from others steeped in the particular cultural context they're part of.

2. Effort should be devoted to giving communities the ability to moderate themselves at scale.

With that in mind, it may be worth considering Reddit. Each subreddit has its own moderation team drawn from the community, charged with enforcing sitewide guidelines like not doxing people, etc.

This lets them penalize and ban at the community/subreddit level instead of trying to interact with individuals.

There are problems, of course, but with 20 million people a month interacting with the site you're going to have problems no matter what (as the article explains in detail).


I think this misses why subreddits work. I have seen subs with very inactive mods and ill-defined subreddit rules still flourish.

This is because most content is self-policed by upvotes. Downvoted posts die in /new, so trolls know they can't reach a large audience. It's actually kind of amazing how quickly newcomers can learn and adapt to the implicit rules and begin reinforcing them on others.

This does lead to problems when a smaller sub gets brigaded by a larger one (or ends up on /r/all) and the original community can't outvote the outsiders.


Came here to say exactly the same thing. Look I know Reddit has its ups and downs and there are negative parts of it that I’m likely not experiencing, but I think they pretty much get it and it pretty much works.

The upvote/downvote system is fundamentally self-moderating (same thing as the comments and stories here on HN) and a light-to-medium touch by subreddit-specific moderators means that the good stuff generally rises to the top and the bad stuff generally falls to the bottom.

Is it perfect? By no means. But anecdotally, I find that I get a lot of positive and insightful content out of reddit, whereas Twitter just feels like the YouTube comment section of the world writ large.


I do think Reddit is particularly well suited to having good moderation by virtue of its structure, and Twitter is particularly poorly suited. Twitter is all about decontextualizing what people are saying.


I think a useful somewhat-apples-to-apples comparison for moderation is: Quora and StackExchange. Both websites are intended to be Q&A platforms, but their approach is markedly different and results in different moderation issues.

Quora is heavily centralized - all content must be moderated by the company, there is little to no segregation of questions by dedicated community, and all questions are allowed. Individual users must be name-identified and, importantly, are the core ingredient for Quora - people come to read the content of specific people, not for quality of the content in general. Users have stake but no say in the evolution of the system, and are by design incentivised to not care about question or answer quality but instead about the people they engage with. Further, questions are considered to be "owned by everyone" - so users are free to alter questions indefinitely.

In contrast, StackExchange (not StackOverflow) has similar properties to Reddit - partitioned into smaller communities, reliant on user moderation, equipped with a somewhat reasonable appeals process that relies on a "meta" site where users can contribute to improving the site. Users have both stake and say, and importantly are dedicated to improving the average quality of answers rather than who is answering.

The difference in moderation is stark. Quora's issues with moderation have been legendary - anywhere from an inability to stop sockpuppet voting, pay-to-upvote rings, and reversing helpful edits to outright refusing to take action against documented sexual harassment by users against other users. StackExchange has had no such issues, or rather it has had vanishingly fewer issues compared to Quora.

While we obviously can't compare them on equal footing, given that the sites are so different, this is some circumstantial evidence that community moderation, allowing communities to manage their evolution, and removing the focus on individuals is a better experience between two sites that seek to accomplish the same goal.

I'm not sure what this would say about social networks - which are all about emphasis on other people - but it does seem like a great model for online communities that aren't social network focused.


"Cultural context" but also should include depth of knowledge context (domain expertise, language depth-understanding) - as not everyone's critical thinking is as developed as say "first responder" moderators, and why I believe in many, if not all cases, there should be a cascading process to get verification of actions from a "higher up" - along with an appeal process.

Subreddits allow this to some degree, as you can create two "science" communities with different names and hold different rules for each. It also somewhat protects against "moderator capture" - though a better job could be done allowing for more fluidity and mobility here. Community moderation and trust building are rarely leading metrics for the founders of large platforms - Reddit created fake users to post content to make the site look busy, as one example, and I imagine they did the same once they implemented the ability to "give gold", giving gold automatically to comments so that others would adopt it faster, thinking it was a new cultural norm.

A problem I've personally experienced, however, is that subreddit moderators each ultimately have unchecked dictatorship-censorship powers - and it's clearly a common problem that moderators get on their high horse with no potential for repercussions, because Reddit as a platform doesn't moderate the moderators, at least not adequately.

Moderation should really be called parenting - the process of role modelling and explaining - to help people or children grow by deepening their understanding, guiding them to appropriate resources to help them understand different concepts when necessary.


>With that in mind, it may be worth considering Reddit.

Or, you know, individual websites with forums that aren't tied into a giant superstructure and therefore aren't a constant target of manipulation and social engineering efforts, both from their owners and from third parties.


Moderation is indeed difficult, but automating moderation is a bar we need not even get to yet, when even policies that are relatively easy to enforce keep getting watered down.

Just today, for example, it was revealed that Facebook is no longer going to prohibit false factual claims in paid political advertisements -- even if those falsehoods are determined by generally-respected fact checkers. See, e.g., https://popular.info/p/facebook-says-trump-can-lie-in-his (and associated commentary https://news.ycombinator.com/item?id=21152869).


The problem is, "respected fact checkers" are anything but. The HN discussion you linked to actually mentions some interesting things that these fact checkers omitted.

Factcheck.org claims: "there is no evidence that Hunter Biden was ever under investigation or that his father pressured Ukraine to fire Shokin on his behalf."

However, the fired Ukrainian prosecutor testified in court that "the truth is that I was forced out because I was leading a wide-ranging corruption probe into Burisma Holdings, a natural gas firm active in Ukraine and Joe Biden’s son, Hunter Biden, was a member of the Board of Directors." [1]

Testimony taken under penalty of perjury is evidence, and factcheck.org was wrong to claim that there is no evidence.

[1] https://thehill.com/opinion/campaign/463307-solomon-these-on...


Both are still true. The company was under investigation, not the individual. The US Senate also signed a letter in bipartisan agreement that Shokin wasn’t pursuing cases like Burisma hard enough.


I agree that there might be evidence, and Factcheck.org may have gotten that wrong. But perfection isn't the bar we're aiming for, nor is it practically attainable - "usually correct" is reasonable.

Moreover, let's not confuse testimonial evidence with fact. They are not identical. In our judicial system, the jury is the finder of fact; they weigh the evidence to make conclusions.

We also can't let a minor technical error in a tangential report invalidate the rest of what otherwise is a lie -- specifically the primary bold claim in the example advertisement that's at the heart of the article.


"usually correct" is what the general media is, and they all usually have an agenda. Whenever they get it wrong, the process of commenting itself is a fact-checking exercise that s usually better than the "fact checkers" (case in point this thread). That nullifies the idea of factcheckers imho

I think the biggest issue with social media is: do not make truth a popularity contest


Correct except when it matters is worse than a known sea of lies and misinformation.


These are good points. Moreover, they're unusually well made.


Biden openly bragged about getting Shokin fired, though not under oath, and he stopped short of saying it was to protect his son.

https://youtu.be/Q0_AqpdwqK4?t=51m49s

edit: phone brain skipped over the "on his behalf" part of the sentence, I thought that they'd said there was no evidence Biden even pressured the government to fire Shokin.


The entire US government was pressuring Ukraine to change prosecutors. The idea that Joe Biden came up with this on his own is just lazy gobbling up of propaganda.


Biden didn't mind taking the credit for getting it done.

"I said, I’m telling you, you’re not getting the billion dollars. I said, you’re not getting the billion. I’m going to be leaving here in, I think it was about six hours. I looked at them and said: I’m leaving in six hours. If the prosecutor is not fired, you’re not getting the money. Well, son of a bitch. (Laughter.) He got fired. And they put in place someone who was solid at the time."


Politician takes credit for something he didn’t do, news at 11. You’ve not shown anything.


Also, Biden didn't promise Ukraine that they were getting $1B if they fired the prosecutor, which is the false claim in the advertisement referenced by this article. He threatened they would not get $1B unless they did. In other words, it wasn't a sufficient condition; it was a necessary condition.

Surely those of us who participate on HN and code for a living can appreciate the logical difference:

    // Firing the prosecutor is necessary but not sufficient:
    // the $1B only arrives if the other conditions hold too.
    if (prosecutor_employed) {
      ukraine_bank_bal += 0;
    } else {
      if (other_conditions) {
         ukraine_bank_bal += 1_000_000_000;
      }
    }


I can’t believe people on HN, of all places, are peddling conspiracy theories by corrupting the facts.

Fact 1: Burisma was under investigation.

Fact 2: Hunter Biden was on Burisma’s board.

Fact 3: Hunter Biden himself was not under investigation. Not only that, there is no evidence that the investigation Burisma was under had any relevance to Hunter Biden.

edit: downvoters better explain themselves. If you have evidence to the contrary, let’s see it.


You may just be getting downvoted because, correct or not, you're furthering a somewhat divisive thread that is becoming increasingly off-topic.


The endgame is a hierarchical structure where leaders who have overall moderation and organizational control over "their system" - within a decentralized system that allows users full mobility to switch to someone else's system as a failsafe - are highlighted and essentially ranked by the number of users "following", i.e. trusting, their moderation guidelines and trusting that they enforce them adequately, efficiently, and in an acceptable, timely manner.

E.g. the system of users you're part of - whoever its "leader" is or who is at the lead - acts as a filter setting, creating a safe space of users, a community, that is being moderated in a way that aligns with you, that you agree with; do you want to be in a good, healthy parental environment or a bad parenting environment?

There could also be a Board of Directors or Advisory layer - so perhaps known people who take a stand for freedom of speech, like Jordan Peterson, would sit on advisory boards, or even take a lead role if they cared to - whatever that ends up entailing.


To me it is a fallacy of big tech to misclassify the moderation problem as just a typical ML problem, hence the false belief that ML models, the standard approaches they use for their other ML problems, and cheap annotators can solve it.

What, I think, can be done:

1. Don't just hire expensive PhDs and hope that algorithms can correctly classify racism, etc. Hire an expensive product visionary who can build a holistic approach to the moderation problem and knows what is solvable with ML and what needs pivoting to human-guided resolution.

2. Don't hire lots of cheap annotators in "rural India" but hire or train fewer, more expensive experts and give them powerful tools that scale their work so they can handle all suspicious traffic efficiently.

3. Give power to "normal" users to flag inappropriate content or behavior, and loop this into a well-thought-out workflow with your ML and your experts on the other end (see the sketch below).

4. Partition users and content so that bad actors and bad content get clustered away and are less easily accessible to others.

Why don't big tech companies do this? Besides the fallacy of thinking it's a "typical" ML problem, moderation is hard. It's also hard to make a business case for short-sighted bosses and to prove a revenue increase or an expense cut. Lastly, some big tech companies benefit a lot from abuse, so they can't change immediately to stop it all.
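A minimal sketch of how points 2 and 3 might fit together (all thresholds and names are invented): auto-act only on the easy cases and route everything ambiguous to the expert pool, instead of letting the model decide on its own.

    def triage(ml_abuse_score, user_flag_count):
        # High-confidence abuse: model and users agree, act automatically.
        if ml_abuse_score > 0.95 and user_flag_count >= 3:
            return "remove"
        # High-confidence fine: nobody flagged it and the model sees nothing.
        if ml_abuse_score < 0.05 and user_flag_count == 0:
            return "keep"
        # Everything in between goes to a trained human expert.
        return "expert_review"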


I'm reading the new book on the failings of pure ML-based AI by Gary Marcus and Ernie Davis (Rebooting AI), which makes a similar point.


According to https://www.theverge.com/2019/6/26/18744264/something-awful-... the best mods are chosen by the community they're moderating. You see this on Reddit. The mods mostly complain about how limited the tools are; they certainly aren't using machine learning written by PhD's. But Reddit ends up being fairly well-moderated; it's the subreddits themselves that end up banned for various reasons.


Sure, but part of the goal of moderation is to enforce standards from outside the community upon a less-than-willing community.

For example, the community following and viewing YouTube gun-shooting videos would say none of it needs to be deleted or demonetised, whereas YouTube itself often prefers to demonetise such videos.


Reddit mods aren’t chosen by the community. Reddit isn’t even mentioned in the article.


Do you mean admins? Because mod teams are self-selecting once a subreddit is up and running and if you don’t like a subreddit you can always start your own. People don’t vote for mods but they participate in subreddits they like.

In Albert Hirschman’s typology it’s all exit; voice and loyalty are irrelevant.


This ignores network effects. Significant community forks are pretty rare in Reddit's history, and launching a new subreddit is not a trivial thing to do - that goes double since you can expect mods to suppress all mention of the new community since they have absolutely no constraints on their conduct.


It happens, though. An example is r/libertarian vs r/goldandblack. The new subreddits are smaller because they are usually populated with users "that care", and that's a good thing. It's not like they remain obscure either.


Wasn't /r/goldandblack a split from /r/anarchocapitalism, not /r/libertarian?


Yes but now an alternative to both


Doesn't Reddit plant their own mods into important subs? As soon as a sub starts to regularly hit the frontpage it is forced to only accept advertiser friendly content or get blacklisted/banned.


A subreddit can choose its own mods; it's up to the original mod to abdicate.


Moderating a forum of a few hundred recurring posters, who nominally all want to participate in discussions of a few specified topics is usually feasible.

Moderating open venues with no sense of commonality or community where anyone can just show up and say anything is ... much less feasible.


There are really two issues here, both of which make it difficult for these sites to moderate their 'communities':

1. Moderation doesn't scale. It can't be automated well with an algorithm, it can't be done well by a bunch of low-paid contractors with a handbook, and it can't be done across a million-user platform by about three people.

Sites like Twitter, Facebook, YouTube etc tried to get round this by using the above algorithms and contractors, and that's partly why they're so terrible at it.

2. Moderation is community specific, with each community having different standards for what's acceptable. For instance, a site like 4chan or 8chan may find nearly anything acceptable, whereas the official Disney or Lego site might only want stuff that's appropriate for a young audience.

That's why forums work so well: the userbase and owners decide what's acceptable there, and those outside of said group can join or not based on their preferences. Same with Reddit to some degree, though that is complicated by people being able to move between subreddits so easily and by them being hosted in the same place.

Unfortunately, many of these large sites don't have this. They have one free-for-all open space where hundreds of different communities are forced to interact with each other. They stick millions of people into a newsfeed/search/whatever who despise each other's views, have completely different ideas about what's acceptable and hate every ounce of the other side.

That's a recipe for disaster. Putting those who fundamentally hate each other in the same area for hours per day leads to a toxic atmosphere, and creates a situation that's almost impossible to police well. It's why school and prison are so bad; the kids and prisoners just have too many fundamentally different, conflicting worldviews to get along.

The only way to fix this is to:

1. Accept that communities should be moderated by their members, not robots or contractors.

2. Split the community into smaller ones where people can set their own boundaries for acceptable behaviour.

3. Try and stop intercommunity trolling as best as possible


And yet US companies feel it appropriate to judge the rest of the world's conversations against the backlight of select US sensitivities and blind spots.


Moderation shouldn't be hired out. The community should provide the moderation.


My "truth" is not your "truth" is not someone else's "truth". None of us is wise enough to tell someone else that their truth is wrong, so expecting someone to write software that is would be asking a bit much.

Filtering out opinions the majority doesn't want to pay to hear is simpler and more lucrative anyway.


There are objective truths. It doesn't matter if you think the earth is flat. It's not. And I really wish this was a contrived example


Which truths are objective? Who decides that?

From the spectrum of Math -> Physics -> Chemistry -> Biology -> Psychology, where do you draw the line? This isn't an easy question.


As one aside on the flat Earth stuff, I think something that people do not really consider is scale. There are always going to be people that are a bit off the cusp, some intentionally and some not. So hoping for 100% on most anything is never going to happen. So let's imagine 99.9% of Americans think the world is spherical, or oval, or whatever - not flat.

That is really, really good. Yet take that 0.1% to scale. In America alone, that would mean you have 330,000 people who think the Earth is flat. At Facebook scale (2.5 billion monthly active users) that'd be all the way up to 2.5 million people. And thanks to the internet, people with 'eccentric' views are more capable than ever of joining up with other people who share them. And people with these views tend to be very strongly inclined to do just that, alongside trying to evangelize their views as much as they can - because they think they know something other people do not.

This effect creates a deeply misleading image of the relative popularity of various views and values. Can you imagine a Facebook group with millions of people who genuinely think the Earth is flat, and many people popping their head up in various locations to advocate for such a view? You could have that, yet it be the case that 99.9% of people do not feel that way.

Scale is a hell of a thing. And given the increasing size of various platforms, you're always going to see what you perceive to be large numbers of people with rather peculiar views and values. It in no way suggests there are actually large numbers of people with these views and values. The social media paradox stands head and shoulders above that relatively petty birthday paradox.
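For concreteness, the arithmetic above in a couple of lines (population figures as stated in the comment):

    us_population = 330_000_000
    facebook_maus = 2_500_000_000   # monthly active users
    fringe_rate   = 0.001           # the 0.1% holding the fringe view

    print(int(us_population * fringe_rate))   # 330000 people in the US alone
    print(int(facebook_maus * fringe_rate))   # 2500000 people at Facebook scale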


The person who believes (or says they do) that the earth is flat, has every right to do so. I agree with you, they're objectively wrong. I find it arrogant to say they have no right to speak because of that, and will not.


Ok, but now replace flat earthers with people advocating and organizing meetings for people interested in joining ISIS.

Why do we want a world where platforms protect videos promoting and glorifying beheadings, exactly?


> protect videos promoting and glorifying beheadings

To play devil's advocate, "objectively" these beheadings are still happening somewhere in the world. It's an undeniable truth even though you don't want to see the evidence for it. For a lot of people this is their flat earth. No evidence means it's not happening.

Why not allow the content to be seen, let the world be horrified that there are people driven to become like this, and try to understand what led them to do those things?


ISIS and other violent groups based outside the US were indeed the catalyst for the first big wave of content removal from American platforms, as I recall it. I noticed it particularly during the Syrian Civil War. For the first few years of the war you could view combat footage on YouTube and Twitter without difficulty. Later you started to see such videos rapidly removed, particularly if they were filmed from the POV of anti-government (often some Islamist faction) forces.

At the time, conservative media outlets seemed predominantly critical that American platforms weren't censoring enough content. Even if that content wasn't actually illegal to publish under US law.

Here's a 2014 article from the conservative site WorldNetDaily, for example:

"ISIS-branded merchandise sold on Amazon, Facebook"

https://www.wnd.com/2014/09/isis-branded-merchandise-sold-on...

Quotes from the article:

Dave Gaubatz, author of “Muslim Mafia: Inside the Secret Underworld That’s Conspiring to Islamize America,” contends Facebook and other social media sites have a responsibility to shut down the forums used to market radical Islam and mobilize Islamic youth.

“I am a defender of the U.S. Constitution but the Constitution was not designed for enemies of America,” said Gaubatz, a former U.S. Air Force investigator.

“We must stop jihad online because these groups target our children through social media,” he told Fox News.

...

“ISIS is more than likely not producing shirts but indirectly they are benefiting from it,” Scotty Neil, a former Green Beret who founded Operator, a clothing company geared toward special ops soldiers, told Fox News. “I don’t think that T-shirt company X is sending the Islamic State funds, but people wearing these shirts are making an outward statement and that often starts a dialogue and debate that furthers their message.”


Advocating to join a terrorist organization, or providing support for such, is illegal. Believing or saying stupid things is generally not illegal. Equating the two is nonsensical both from an intuitive and legal stance.


Perhaps you could give some examples of conflicting 'truths'. It seems to me that in most cases the truth can usually be untangled. This may be an unpopular view, but I'm not willing to give up on the idea of objective truths just yet.


The title is silly, because no companies have unbounded resources. If they really did have unbounded resources, they probably would have better moderation. I have plenty of ideas that would work if I could only hire infinity software engineers to implement them.


Consider the audience as people who primarily experience this moderation, and have no idea what these sort of things actually cost. For them, the resources of a dominant social media company seem arbitrarily large. (Originally I titled it "huge resources", but it looks like HN has some filter that strips qualifiers like that and it looked silly as "companies with resources")


I'm becoming increasingly convinced digital information has inflicted on us the curse of not being able to forget. The online accumulation aids an eternalisation of our short-term memory that is overwhelming us individually and collectively.


Because it's not a high enough priority for them. As long as it's not significantly affecting their profits, they'll do the bare minimum moderation. Same idea goes for the weak cybersecurity companies seem to have. When it comes to their IP they'll spend the resources to protect it, but when it comes to their customers info leaking, they don't care.


It's like the difference between porn and art. You know it when you see it, but writing a rule for it is impossible.


You might not even know it if you see it! It might be dog whistles that you don't even understand until it's pointed out to you.


Check out old Mae West movies where she adheres to the letter of the Hays Code yet always finds a saucy way to get her point across. A couple of classics:

Reporter: "What do you think of foreign affairs?" West: "I'm all for 'em!"

"Is that a gun in your pocket or are you just happy to see me?"

Language is infinitely malleable. Algorithms don't stand a chance.


> Language is infinitely malleable

I mentioned this before, but my GF worked on an MMO for tweens. She said one experiment with filters ended with a 12-year-old boy writing, "I'm going to stick my giraffe up your pink bunny."


Any media the viewer loses interest in immediately after climaxing.


I'm not sure that distinguishing white supremacist symbols from the Circle Game[1] is even possible in theory just from a single post.

[1]https://www.dictionary.com/e/slang/circle-game/


Reading this made me realize that the internet needs to be decentralized as soon as possible.


Because their resources are not unbounded. On the level of moderation, they are poor.


Because they insist on making step one adoption of their fanciest model, on the off chance they can reframe the whole project of content moderation as an opportunity to refine marketable, proprietary tech?


Simple answer: because moderation doesn't move the business needle.


I would argue it absolutely does. When NYC started "moderating" Times Square, a lot more people started coming and spending money. People pay money for all kinds of blocking, scanning and auditing of email content, web content, etc. I expect moderation, and when it's not done well, I don't come back.


After reading the article, I would say it's because it is hard. Even if the business is aligned with the success of moderation, it is still a monumental task.


This article is purely theoretical, and I don't think they do such brainstorming about how to train the AI. It's been shown that they just take a bunch of poorly paid people to deal with real messages and try to moderate them, sometimes with a poor understanding of the context, and use that as input to teach the AI how to do "proper" moderation.

That's how Figure Eight makes its money:

https://cacm.acm.org/magazines/2013/8/166313-software-aims-t...


> It's been shown that they just take a bunch of poorly paid people to deal with real messages and try to moderate them, sometimes with a poor understanding of the context, and use that as input to teach the AI how to do "proper" moderation.

That's exactly what the article says: "you’d write up some guidelines (...) and then contract with some external company to have human beings read those guidelines and rate lots of examples that you send them" and then it mentions people from places with low salaries, like rural India and the Philippines.

And then in the possible solutions, it mentions hiring people well steeped into the specific cultural context of those you're trying to moderate.

This is where I think the article undermines itself, because if paying for some different (and probably more expensive) set of people may be a solution, then the question that was supposed to be answered remains: "why do companies with unbounded resources not do that?"


While hiring those people might be more expensive, I expect it isn't much. Annotators willing to read a set of guidelines and perform rating tasks long term are generally not cheap. I suspect that for something like twitter, identifying people most suited to moderate a particular piece of content is itself a very difficult technical and social problem. So even assuming you can figure out how to find these people and hire them, there's still a long path from there to improving moderation.


because Free.

If we hadn't invented free, we'd have moderation, because being repeatedly abusive and trolling when you have to pay for it is, in the end, self-defeating.

If you want moderation enough, get rid of free.


If it is not free, it is a walled garden.

Maybe the optimism people had about the internet being an all-inclusive community was ill-fated.

Exclusivity may be the next fad, driven by the trolling of a few in an open community.


Because it would require omniscient AI for a task that even humans fail at doing it perfectly?

Does anybody really expect automated moderation to work even remotely decently?


I think there are a few ways to better deploy moderation.

With Twitter and its shouty kin: split the userbase into "normies" and "broadcasters" (the top 10% with the most views/reach).

Broadcasters have a disproportionate effect in terms of reach and tone. Spending time (and people) closely moderating in a very visible and transparent way will set the tone for everyone.
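A rough sketch of that reach-weighted idea (the threshold and names are made up): the small set of accounts with the largest audience gets visible human review first, while everyone else goes through the usual automated checks.

    BROADCASTER_FOLLOWER_THRESHOLD = 100_000  # invented cutoff for "broadcaster"

    def moderation_queue(author_follower_count):
        # Spend scarce human-moderation time where the reach, and therefore
        # the effect on the platform's tone, is largest.
        if author_follower_count >= BROADCASTER_FOLLOWER_THRESHOLD:
            return "human_review"
        return "automated_checks"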

Twitter is hamstrung because its most famous user (Trump) breaks its guidelines monthly (I mean, he literally threatened to bomb Iran; if that's not violent and intimidating behaviour I don't know what is).

Also, controversy drives engagement, which drives revenue. So I would say that it's a feature of Twitter, not a bug.


> he literally threatened to bomb Iran,

When people talk about keeping violence off social networks, they mean keeping personal violent threats off social networks. Whether to make war or peace is a political question, one that's perfectly fair to discuss in public. You don't get to impose pacifist foreign policy on the president by conflating random threats of personal violence with the legitimate business of statecraft.


You are interpreting this as a political statement.

Threatening to bomb a country is threatening violence, and that is against the terms of service (https://help.twitter.com/en/rules-and-policies/twitter-rules). To quote:

> you may not threaten violence against an individual or a group of people

Like the law, it either applies equally to everyone, or it does not apply.

> You don't get to impose pacifist foreign policy on the president by conflating random threats of personal violence with the legitimate business of statecraft.

Again, you are reading way more into this. Either the Terms of service apply to everyone, or they do not apply.

According to those terms of service, there is no difference between me threatening to kill a group of people and Trump doing the same.

As I said, transparency and consistency are the only way it can be implemented effectively.


Bombing another country is a legitimate act of the state. It's no more an endorsement of "violence" than wishing for the arrest of a notorious serial killer is an act of "kidnapping". Conflating legitimate acts of the state with extralegal personal criminality is a political statement.

If a site's terms of service prohibit endorsing legitimate acts of the state, that TOS is a political statement.


> Bombing another country is a legitimate act of the state

yes, it can be.

> It's no more an endorsement of "violence"

War _is_ violence.

> conflating legitimate acts of the state with extralegal personal criminality

Well, you're saying that I equate the president's tweets with his personal threats. That's orthogonal to whether they're extralegal. It is also irrelevant, for the reasons I'm about to lay out:

The Terms of Service are the basis of a contract for using the site[1]. Unless there is a specific clause for "legitimate acts of state", it's a binary choice. Choosing not to apply them equally is a political choice.

Look, it's perfectly possible to disagree with a person but insist that legal/procedural process is adhered to. The president is a person like any other, and subject to the same legal/procedural process as the rest of us.

[1] well usually.


> Bombing another country is a legitimate act of the state.

You're wading into deep waters here. What makes a state's actions "legitimate"? There are a lot of theories on this.

One of these theories says that a state's legitimacy and sovereignty rest in the will of the people it governs. For example, the U.S. Declaration of Independence states in its second paragraph that "Governments ... deriv[e] their just powers from the consent of the governed". Accordingly, if a state does a thing, and enough people ruled under that state believe it's illegitimate, then it is indeed illegitimate.

Under this principle, criticizing the actions of those who govern you is not unpatriotic or "a political statement", as you say; it's an integral part of government itself.


What also makes moderation hard is that it can be abused. For example, anyone who is influential and persuasive enough can be labelled a Nazi by political opponents, and targeted campaigns can be run to get them deplatformed.


It is pretty clear from actual postings whether or not someone is anti-immigrant and/or a white supremacist.


Because moderation doesn't scale.

Case in point: YouTube. They changed their demonetization algo in response to complaints, and now they have complaints from other users that the algo is producing false positives. For example, the "Great War" history podcast had hundreds of YouTube videos demonetized because they talk about storm troopers and Nazis, which of course isn't great if you are a Nazi, but maybe ok if you're a history podcast.

The issue is that each individual video generates so little money for YouTube that it just doesn't pay to have a person go through every video on the platform and make a value judgement on its worth. Anyone can upload a video for free, and each view earns YouTube at most a few cents of advertiser profit.
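
A back-of-envelope version of that argument (every number below is a guess for illustration, not YouTube's actual figures):

    # Hypothetical numbers to show why per-video human review doesn't pay.
    profit_per_view = 0.002        # assume ~0.2 cents of profit per ad view
    views_per_video = 500          # assume a median video gets a few hundred views
    reviewer_cost_per_hour = 20.0  # assume fully loaded reviewer cost, USD
    videos_reviewed_per_hour = 6   # assume ~10 minutes to watch and judge a video

    revenue_per_video = profit_per_view * views_per_video                      # $1.00
    review_cost_per_video = reviewer_cost_per_hour / videos_reviewed_per_hour  # ~$3.33

    print(f"revenue ${revenue_per_video:.2f} vs. review cost ${review_cost_per_video:.2f}")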

Facebook and all other content-aggregation web forums face the same conundrum. And the problem doesn't get better if we invent new algos or deploy people more efficiently (somehow). On the one hand you have an algo deciding en masse what can and can't be said, making those decisions in an increasingly black-box way (since the heuristics will by necessity be rather complicated); on the other you have poorly paid workers filtering videos, or at the very least fallible human judgement still at the wheel over what gets to be said and what doesn't.

Who watches the watchers? At the moment we have a handful of powerful media conglomerates that, through monopsony, effectively control almost all the social media on the internet. What have we got: Alibaba, Facebook, YouTube, Reddit, and Instagram? That covers, at a rough guess, 60-70% of social media traffic. But these are companies that are only successful at scale: you go on a social media platform because all your friends are on it (a la the Myspace model), and you are given a "free" user experience because aggregating millions of people allows a few cents in ad revenue per user to subsidize the servers at scale.

This problem is probably baked into the cake of this particular pattern and isn't fixable. Boy is it going to be wild when we finally crack how to make deep fake videos!


[flagged]


Yet there are some pretty good technical solutions that can be trained to recognize footage of such shootings after the fact, something Facebook clearly chose not to utilize following what happened in Christchurch.
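
The usual technique for catching re-uploads after the fact is perceptual hashing of frames. A minimal sketch using the Python imagehash library (the file names and distance threshold are placeholders, and this is a generic illustration of the idea, not a description of Facebook's systems):

    # Hypothetical: flag an uploaded frame if it is perceptually close to a
    # frame taken from a known banned video.
    from PIL import Image
    import imagehash

    known_hash = imagehash.phash(Image.open("banned_video_frame.png"))
    upload_hash = imagehash.phash(Image.open("uploaded_frame.png"))

    # Subtracting two hashes gives the Hamming distance between the 64-bit
    # perceptual hashes; a small distance means a near-duplicate image.
    if known_hash - upload_hash <= 8:
        print("likely re-upload of banned content, send to human review")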


Why not ban both? I can't see the social utility in allowing either in a private forum.


[flagged]


I'm not convinced fascism really exists in the modern world anymore. I tried searching for self-described fascists on the internet, and I encountered one man's GeoCities page claiming he was running for US President in 2008. He admired Mussolini and Saddam Hussein and wore Roman-inspired costumes. I checked the results of that election, and it turns out no one voted for him, so perhaps even he was a hoax.

Ethno-nationalism is definitely a thing. But if that's the same thing as fascism, then the entire western front of WWII was a huge inter-fascist struggle.

So what do you think fascism is, and where do you think it exists? Could you point me to a website of fascists?

EDIT: Thinking about this further, if fascism = ethno-nationalism to you, or even ethno-nationalism + a dictatorship, then Mainland China is arguably a fascist state. How many hundreds of millions of them do you think should be shot? That's a lot of bullets.


Now that everyone and their grandma is a “nazi”, is that truly what you believe?

Apparently I’m a nazi, because I believe in liberty. Are you advocating this for me?


He should at least have the good manners to tell us who he's going to murder.


A miracle cure that actually works against all political ideologies. Or indeed any idea.


And according to Xi Jinping, you're probably a fascist.


You don't need to moderate your social media to death. Intensive moderation is used to control what your users are talking about. Just let the users talk and say whatever they want. I'm tired of these nerds getting people banned when they say something mean online.


The title of this article literally reads "Why do those with unlimited power not take steps to limit their power?".

Also, name ONE company with INFINITE resources.


Are you being sarcastic or do you think the authors and publishers were actually saying that omnipotent beings exist?



