Hacker News

> Facebook does not intend to suppress the posts itself. Instead, it would offer the software to enable a third party — in this case, most likely a partner Chinese company — to monitor popular stories and topics that bubble up as users share them across the social network, the people said. Facebook’s partner would then have full control to decide whether those posts should show up in users’ feeds.

This looks like a shadowban at scale. So you share something - potentially putting yourself at risk to do so - but with the hope that others will see it. The "local partner" decides that it doesn't show up in anyone's feeds but still knows you posted it, so now you're a target.
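For concreteness, the mechanism described reads roughly like the following sketch. All names here (Post, partner_approves, build_feed) are hypothetical illustrations, not Facebook's actual code:

```python
# Minimal sketch of a shadowban-at-scale feed filter, under the
# assumptions above. The key property: the author always sees their
# own post, so suppression is invisible to them.

from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    suppressed: bool = False  # set by the third-party reviewer

def partner_approves(post: Post) -> bool:
    """Stand-in for the third party's 'full control' decision."""
    return not post.suppressed

def build_feed(viewer: str, posts: list[Post]) -> list[Post]:
    feed = []
    for post in posts:
        # Authors see their own posts regardless of suppression,
        # which is what makes this a shadowban rather than a ban.
        if post.author == viewer or partner_approves(post):
            feed.append(post)
    return feed

posts = [Post("alice", "sensitive topic", suppressed=True),
         Post("bob", "cat photo")]

print([p.text for p in build_feed("alice", posts)])  # alice sees both posts
print([p.text for p in build_feed("carol", posts)])  # carol sees only the cat photo
```

The author gets no signal that anything was filtered, while the filtering party retains a record of who posted what.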

Damn, that's evil.




The description of how this tool is going to work is directly in conflict with Zuckerberg's quote:

> “It’s better for Facebook to be a part of enabling conversation, even if it’s not yet the full conversation,” Mr. Zuckerberg said, according to employees.

It's not a conversation anymore if you are left blindfolded, speaking to a room that may be empty for all you know. Either incredibly dishonest or incredibly naive.


> It's not a conversation anymore if you are left blindfolded, speaking to a room that may be empty for all you know.

That applies to your entire Facebook activity; you've no idea who, if anyone, actually sees anything you post.


What? I can be reasonably certain that when somebody reacts to or comments on something I have posted to FB that they have seen it.


Sure, but until that happens you've got no idea if Facebook ever showed it to anyone (whether it was shadowbanned by evil foreigners, or they just didn't deign to stick it on anyone's news feed).


anything for getting more users and their personal data to sell


If you get a comment, the "room" wasn't/isn't empty.

Unless...


Shadow-banning / hell-banning happens on HN too, but you're right that it's not a very nice way of moderating. Obviously, it's worse if it also gets you in trouble with the state. Manipulating people's view of the world is pretty dishonest in any case though.

I'm sure China would love to hide posts that mention 'Fatty Kim the Third' [0]. If you control the discourse then you can control what people think. We've seen this in both the UK and US recently. Russia has an agency dedicated to false information with sockpuppet accounts on social media [1], and the US has done related activities [2]. I'm sure other countries and interest groups have similar institutions. If you have an official integrated tool to do this work then it becomes so much easier. Worrying times.

[0]: http://www.reuters.com/article/us-china-northkorea-internet-...

[1]: https://www.theguardian.com/world/2015/apr/02/putin-kremlin-...

[2]: https://www.theguardian.com/technology/2011/mar/17/us-spy-op...


> Shadow-banning / hell-banning happens on HN too, but you're right that it's not a very nice way of moderating.

It's literally the worst way of moderating (short of serving the banned user destructive/malicious code). HN is wrong when they do it, too.

Fortunately we got some new mods a while back, they're very good, and nowhere near as trigger-happy with the hell-bans as before. And still, it's wrong, vindictive and dishonest, and if you're clever just as easily circumvented as regular bans, which would be the right option (otherwise I'd like to hear the argument why non-clever people should be punished more harshly).


Should an online community or its hosts have a say in shaping their community? If not, why not? If so, how should that be done? Why is shadow/hell banning in particular bad? I haven't seen a lot of discussions with reasons: just that it's bad. If you know of some, feel free to provide a link rather than duplicate it here.


It doesn't punish people intentionally trolling because they know to just create a new account. It just punishes people with unpopular opinions who are honestly trying to participate in the community.


It = hell/shadowbanning?

From what I've seen on HN, those that have been banned have been warned, and it's been for either incivility or posting mostly unsubstantive/inflammatory posts. Which seems reasonable to me. Have you seen something different? Or have a different take?


Yes, an account with thousands of karma can be permanently shadowbanned by triggering a mod. The problem is that the line between unpopular opinion and inflammatory can be completely subjective in many cases.

This site implies that comments should be analyzed rationally, but comments from shadowbanned accounts do not get this treatment. It's the equivalent of claiming someone will be allowed to take part in a televised event but then secretly cutting all of their responses out of the broadcast.

Shadow banning is anti-intellectual behavior because it's essentially a systematic ad hominem.


Agreed that the line between unpopular and inflammatory can be a matter of perspective.

Regarding shadowbanned accounts not getting the same treatment: are you saying they weren't warned beforehand? Or not treated fairly before the banning? How long should those who have contributed negatively (wrt community standards) be tolerated? Those who keep showdead on and have enough karma can and do vouch for substantive and civil comments from shadowbanned accounts. Btw, are there particular instances you have in mind that you can point to? I've been following sctb and dang for the past month or so to see how they interact with the site and don't recall seeing anything grievous about their banning decisions.

I completely agree that there's some tension between policing a community forum and perceptions of anti-intellectual behavior, which motivated the additional questions in my previous comment. Would you mind weighing in on those? (And I'm not saying shadowbanning is the only tool they can use. Perhaps only voting and flagging should be used? I'm not sure if that's enough.)

Thanks! (Please pardon any lack of proper editing or phrasing. I'm on my phone and having issues with commenting.)


>Regarding shadowbanned accounts not getting the same treatment: are you saying they weren't warned beforehand? Or not treated fairly before the banning?

Their comments are not allowed to be judged based on the merit of their content. They are pre-filtered by a moderator's decision that anything generated by that person is not fit for human conversation. Every other account, including brand new ones, can create a comment and have it seen by everyone.

They may have been warned, etc, but I'm saying the shadowban itself is garbage, not the conditions under which it is administered.

> How long should those who have contributed negatively (wrt community standards) be tolerated?

Downvote/flag individual negative comments, that's what we have the system for. If someone truly wants to contribute negatively, a shadowban is a joke because they can create new accounts.


As I understand it and have observed, users are shadowbanned because they've shown that they sometimes act in ways that are detrimental to the community as a whole. By continuing to act that way, they've accepted that they might be banned. Users can also appeal the ban by contacting the mods by email.

I leave showdead on and have vouched for comments of banned users. I've also seen users continue to spew vitriol after they know they are banned. Would the type of user that pointlessly continues to contribute negatively in this fashion have, on balance, a positive impact on the community? Arguing that they're behaving this way because they've been banned shows that they don't have the maturity to recognize that their behavior continues to reflect badly on them. It's not going to change the fact that they're banned.

You're right that if you're a determined troll, you can get around a shadowban. From what I've seen, banning has been on balance effective to deal with uncivil behavior. Creating a new account does add some cost to the ban. Relying only on downvotes puts the cost entirely on the community at large, at no cost to the user behaving badly. Do you think it's fair for the community to bear the cost of the user's behavior?

Do you have any other ideas on how to effectively shape community behavior, preferably ones that move the cost from the community to the user behaving badly? I'm honestly interested, because I've seen enough complaints about banning that I'd like to hear about alternatives that would be considered fair and still be effective.

It would be great if everyone would contribute in a positive, or at least civil, way. In larger online communities it's much easier for users who contribute negatively to destroy a community where positive users no longer want to participate. Each community has its own standards and cultures (and what's considered good and bad behavior), which is great! So far, I think it's pretty remarkable that HN has been able to maintain the community it has in comparison to other online communities of similar users.


> From what I've seen, banning has been on balance effective to deal with uncivil behavior

Based on what evidence? Are you able to see the IP addresses of new accounts to correlate them with shadowbanned accounts? All we can see as outsiders is that shadowbanned accounts mostly stop posting, but we can't tell how many immediately signed up for a new account and started trolling again.


> All we can see as outsiders is that shadowbanned accounts mostly stop posting

You're right. And this itself is evidence that banning is effective. It's not effective because they can't create new accounts (which they can); it's effective because they generally stop posting.

In addition,

- I haven't seen users obviously thumbing their noses at the mods with respect to the ineffectiveness of banning;

- I haven't seen the mods using or discussing any other methods stronger than banning to combat trolls, which I think they would if banning wasn't effective.

On the flip side, what evidence is there that banning isn't effective?

I've been a little frustrated by your lack of answers to some pertinent questions I've asked:

- Do you have particular examples where the mods haven't been fair?

- Do you have any other ideas as to how to fairly and effectively shape community behavior?


Sounds like the same system Facebook is implementing in response to the fake "fake news" stories.


Too convenient, and the term can be very subjective.

"Fake news" for whom? For the Chinese government?

For the company that denies that their new phone explodes?

This is getting very ugly


it's "fake" for whoever pays or can influence the most.


[flagged]


[flagged]


Beyond his somewhat hyperbolic list of "fake news" criteria, he's not wrong (although he's being a bit indelicate). Not sure why you would immediately assume his comment was in bad faith.


This sort of comment actually vindicates my stance. What I said was sarcastic, perfectly civil, and closer to the facts, yet (what I believe to be) a reasonable person like you downvoted, flagged, and attributed motives.


Not really a shadow ban for fake news. They know as soon as they look at Analytics.


Can we be sure that we aren't already shadowbanned even in the western world? An AI driven news feed and timeline is practically an intelligent shadowban.

"Your post about your cat is not as revenue-relevant as a fake story about Clinton/Trump. Bam, you are banned!"
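As a toy illustration of that point (the scores and cutoff are invented for the example, not Facebook's real ranking model), a purely engagement-driven ranker can hide a post as effectively as an explicit ban:

```python
# Hypothetical sketch: a ranker that only surfaces the highest-engagement
# posts acts as a de facto shadowban on everything below the cutoff,
# with no explicit ban decision anywhere in the system.

def rank_feed(posts, top_n=1):
    """Show only the top-N posts by (made-up) predicted engagement."""
    ranked = sorted(posts, key=lambda p: p["engagement_score"], reverse=True)
    return ranked[:top_n]

posts = [
    {"text": "your cat photo", "engagement_score": 0.2},
    {"text": "viral political story", "engagement_score": 0.9},
]

# The cat photo never appears in anyone's feed, even though no moderator
# ever "banned" it.
visible = rank_feed(posts)
print([p["text"] for p in visible])
```

The suppression here is an emergent property of the objective function, which is exactly what makes it hard to distinguish from deliberate censorship from the outside.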


Regardless of how much traction our posts get, or don't, the thing we can still be reasonably sure about in the US and most of the western world is that we won't be kidnapped, caged, tortured, or killed for whatever sociopolitical commentary we post on Facebook.

Facebook's collaboration with a regime where none of that is true is a disgusting low point, even for that company.


> we can still be reasonably sure ...

I would point to the word "still". If something can be used against you, it will sooner or later be used against you. I hope we won't live long enough to see this come true.


Those are some very extreme words you are using. I would caution you not to use such words lightly unless you are absolutely sure they are backed up with facts.


It's a simple and widely-known fact that China jails its dissidents.

This has been reported in literally every major newspaper in the world. The Chinese regime doesn't dispute it.


> kidnapped, caged, tortured, or killed

That is quite a different set of vocabulary compared to "jail"


Guantanamo fulfills the first 3 points and was considered as a "jail" in the western world.


Ok, sorry:

It's a simple and widely-known fact that China kidnaps and cages its dissidents.

Better?


> kidnapped, caged, tortured, or killed

> kidnaps and cages

Nope. Assuming what you asserted later are indeed facts, they only cover 50% of what you claimed in the original sentence. Half of what you claimed is still not grounded in facts.


Yes. I know many of my Facebook friends in real life and see them on a daily basis. While it is possible to filter some of the things they say, in no way could they censor all of them, or completely make them up.


Facebook gives you the option of turning off the AI filtering and just getting a strictly time-ordered feed. (I tried it for a while but it was just less useful than the default feed).


Where is that "strictly time-ordered feed" option? I could not find anything like that in the Facebook settings.


Hmm. The settings have changed since I tried it. Maybe they've moved it, maybe they've removed it.


At least I can be sure that I am not already shadowbanned because fuck you facebook. I am not interested in an AI feeding my ego with confirmation and protecting it from confrontation.


That's so corrupt.

And it stems from the sick fact that our society still glorifies money/power as our most central cultural value.


Hellbanning has always been evil. When you interact with a public computer, there is a trust that is established. When you click a vote button, you expect a vote to be counted. When you press the elevator close-door button, you expect the door to close. When you make a public comment, you expect you are making a public comment.

To, well fuck with people in this manner is evil -- I do not care what your reasons are, how many people are on your site, the incidence of mental illness in the general population, and so on. There are other remedies aside from using technology to lie to and manipulate people.

If it hasn't become clear by now, it should be: the major internet players are interested in only one thing: to keep being the major internet players. That doesn't make them bad or evil -- sometimes good people do evil things. It makes the rest of us realize we have a rather immediate obligation: fix this or be conquered by it.


I can't tell if this is sarcasm, so at the risk of looking foolish, I'll just say: pressing the door-close button on an elevator almost always does nothing. In most elevators it's not even connected to anything behind the panel. It's there to make you feel like you can affect the situation without letting you do so, which I think is the exact opposite of your point.


Where do you live? In my country, the close button almost always closes the elevator doors faster. Measurably faster, not a subjective milliseconds faster, but full seconds faster (so noticeable, in fact, that it drives me nuts when other people don't push it and also involuntarily block me from doing so). When the button doesn't work, it's acknowledged to be broken.


I've heard this before, but I don't buy it. Either that or the elevator at my workplace is VERY prescient.


The 'beg buttons' that pedestrians push to get a favorable traffic signal are frequently not hooked up.


For pedestrians: In my city I have yet to find a bogus one. They are there to control traffic flow. You press it, you get a green light. You don't, you can die waiting.

It's there to allow the priority traffic to be interrupted only if necessary.

I believe I heard something like "press a button for a faster signal", but that doesn't make any sense at all.

Municipalities want to control the traffic in the most efficient way. Not to give any troll a button to break things.

However there are several places where there are e.g. sensors to allow for a favourable signal for drivers. But then again that is mostly in places where they would wait for a green light for a long time.

The theory and practice in the transportation field is quite interesting.


That assumes the third-party is provided with the "metadata" (i.e. who posted it and to whom/where) rather than just the content of the post.


Maybe it won't send author metadata? So you just have a post, and the Chinese partner decides whether or not it shows up inside China?


> Damn, that's evil.

Yes it is evil, but it is the Chinese government that is responsible for that, not Facebook.

Realistically only some of the most popular messages would be affected.

It would not suppress political discussions at small scale (among several friends).

It would not suppress popular messages that are not political.

I support Mark on that: it's better to have limited Facebook in China than not to have Facebook in China at all.


> Yes it is evil, but it is the Chinese government that is responsible for that, not Facebook.

Said every collaborator, active or passive, in every large-scale injustice or abuse since the beginning of time.


> I support Mark on that: it's better to have limited Facebook in China than not to have Facebook in China at all.

Better for whom?

For humanity in general, don't you fear that giving people the illusion of a free platform for speech is worse than not faking anything at all, which would encourage truly free platforms to emerge out of people's frustration?

And for Facebook's own interests, don't you fear Facebook applying even more censoring in some parts of the world is going to affect its image world-wide?


ah, "messages that are not political".

What's to say basic journalism isn't political? What about highlighting a massacre (inadvertently)? What about random abuses of power where the perpetrator happens to be related to powerful elite?

Hint: you don't know. I'm not sure I'd use it for my distant relatives who live in China.


Sad to see this mostly objective and balanced comment downvoted while all the hyperbole is going to the top.


Is there any information on how things are handled by Weibo and such?


> Oh, like Hacker News if you're a conservative.

@jrcii:

There is something to it. When I saw your comment it was 4 min old and already [dead]. Just like most of your other comments, no matter what topic they touch.



