Facebook bans Holocaust film for violating race policy (rollingstone.com)
345 points by pr0zac on Sept 16, 2022 | 368 comments



Here we are, deep down in the dystopia of Automatic Content Classification by robots.

Far less important of an example, but I was just put in "Facebook jail" for 24 hours for posting a picture of my son at the beach in his bathing suit with no shirt. Y'know, as one does at the beach. I can only assume it's because my son has long hair and the Convolutional Neural Network or whatever decided he was a girl and therefore I'm a pervert.

Sadly it was the "appeal" I submitted that got me blocked. Presumably by a "human", but who knows.

Before I appealed they were simply going to not show the picture. Appealing got me in "trouble." That might be even worse than the original misclassification. On top of that, if I was actually a "community standards" violator who posted potential child exploitation imagery, I'm not sure how a 24-hour ban on activity on Facebook is of any use, either? Except I'm terrified to imagine a world where Meta might have called police on me based on the output of a neural network image classification.

Others have said it, but I'll say it again: this kind of business doesn't scale ethically. You can't have billions of people on a bulletin board. It doesn't work. Moderation is essential to modern communication. But you can't do moderation automatically, at scale, and in a universal way.

Very dark patterns emerge the moment you go FAANG scale, toss algorithms tuned for advertising and "engagement" into the mix, and attempt to do all of it with the help of computers.

"Sad" as it is, we will need to "retreat" back into smaller forums and BBSs where communities self-police.

Facebook has infiltrated so many aspects of society. Want to interact with the parents from the local school your kids go to? You have to do that on Facebook. Event announcements? Keeping in touch with your distant aunt? Facebook.

If something like Facebook is really a universal utility, it will have to be put under public administration; like the post office. But clearly, that isn't going to happen and would have other problems.

I am going to have to find some other way to engage with old friends and family.

Frank Herbert had some intuition with his whole "Butlerian Jihad" thing.


If you are only using Facebook for its original purpose of keeping up with old friends and family, then it doesn't actually have the scaling problem: if one of your friends or someone in your family starts posting a ton of racist bullshit you either confront them about it or drop them as a friend (preferably bilaterally), and either result is actually better for society than having Facebook attempt to present a skewed view of them that tries to just pretend they aren't posting all of the horrible stuff in the first place (whether by blocking it from being posted, quickly removing it once posted, or running some complex ranking algorithm that does a good job of hiding it).

It is only when you start having strangers talking to strangers that you run into a need for moderation, and even there you should be able to scale by sharding and punting the problem to others: if you have a group--similar to a real-world club--the moderation is on you, as the issues in your community shouldn't leak to people who haven't joined your community (and if people leave your community because you fail, all the better). The only real issue is that Facebook wants to--for the increased engagement, and thereby ad revenue--run a ton of recommendation algorithms that shove content from people you have no affiliation to in your face constantly (which one might notice should already be considered antithetical to the design of a social network), which leads to a ton of stranger-to-stranger interactions that it is entirely "on Facebook" to ensure are clean.


And yet here I was, posting a picture of my kids at the beach so my mom and friends could see them, and I ended up banned from participating on Facebook for 24 hours and accused of violating community decency.

They're f*cked.


Yeah... so I honestly consider this to be a separate (also horrible, for avoidance of doubt) problem from "moderation"? Like, Apple and Google aren't exactly having a moderation problem when it comes to attempts to curtail CSAM stored on their photos platforms (which has led to their automated flagging systems, then the need to scale appeals and escalation to their entire world of users, and then the subsequent concern about "are they going to call the police on me when I fail my appeal?!")... it is more of an attempt to deal with awkward regulatory "think of the children" overreach and hostile American law enforcement. Facebook seems to me to be doing the same thing here and failing.

Imagine a world where social networks were built by people who simply didn't care about "engagement" at all and weren't being motivated by ad revenue... I think you could design an end-to-end encrypted version of the system where no-one except your friends--and certainly not the network operators--even knew what you were posting in the first place and they would hopefully be able to avoid installing client-side filters for CSAM (but, with stuff like FOSTA and SESTA, maybe not?). This model should even work, I'd think, for Twitter/Instagram-like broadcast models (though the legal implications of the well-known "secret" key being published and accessible to the network might lead to various problems; you might have to go fully-decentralized).
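(To sketch the friends-only case, and only as a sketch with made-up names, using PyNaCl for illustration rather than anything a real network ships: encrypt each post once with a throwaway key, wrap that key for every friend, and the operator only ever stores ciphertext. Key distribution, revocation, and the broadcast case are the actual hard parts and are hand-waved away here.)

    # Hypothetical sketch: friends-only posts where the operator never sees plaintext.
    import nacl.utils
    from nacl.secret import SecretBox
    from nacl.public import PrivateKey, SealedBox

    # Pretend each friend already has a keypair (distributing and verifying
    # these keys is most of the hard part in practice).
    friends = {"mom": PrivateKey.generate(), "old_friend": PrivateKey.generate()}

    def publish(post: bytes):
        post_key = nacl.utils.random(SecretBox.KEY_SIZE)          # one-off key for this post
        ciphertext = SecretBox(post_key).encrypt(post)             # all the network ever stores
        wrapped = {name: SealedBox(key.public_key).encrypt(post_key)
                   for name, key in friends.items()}               # per-friend wrapped post key
        return ciphertext, wrapped

    def read(name, ciphertext, wrapped):
        post_key = SealedBox(friends[name]).decrypt(wrapped[name])
        return SecretBox(post_key).decrypt(ciphertext)

    ct, keys = publish(b"photo of the kids at the beach")
    print(read("mom", ct, keys))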


You don't need to worry, Google's already doing it:

And destroying your digital life if you've put too much stuff in their care. With the characteristic Google customer service of "go fuck yourself".

https://archive.ph/W41mf


> using Facebook for its original purpose of keeping up with old friends and family

Not that Facebook lets you do that any more. I was off of FB for a while and recently rejoined because my community and my kid's schools only use FB for communications now. I'm connected to a very small circle of actual friends and family, but I still get daily political memes in my feed (politics I vehemently disagree with as well) no matter how many times I try to block them.


The biggest spam I get is from Facebook ads themselves. I live in Switzerland and I am BOMBARDED with financial frauds, scams and Ponzi-like schemes served directly through their ads. I checked why in their system and the answer was "primary location: Switzerland, male, 25-35 years old". I tried to report and block them all, but Facebook support, if it replied at all, said it was all in line with their policies. So I deactivated my Facebook account and now use only Messenger.


Same. Not Switzerland but those fraudulent ads made me delete my account for the last time now.

Facebook is a hostile platform.


Yeah sorry, I meant something more like "if you look at the goal of the original use case of Facebook" not "if you as a user simply use some subset of the website". The greed to maintain and grow the valuation of a publicly-traded company--and thereby to optimize the entire thing for explicitly only maximal revenue (and thereby maximal engagement)--has so universally destroyed the dream of social networking that we simply don't actually have a large social network anymore.


Facebook was originally gated to just university students from specific universities, then they started to open it to everyone. It was basically a university bulletin board.


Yeah: I am actually old enough to have been waiting for Facebook to be supported at my University ;P. I remember it already spawning with the same functionality of, say, Friendster, and so it already was based on the "social network" concept of people having "walls" they posted to and the concept of "friends" that you were thereby keeping up with... which frankly isn't anything like a "bulletin board" (virtual or physical) and was much more like people posting stuff on the door of their dorm. It could be that it was a bit different, though, in the first handful of institutions before it got around to supporting mine.


You’re right, it was more like those small whiteboards everyone had on their door at my university. I remember having poke wars, too.


We don’t just need human moderation, we need due process. These companies control too much of our digital lives and make so much money from us, but have zero regard for us as soon as it’s inconvenient to investigate a case, because that would cut into their insane margins. I was permabanned from Twitter recently merely for getting the attention of some large influencers who disagreed with me (no actual rule was broken, but I received enough reports that my account was nuked).

I appealed multiple times but Twitter’s appeal process is a sham for us little people. I doubt any humans ever looked at my account.


They only control what we voluntarily give them. Social media controls nothing of importance in my life.


Emphasis 'we', not 'my'. These guys are already making shadow profiles out of info given by friends, corporations, etc. Not participating is also causing red flags in certain circles. Withstanding peer pressure is one thing, having your identity made up or flagged out of your control is another.

This is a slippery slope that should be tackled before it gets to that. The only people not affected indirectly are the people who will die without children or younger cohorts as friends.


> These guys are already making shadow profiles out of info given by friends, corporations, etc.

True, it's a shady practice indeed, but signing up and giving them even more information directly from the source is obviously far worse than the fraction of signals they can extract from your friends.

> Not participating is also causing red flags in certain circles.

There's no accounting for the peculiarities of social groups, you could say the same thing about refusing to smoke weed or drink alcohol, it doesn't mean those vices are vital, and social media is no different.


I don't think you fully grasp what the slope is sliding to. There are enough token anecdotes of companies doing background checks on social media and actively flagging individuals for having zero presence. We also have social credit score horror stories.

Your answer doesn't work anymore when lack of participation is considered wrong. We should be blocking that instead of assuming things will just work out forever as long as individuals guard their identity. Again, this goes beyond just standing up against peer pressure.


> There are enough token anecdotes of companies doing background checks on social media and actively flagging individuals for having zero presence.

I'm sure it happens, but I don't believe that to be a real issue since using the presence of a social media account as a filtering tool for hiring is obviously ridiculous, and as someone who has done a lot of hiring, it's completely absurd to imagine we'd ever turn away a good candidate because their name didn't hit on a social media search, especially because it's very common for people to use nicknames or false names on social media or to completely remove their account from search altogether.

I also don't see the peer pressure thing as an issue. Adults don't meaningfully peer pressure other adults to use social media, nobody cares, and kids will peer pressure for everything from video games to sex and drugs, but it's pretty obvious that being peer pressured to do drugs isn't a valid reason to use drugs.


I'm not sure why you circle back to peer pressure when we agree it isn't an issue. Are you reading past the comments?

>it's completely absurd to imagine we'd ever turn away a good candidate because their name didn't hit on a social media search,

Understand for a moment many of these people are not developers with well-established CVs. These are normal people working the bottom of the ladder where there are plenty of replacements, and the answer to being irreplaceable is effectively 'start becoming a prodigy, establish a network early or be lucky'. Often too late for them. Even that advice alone is insane for the yet-to-be-born given a virtually global mental health crisis.

Leaving things up to executives behaving in a sane manner has given us multiple global problems to deal with. I wouldn't count on their sanity to prevent another.


> I'm not sure why you circle back to peer pressure when we agree it isn't an issue.

You're the one bringing it up. You've mentioned peer pressure in all of your replies.

> Understand for a moment many of these people are not developers with well-established CVs

It doesn't matter the industry or the CV, the idea that the absence of a social media account factors into hiring decisions in any real way doesn't make sense.


I think there was a remote possibility of it being an issue 6 or 7 years ago. Since then, the problems with social media, and people preferring not to participate, have become such a trend that it is extremely unlikely to be a hiring issue these days.

However, in *some* corners of corporate HR, it's certainly been part of their scoring, at the peak of Facebook's popularity.


You haven't thought about peer pressure enough. It's got depths.

"You must do this thing" evolves into "you must support this thing" and then into "absence of support is equivalent to nonsupport (antisupport... whatever)".

And in social media this evolution is fast.


and this is where people must:

1: reach their arms around and feel the middle of their back
2: arch a bit back and forth
3: realize they have a spine
4: say "i dont give a shit if you are on facebook or twatter or whatever, I am not. Deal with it."


I hate social media, and use it as little as possible, but can't figure out how to do what you are saying here in practice without total social isolation in the real world outside of social media.

For example, there are several sports I participate in (physically, in real life) but these are organized on either Instagram or Facebook. I have created accounts solely to access this information (date/time of events). Facebook and Instagram are constantly disabling and blocking my accounts, apparently because my low engagement (zero posts, only "lurking") triggers some sort of bot detector algorithm. I have no recourse, and can't contact anyone at Meta about this.

I've tried getting these communities to inform me outside of facebook/instagram, but it's too big of an ask. These mediums work for everyone else except me, and the people involved lack the tech savvy or interest in trying to find an alternative.


You are being punished for not "engaging" enough. Wow. I mean I imagined. So it really is a thing.

Like that black mirror episode where you aren't allowed to close your eyes when there's a commercial on.


Unless you don't interact with anyone, that's a completely untenable position due to network effects. eg My family (spread around the world) uses WhatsApp for communicating with each other, which I'm not much of a fan of but it's pretty much impossible to get them over to another platform given that I don't live anywhere near them anymore to teach them and they have to use it anyway for communicating within their residential community etc.

Sure technically I'm not being coerced into using WhatsApp, but it isn't exactly reasonable to say that if I really cared I would just not talk to my family until they figure out how to use a platform I prefer.


WhatsApp isn't social media, it's a messaging app, but more critically it's based on phone numbers, so WhatsApp really has no control over access to your contacts.

> Unless you don't interact with anyone, that's a completely untenable position due to network effects

It doesn't have to be this way though. Between sms, email, telegram, signal, and discord I have communication channels to every person I actually care about, and it's trivial to bridge additional layers of communication if needed.

> it's pretty much impossible to get them over to another platform

I hear where you're coming from, but in my view this is an intentionally defeatist attitude. We're throwing up our hands and saying "social media owns us and there's nothing we can do, it's just too hard to install another app". In reality, if it's important, it's not that hard. There's no disputing that social media is convenient, but it isn't vital.


I am aware that WhatsApp isn't social media, it's just an example of an app I would ideally like to switch away from.

I'm not really sure how it's defeatist when trying even to only get myself off the app would require making things much harder for my relatively tech illiterate parents on the other side of the world with no tech literate relative to lean on to help them out. With Whatsapp they've used it for a few years and can easily get help from any young neighbor in case of issues.

It isn't like I'm not trying. For example, for communicating with some close fairly tech literate friends, we go through a relatively big effort to host and maintain our ideal of a self-hosted Matrix and Misskey node. But there we can manage it due to everyone in the group being able to at least describe the errors they run into.


I haven't used Facebook or Twitter in well over a decade.

Nothing is more freeing than not having to put up with entitled, whiny idiots who think they are the moral authority on pretty much everything in the world. Especially since most of them couldn't accurately point to a country that isn't America on a map.


Yet, it has a huge influence on spreading certain ideologies. The effect does not have to be direct.


These companies control too much of your digital life. I do just fine without having a Facebook, twitter, instagram, gmail, etc…

The only company which truly has me by the balls is Apple. There isn’t much getting around it however because you are either going to get boned by Google or boned by Apple, and contrary to social media accounts I think that cell phones are essential.


It's similar on Reddit and, yes, even HN. The mere fact that you were "flagged" (by one of your peers) means that you are, to a significant degree, flagworthy and thus justifiably treated like a criminal.

Maybe these people doing the flagging are special people who have proven their worth. I dunno.


> Before I appealed they were simply going to not show the picture.

I wouldn't be surprised if Facebook didn't just "not show the picture" and they actually forwarded your info to police or some three letter agency. You're probably flagged already. Probably being watched more closely as a result of your "perversion" and clearing up the matter with Facebook may not matter to anyone else they shared that particular data point with.

I agree you should get the hell away from facebook, but this kind of thing won't stop there. Apple wants to scan your personal files and so does Microsoft (last I checked Windows 10 already records and sends to MS the filename of every image you open in their "Photos" app and how long you spent looking at it). When your OS is acting against you, or even your cameras there won't be anything you can really stop using.


I sometimes wonder what would happen (it won't) if some fed-up Congressperson drafted a bill that would allow folks to sue Facebook, etc. for libel when it accused you of being a ped0, criminal, or other malcontent. Perhaps they would be a bit less happy to accuse their users of vile things.



More in the way of a specific law. I don't think a full Sec 230 repeal is a good idea. I really, really want services to have to specify what you did wrong, and you should have some actual options to deal with these services. They have become utilities and need to have some accountability when they call you something vile.


https://www.theregister.com/2022/09/16/texas_social_media_la...

Depending on what the Supreme Court decides this might be less far away than you think.


> Except I'm terrified to imagine a world where Meta might have called police on me based on the output of a neural network image classification.

Don't worry, Google's already doing it. And destroying your digital life if you've put too much stuff in their care. With the characteristic Google customer service of "go fuck yourself".

https://archive.ph/W41mf


"We" don't need to retreat back into smaller forums. Despite the huge number of false positives flagged by content review systems, they still impact less than 1% of active users. So everyone else will continue using it. Facebook is terrible and unethical in many ways but it's still the fastest, most convenient way to share pictures and updates with friends and family scattered across the world. I don't have enough hours in my day to pursue other options.


I’ve flagged 100s of illegal firearms sales on Reddit and FB and not a single one has been taken down.


Perhaps, but so what? At that scale there will be huge numbers of both false positives and false negatives in any content moderation system. If you dig into any popular online classified sales site you can find some illegal items.

Criminal activity is quite a different thing than censoring legal content which possibly violates corporate terms of service. If you have evidence of an actual crime then you should report that to law enforcement instead of expecting a private for-profit company to handle the incident.

And absent further hard evidence, I am frankly skeptical of your claim. Most people aren't experts on the nuances of firearms sales laws in various jurisdictions, so a post that appears to be soliciting a crime might be entirely legal (or vice versa). I don't really use Reddit, but I've been on Facebook for years and have never seen a post for an illegal firearms sale. Do you at least have some screen snapshots?


It reminds me of the difference between hunting and killing your own game, and factory farming. Something sinister emerges at those scales.


That's life under the Techiban


In Science We Trust.


The problem is that no automatic method we have now will catch context. There is a difference between "Jews were murdered by Germans during the war" and "Jews destroyed German economy before the war" that will not be recognized by any machine we have nowadays. The first is true, the second is bullshit; how can an algorithm know this?
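To make the failure mode concrete, here is a toy sketch of the kind of keyword scoring people imagine these systems doing (pure illustration; nobody outside Facebook knows what their classifier actually looks like):

    # Toy keyword flagger: it only counts trigger words, so the true statement
    # and the antisemitic lie below get exactly the same score.
    TRIGGER_WORDS = {"jews", "german", "germans", "murdered", "destroyed", "war"}

    def flag_score(post: str) -> int:
        words = {w.strip(".,!?").lower() for w in post.split()}
        return len(words & TRIGGER_WORDS)

    print(flag_score("Jews were murdered by Germans during the war"))   # 4
    print(flag_score("Jews destroyed German economy before the war"))   # 4

Real systems are fancier than a word list, of course, but the failure is the same: the signal is in the context, and the context is exactly what gets thrown away.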

What's more, even if Facebook employs humans to do moderation, for some contractor from Asia "Jews destroyed German economy before the war" might still not be easy to verify. For a contractor, maybe the Jews did that, who cares, I have 20 seconds to moderate this and move on to the next post. It would be the same if I were asked to moderate some historical details about the India-Pakistan conflict, or other historical facts about Asia or Africa that I have no knowledge of at all.

I was reading quite a lot recently about the war in Angola and I still have doubts about which side was "good" and which was "bad", except that the people who lived there were hurt by history like almost no other nation.

Even worse: some facts, especially historical ones, are judged from different perspectives. For instance, in Poland Napoleon Bonaparte is a mythologized figure who brought hope to Polish hearts of getting back their homeland [1]. From the point of view of someone from Austria or Italy, well, Napoleon is considered far from a hero.

We don't have to go back that far in history. The US interventions in Afghanistan or Iraq can be seen differently depending on somebody's views.

How to moderate all this?

[1] Fun fact: not a big surprise that Napoleon didn't give a crap about Poland, he even refused to give them a proper King in the short time he could (he chose some Saxon prince). At the end, when Napoleon lost, remains of Polish military units were sent to Haiti to help France to maintain their colonies. Many Polish soldiers died from tropical illnesses there, many joined Haitians as they saw that those people were fighting for their freedom like Poles were.


You are presupposing that for some reason Facebook must do this. If they can not moderate then their service is defective. The fact that they want to make huge gobs of money does not “force” them to offer a defective service, they can just not offer it.

If a construction company said: “The only projects we can make a profit on are skyscrapers, but we do not know how to make a skyscraper without having it fall down and kill everybody in it.” They are not allowed to build skyscrapers no matter how important it is to their bottom line.


> If they can not moderate then their service is defective

Well wait a minute, couldn't you say the same about the ISPs that host websites in the first place? Isn't the standard pro-big-tech-censorship position "if you don't like it, make your own website"? If Facebook has to moderate content (according to any standard) in order to exists, why don't hosting providers also have to moderate? (FWIW I'm anti-censorship)


I am responding to the article and the poster’s response in context.

The article asserts that Facebook’s moderation is defective. The person I was responding to presented a standard generic argument that is of the form: “Yes, it is defective, but the problem is too hard for anybody to produce a non-defective solution. Therefore, the provider has no choice but to provide a defective service.” I am arguing that is untrue. If a service is defective, it can and should just not be offered.

Note this is entirely contingent on the service being defective according to your value system. I have made no claim as to whether or not I agree with the specifics here, just that the generic conditional argument presented is flawed.


The argument that you’re making is extremely flawed.

It is similar to: car manufacturers can't guarantee that their cars won't kill people, therefore their products are flawed and shouldn't be sold. In this case, the user is held liable for ensuring that it is safely operated.

By your logic, we would stop building roads or ask car manufacturers to stop selling cars because people cause accidents that kill other people.

The condition “service providers must moderate content and adjudicate disputes” is what’s flawed.


No, the argument I am making is:

The Rolling Stone thinks Facebook is providing a defective service (i.e. a service that is net harmful). If you agree with that contention, then you should also agree that Facebook should not offer that service. The comment I was responding to was making the generic argument that: “The problem is too hard. Nobody can make a non-defective solution. Therefore, the provider has no choice except to provide a net harmful service.” That is a flawed argument.

You may also disagree with the premise: “Facebook is providing a net harmful service”, but that is independent of the invalidity of the argument presented which assumed it was providing a net harmful service, but they should be allowed to do so anyways due to the argument presented.


No, the argument makes sense. If Ford can't guarantee the Pinto provides an acceptable degree of safety, then the Pinto is dangerous and shouldn't be sold.

The previous poster didn't say all ISPs or all social networks should be banned. They were simply talking about Facebook.


I think the contention is that hosting is a service. Just as you are suggesting Facebook is offering a defective service (that you said should not be offered), then so are hosting providers as they similarly can't moderate granularly.


Because Facebook and the hosting provider are at different levels of the networking stack?


I’m guessing that you didn’t hear about Cloudflare and Kiwi Farms?

There is no stopping.


> I was reading quite a lot recently about the war in Angola and I still have doubts about which side was "good" and which was "bad", except that the people who lived there were hurt by history like almost no other nation.

Norm MacDonald (I didn't even know he was sick) is said to have made the following quote, which is an interesting filter to look at all you know about history through.

"It says here in this history book that; luckily, the good guys have won every single time. What are the odds?"

> will not be recognized by any machine we have nowadays. The first is true, the second is bullshit; how can an algorithm know this?

Forget algorithms for a second, even humans can't adequately judge nuanced issues, particularly issues that they're unfamiliar with and lack the context around, and particularly with a definitive time limit to work against.

Now think about how social media giants operate. They have teams all over the world, say in Bangalore, trying to judge the nuanced political arguments of foreigners having discussions about their own country's history that they don't know intimately. Oh, and they probably have to judge most issues in less than 30 seconds or they'll be too slow to keep their job.

It's like asking an average American to intelligently weigh in on some complicated political argument around Kashmir with a few seconds to read a post and decide whose claim is right. It's absolutely ridiculous that this is the moderation standard that exists.


> It says here in this history book that, luckily, the good guys have won every single time.

Is that really true?

The conquest of the Americas is near-universally seen as "evil" defeating at least "innocent" if not "good". Leaving aside the people who say "The Aztecs had it coming".

The Roman empire did some pretty shitty things, that most people would recognize as evil (slavery, Celtic genocide) but is still regarded warmly today as the ancestor of modern Western society, morality, and culture. That counts as a "win".


> Is that really true?

That's sort of the joke. It's another way of saying "the victors write the history books"


I understand the joke and I know the victors write the history books, by virtue of being alive. But they don't always make themselves look like the good guys in those history books.


> they don't always make themselves look like the good guys in those history books

I think judging past history reasonably from our current perspective isn't quite so easy.

From the perspective of today, virtually every human that ever lived in the past had views that could be considered some kind of racist, sexist, homophobe, religious extremist, etc even if they were very decent humans by the standards of their day. Even the great abolitionists, philosophers, people considered to be saints, or other humans that tried to be wholesome in their time likely had some views that would be considered totally repugnant to many today or committed actions that were considered reasonable then, but akin to war crimes now.

From the perspective of 100 or 200 years from now, I'm sure everybody living today around 2022 will be considered to have committed gross and obvious crimes against decent human morality and will be considered to have had totally backwards thoughts on something or another. I'd hope proper context is taken into account when they look back at us, so I think it's fair to try and do the same when we judge the past.


Yes, we are mostly all meat-eating, insect-killing, pet-owning monsters.


>But they don't always make themselves look like the good guys in those history books.

Of course they do. Do you have any examples to the contrary?

Many times, their distant descendants look back and realize their actions were poor, and revise the history books, but that usually takes generations at least.


Far from all Native tribes were "innocent". Many were simply violent assholes, if not Aztec mass sacrifice grade.


> Many were simply violent assholes

More violent than the Europeans that conquered them?

And I think we already covered the Aztecs. It's not like the Europeans treated non-Aztecs any differently though.


Furthermore, you can publish "The effects of the Jewish population on the 1930s German economy" which can be either a genuine bona-fide analysis, or something which essentially boils down to "Jews destroyed German economy before the war".

I think the "reddit model" where you have smaller communities with community mods works much better than the Facebook or Twitter model where there's one "global community". Not that reddit's moderation is perfect or that you can 100% rely on community mods, but overall, it seems to work much better.


Reddit has a very bad antisemitism problem. It's a cesspool.

By no means is reddit 'the model'. In practically any sub except the explicitly Jewish ones, I will find an avalanche of antisemitism on any post that touches on Judaism/Israel/Jews.


It's bad now, and only getting worse. The moderators of top subreddits (like PublicFreakout) are openly in favor of marginalizing Jews, they'll just use the word Zionist instead. You can report a comment like "Jews don't deserve to live" and Reddit will automatically respond within a few hours saying the comment didn't violate their content policy. You can visit the subreddit AntisemitismInReddit for hundreds more examples.


And Facebook is fine with "Russians don't deserve to live" comments.


Reddit has a bad anti-everything problem.

That place is a cancer.


The Reddit model allows paid foreign agents to become volunteer community (subreddit) moderators and then use that platform to sow division or push a biased narrative. How much do you think the Chinese government would pay to subtly emphasize or de-emphasize certain stories on a huge community like r/news or r/politics?


Ten cents.


> Furthermore, you can publish "The effects of the Jewish population on the 1930s German economy" which can be either a genuine bona-fide analysis, or something which essentially boils down to "Jews destroyed German economy before the war".

When I was in college, I was looking around an FTP server and found a holocaust denier book. So I read it. Well, more moderately skimmed (not lightly).

It was exactly like this - it purported to be a sober view of history, well cited and no name calling. Literally none of the references were to anything real; it was all fabricated bullshit trying to push the reader to a particular conclusion (Jews are bad).

Not being able to tell the difference is entirely the point. These bastards are sneaky.


> "...it purported to be a sober view of history, well cited and no name calling. Literally none of the references were to anything real; it was all fabricated bullshit trying to push the reader to a particular conclusion (Jews are bad)."

But the problem is that censorship assumes that you are too stupid to come to that conclusion yourself and must be protected from the "misinformation".

It also robs the marketplace of the ability to hear the legitimate criticisms and the opportunities to expose said bullshit.


Companies have the right to handle this issue however they wish. Generations of politicians have ensured it, at least in the US. That's just how it is.

With regards to:

> But the problem is that censorship assumes that you are too stupid to come to that conclusion yourself and must be protected from the "misinformation".

> It also robs the marketplace of the ability to hear the legitimate criticisms and the opportunities to expose said bullshit.

Do you think the public on the first count, and the marketplace on the second count are doing a particularly admirable job here? Because, I don't. And that failure comes in no small part because of other vested interests who prop up said bullshit, because they see an opportunity to profit and gain more influence from it. How do you propose we address that?


> "Companies have the right to handle this issue however they wish."

But that's precisely the problem these days. We have seen recently that the government has been colluding with the tech companies to censor content that they deem "misinformation" and individuals whom they deem "problematic". It's no longer just about a private company deciding what they will or will not allow on their platform.

https://nypost.com/2022/09/01/white-house-big-tech-colluded-...

> "Do you think the public on the first count, and the marketplace on the second count are doing a particularly admirable job here?"

It's not about whether the "group" is coming to the decision or conclusion that you or I would like them to reach. It's about whether we, as individuals, have the right to view the totality of the evidence and arguments and then make up our own minds.

Some people see through the bullshit and some don't. That's the nature of humanity. And, to be fair, what you and I might think is bullshit might turn out to actually be completely the opposite. We're not infallible. But the fact that any of us might make mistakes in judgement or analysis is not a valid reason to prevent unpopular or unverified (or however you want to label it) information from being disseminated or accessed.


Nobody has a right to see all information in the world. Free speech is an ideal, not a right.

The first amendment just ensures the government cannot punish you for what you say.

If you are arguing for implementing that ideal, well and good. I disagree that in our current media ecosystem that that is a good idea. But I cannot fault you for advocating for an ideal.

If you are suggesting the public has a right to see all content on a platform without any moderation, you are wrong. No such right is enshrined in US law.


Reddit literally had /r/Holocaust controlled by Nazis who would post Holocaust denial on it for years, until reddit got too embarrassed.


Seconding reddit being very extremely, explicitly antisemitic. It's really just /pol/ with slightly bigger words much of the time.


You miss his point completely. Whatever problems "reddit" has, they're limited to small communities. As much as I think reddit has a huge left bias, there are huge, huge numbers of right leaning communities as well.

I don't read /pol, /publicfreakout or any of these other communities and that means I am completely unaffected by whatever nonsense they have going on.


Sure, minority issues are often limited to minority communities. I don't read r/publicfreakout either, but as a moderator of r/Jewish I can see the impact it and many other subs have on our community. You have your standard malicious crossposting and trolls, which we have good enough ways to deal with. Antisemitism from other subs leaks and grows and we often get brigades of intactivists, conspiracy theorists, BHI-sympathizers, you name it. Reddit's new Crowd Control system helps but it's not perfect. Good luck if anything happens in Israel (which it frequently does), you may as well just shut the sub down for a day.

Reddit shuts down other kinds of hate, the double standard is glaring. The fact that it doesn't impact you personally is so not the point.


If anything you’re understating the problem.

The problem with “Jews destroyed German economy before the war” is that it’s extremely vague and difficult to verify. There’s no good basis for the claim, but it’s not even a historical detail that could be easily verified; it’s more of an overarching theoretical opinion.

As for “Jews were murdered by Germans during the war”, sure. That’s an extremely well documented fact. They were also murdered by Romanians, Lithuanians, Bulgarians, Hungarians, Ukrainians, and other collaborators, but in context, we know that the Germans were organizing the whole thing. We also know that there’s a different context today in 2022 where some people might want to emphasize and others might want to minimize the complicity of Ukrainian collaborators.


>> As for “Jews were murdered by Germans during the war”, sure. That’s an extremely well documented fact.

It’s sad that you’re blind to the fact that that statement is just as ‘racist’ and untrue as the original.

Jews were not murdered by Germans.

Rather, Some Jews were murdered by Some members of a political party that was primarily, but not exclusively, German.

The overwhelming majority of Germans never murdered anyone.


You're attacking a strawman. Not only is a generous reading both entirely true and even compatible with your "corrections", the poster went on to add nuance such that it doesn't require any generosity whatsoever to read it that way.


Surely, nobody in Dachau knew what had been happening on a daily basis in one of the very first concentration camps, right nearby. It was established a few years before the Second World War. Buchenwald, Mauthausen, Gross-Rosen (nominally a work camp, but no less lethal than a concentration camp, and on German territory before the Second World War), all German, on German soil, constructed by German people.


> At the end, when Napoleon lost, remains of Polish military units were sent to Haiti to help France to maintain their colonies.

Just a minor correction, the Polish were sent to Haiti in 1802, way before Napoleon had started losing.


Yeah, Haiti broke away in a slave rebellion before Napoleon sold Louisiana to the United States to raise money for the war effort so it wouldn’t make any sense for the Polish to be sent to Haiti at the end of the war.


> The problem is that no automatic method we have now will catch context.

I disagree. The problem is that nobody is willing to be realistic about the limitations of automated moderation and proceed accordingly.

If we can't create an automatic method that catches context, the solution isn't to bemoan that AI can't magically do what we want. The solution is to remove the rules that require AI to understand context in the first place, because it is fundamentally outside of our technical ability, and any attempt to achieve it will fail.

The problem is people who think that context-based censorship is reasonable for a massive platform. It simply is not. It is reasonable at an individual level. It is reasonable at an interpersonal level. It's even reasonable at a small-group level, where specific individual human beings who are invested in the community can be aware of these context issues.

It is not reasonable at Facebook scale, full stop. Facebook should not be in the business of deciding to ban things like this. That is a responsibility that belongs at a lower level. What does that look like in practice?

If an individual posted it on their wall:

* That individual uses their judgement and chooses to post it or not

* The people who see it use their judgement and click the block button if they don't like it

If an individual posted it in a small group:

* The group can socially police such actions by commenting that they are upset by it

* The group's administrators can privately reach out to the person who posted it, explain that they can't post such things in that group, explain why, and explain what actions they could take to remain in good graces

* The group's administration can make a judgement call and remove the post, not on the basis of crude keyword detection, but on the basis of human understanding

If an individual posted in a large group:

* The large group can adopt clear and unambiguous rules that do not require context to administer, and enforce them accordingly on a case-by-case basis

* The large group can pre-commit to not dealing with such issues, and require their members to deal with it privately, like human beings

Trying to automate this process will always fail, and it will cause massive false-positive and false-negative issues as it does so. Engineers used to understand these concepts when I first entered industry 20 years ago. It's very disappointing to me that they either can't or won't now.


The obvious legal response is/should be: don't make an algorithm take precedence if it is _that_ broken (incorrect, ineffective and unfair).

Even from a product point of view, that's basic: your product feature doesn't pass quality, you don't ship it.


From my (very) limited understanding the Angolan Civil War had no good sides.

It was a plain power struggle where two superpowers decided to invest their resources to deny the other a base in Africa. The local proxies were fine with this since they wanted nothing but to kill each other. The result was a bloodbath that went on for decades.


Provide services that force people to have skin in the game (non-anonymous and/or paid). Don't connect the whole goddamn world together, the world is not a melting pot. Let those in the network flag/block/defend/vouch so it aligns with whatever culture is on that particular network.


Maybe a slight tweak to this: "the world is a melting pot, it just melts very slowly :)"


The world is a lava lamp. The blobs coalesce and shift around but never for too long in the same spot.


It's gotta be crowdsourced. Like reddit/hn voting. But smarter.

Like voting that weights the value of peers' votes, and such.
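Something like this, maybe (all names and weights invented, just to show the shape of the idea):

    # Toy sketch of reputation-weighted voting: a vote counts for more when
    # the voter's own history is in good standing.
    reputation = {"alice": 4.0, "bob": 1.0, "troll": 0.1}

    def score(votes):
        # votes is a list of (user, direction) pairs, direction being +1 or -1
        return sum(direction * reputation.get(user, 1.0) for user, direction in votes)

    print(score([("alice", +1), ("troll", -1)]))  # 3.9: one trusted upvote outweighs a throwaway downvote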

I think it's the only way.

Otoh, the true Lord of the Flies might emerge that way. Maybe democracy is inherently flawed. I dunno. Experiments are called for.

How do we test social media designs?


Slashdot used random moderation (and meta-moderation) and I feel like that worked out pretty well. If I didn’t want to see goatse, I could set browsing to +2 and not have to worry.
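Roughly (a toy version with invented scores, not Slashdot's actual code):

    # Threshold browsing: every comment carries a moderation score and the
    # reader simply hides anything below their chosen threshold.
    comments = [("insightful post", 5), ("goatse link", -1), ("ok comment", 2)]

    def browse(comments, threshold=2):
        return [text for text, score in comments if score >= threshold]

    print(browse(comments))  # ['insightful post', 'ok comment']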


Compositionality issues in AI


Easy: stop moderating. Just allow everything.


"Crime sure is hard to stop."

"Well, just make everything legal."


Allow legal things. Block illegal things.

It seems crazy that it's impossible to find a major platform that has this policy.


Pornography is legal.

If Facebook allowed pornography, it would quickly overwhelm the platform due to engagement metrics.

It would make the platform unusable.

Vile hate speech? Completely legal.

But its mere presence would turn away huge numbers of users.

It would make the platform unattractive and hurt the business.

It is in the platform's best interest to block otherwise legal things.


> If Facebook allowed pornography, it would quickly overwhelm the platform due to engagement metrics.

You're arguing with hypotheticals even though real world examples exist.

Reddit has lots of porn and it's nowhere to be seen on the frontpage.

Allowing something doesn't mean you shouldn't classify it and filter it.


I specifically mentioned Facebook because of how Facebook's news feed algorithm works. It's engagement based, and those algorithms are easy to saturate to specific types of content.

Reddit has a fundamentally different approach to front-page content and discovery, and moderation in general.

But even on Reddit, which is fairly permissive, they have hard boundaries which have little, if anything, to do with the "law".


But this isn't true. Social media was pretty much a free for all (except for porn) until 2015ish and it was growing rapidly the whole time. The whole argument that this type of content will drive away users is completely contradicted by history.


Social media was not a free for all before 2015. I … don't know why you think that is true.

This story is from 2013: https://www.pbs.org/newshour/classroom/2013/06/facebook-and-...

Maybe super early it was a free for all, but it had a lot less users then. The impacts mattered less.

These interventions became important as the companies grew, and needed to attract the largest possible number of users.

There are places with VERY open content policies. You can join them today.

Those places attract a niche audience and I'd wager always will.


Seems like a good way to make Facebook much, much worse.

It's not hard to see why no major platform has such a policy.

Also I don't want people to get blocked for pirating stuff. Or weed.


> "Crime sure is hard to stop."

> "Well, just make everything legal."

It's a bit of a straw man to say that GP meant we shouldn't enforce the law in the real world.

GP wasn't very clear about what he meant, but presumably he's referring to not doing excessive moderation and instead relying on what the law mandates.

More ~~laws~~rules, less justice. When you have a lot of internal policies, you're inevitably going to have ridiculous results such as this one. If you only follow the legal rules (which you have to) there's less unfairness.

(Of course, you have to have some internal policies such as not allowing spam, but the point is: the fewer onerous rules, the better.)


"Only worry about the law" is not a viable moderation strategy.


Well, but that is literally one of the arguments for legalizing drugs.

The issue here is more like defining what crime is, though, not stopping it. Stopping stuff on FB is easy. Figuring out what to stop is hard. Figuring out if a disputed case should have been stopped is hard. For the 'real world' we have parliaments and courts, but FB has only the equivalent of the police, not the other parts of the system. It is, in effect, a police state.


Yes, some things are hard and impossible to get right 100% of the time.


What about only stopping (moderating) crime, then?


I was just talking to an author who published an ad on Facebook for their sci-fi book. They had the word "beat" in the ad. The FB algo said that word instills violence in people, banned her account for 2 months, and kept all the ad money.

Funny enough, her book is about the dangers of an algorithm-based AI supersystem...

Facebook is the worst.


I realize this was a weird time and masks have been a complicated topic. However:

I advertised some masks my grandma had sewn on FB. It was ok, expensive clicks like always, but ok. This was running for at least a week or two. But when I changed some text it triggered a new ad review, and within an hour my account was banned. It didn't even take an hour to deny my appeal as well, and my account was lost forever (without ever naming any reason).

This account had 50+ (harmless) groups with way over 100k followers as well as a few thousand dollars in ad spend just that year. I didn't socialize at all on Facebook which I guess made my account fishy to some degree but all data was correct and passport verified.

This is the reason I completely ignore Facebook and all its platforms for whatever reason these days. I don't care if it could help my business, it's not worth the trouble.


> This is the reason I completely ignore Facebook and all its platforms for whatever reason these days. I don't care if it could help my business, it's not worth the trouble.

That's basically my stance since 2004 when I first heard about them. I lost some opportunities, I guess, but it also allowed me to tell everyone how I feel about data privacy while keeping a straight face.


> it also allowed me to tell everyone how I feel about data privacy while keeping a straight face.

Same here. I not only don't condemn people for using Facebook, I almost always defend their use of Facebook. But since I mean what I say when I'm talking about the site, I haven't had an account in at least a decade and I block the domains. I dislike the site, not its users.


In my experience, Meta doesn't care about you unless you have a multi-million budget or you're buying through an agency that has the right connections.


I think we have even seen one or two examples here on HN of multimillion dollar accounts which hit inexplicable walls suddenly.

I don’t know if any one customer has enough money to get proper first class human service with Facebook.


Did you ever consider starting the arbitration process with Facebook?


I am not aware that is a thing. My googling back then led me to believe that there is nothing after the appeal.

But honestly I don't care anymore. Their ad platform barely worked for me and their numbers didn't match my log numbers for years before this happened.

I haven't had any other use for the platform either way.


It's fearsome how they filter such words without looking at the context or giving the benefit of the doubt. Now, you have to proactively reword your ideas to make them fit the invisible mold. How has newspeak been going for the public discourse?


Double thumbs up good.


This reminds me of the short story "Computers Don't Argue" by Gordon D Dickson. The format is a series of correspondence between individuals regarding a book. The plot starts with a man attempting to deal with the situation of having been mailed the wrong book by an online retailer. It then proceeds to take a turn into the dark.

https://www.atariarchives.org/bcc2/showpage.php?page=133


Yeesh, that's horrifying. Written 1965?!?!? Prophetic.

I once had a similar deal when I lived in a house with several guys for a year in college. We divvied up the utilities, and I was in charge of the phone bill. I set up automatic payment, and never had a problem. But 5 years later, the phone company accused me of skipping my June payment, and carefully deducting that amount from the "total owed" amount in every subsequent month. It's insane, and every human I spoke to agreed with me, but no one had the power to do anything about it. I could either travel cross country to go to court, with no evidence other than common sense, or just pay the $30 bill plus another $40 in penalties.


The weirdest part is that the email message says they reviewed and upheld the decision.

Is the review not human? Do they run it through the same algorithm a second time?

Do they lie about the review?

Is the reviewer so out of touch they don’t recognize real violence vs just the use of “beat?”


>Is the review not human? Do they run it through the same algorithm a second time? Do they lie about the review?

Yes, yes, and yes. There's a context-illiterate AI deciding what you can say or not, even if the latter heavily depends on context.

And just like it'll trigger a bazillion of false positives, it'll also trigger a bazillion of false negatives; someone can easily spread hate through Facebook, unimpeded, by simply encoding language in a way that the algorithm doesn't understand, but humans do; or with simple irony.


I think the algorithms are just minimum standard efforts which provide enough plausible deniability for FB to be able to argue that they provide safeguards on bad content.


Pretty much.

People don't often realise, but users are a resource just like any other. And if a resource is abundant, you'll often sacrifice a bit of that for the sake of something else.

So... sure, bad content (false negatives) and being handled unfairly (false positives) sours users, prompting them to leave the platform. And both things are bound to happen, if you put an algorithm reviewing stuff instead of a person. But since there are so many users, Facebook is better off doing this crap than actually spending money by hiring enough people to actually review the reports, checking the context to decide if they apply or not.

And you'll see other businesses doing exactly the same decision; that's why, for example, Reddit became such a shithole, and people outright complain about the Anti-Evil Operations all the time. Twitter is likely the same, dunno. And it shows that we cannot and we should not rely on mass social media, we should be sticking to smaller alternatives.[/soapbox]


The human review is probably someone who doesn't understand the language verifying that a word is present for fractional cents on Mechanical Turk or something.


It could well be that they are paid very little, the majority of reports are true positives, they have a quota to make, and they expect no consequence for false positives. If it's outsourced they may have a bad grasp of the language as well.


> no consequence for false positives

This could be the key point. Penalties for letting something bad slip past, but no penalties for falsely flagging. It’s a pragmatic solution, but it is almost by definition inhuman.


Algorithm-based AI supersystem proactively defends itself.


How come they get to keep money for the service not provided? Should be strictly illegal.


It probably is, but not all jurisdictions have a small claims court, and even in those that do, it can be quite cumbersome compared to the money lost.


Small claims are designed to be quite easy, and almost free (you might have to spend money on registered mail). Worth it for a gripe. I was prepared to do this for a hotel refund, where they were going to charge a 20% cancellation fee, when I saw that under local law, while there is no set maximum fee, it should just cover reasonable costs.

I looked into it, and while it's a hassle, it's on par with renewing your car insurance (so some hassle, but doable; a "side project"). And worth it for the "stick it to the man" factor. In the end I got almost all of it back by being nice, so there was no need.

In addition to small claims, there are credit card charge backs.


And if you do it, Facebook will probably retaliate by permanently closing your account.


In the case in question, Facebook began by permanently closing the account in a capricious and unprovoked attack. Retaliation is irrelevant: Facebook is an aggressor.


I'd really like to see this. And after that I'd like to see a jury hit Facebook with 12 figures in punitive damages in a class action case.


There has been one 12-figure judgement in history, and it was against Big Tobacco for lying about the danger of a product which has killed millions of people over decades.

In this situation there would be no judgement, because there is no right to have a Facebook account; they can close your account because they don't like your face, or indeed because you sued them. Why would you imagine that a jury would basically award you all of Facebook's money because they closed your account? "Yes sir, Mr. So-and-so, they were clearly jerks. I award you Facebook; now try not to be as big a jerk as Zuck, and good day to you!"


Doesn't FB have the right to not do business with you? What is the basis for the suit?

Anyway, I do remember FB's "a cool 'open source' way to do front-end development, feel 'free' to use it, you can't sue us though" licensing. So maybe there is some shitty clause like that when you sign up.


> What is the basis for the suit?

Withholding your money and then when you (rightfully) sue them for that, retaliating and kicking you off the platform entirely?


Was it not a clause in the react js or so, that you lose the right to use the framework if you sued Facebook?


React is MIT licensed, so no.


> Was it not a clause in the react js or so, that you lose the right to use the framework if you sued Facebook?

This is true (if we take "was" literally). Though IIRC it was only if you sued them over patents.

> React is MIT licensed

And this is also true (now).


Criminal cases do not go to small claims court. If you sue them in a civil case, yes, that could be small claims, but if you convince a prosecutor to charge them criminally, it is a completely different legal process.


You can probably go to arbitration, which you agreed to. This is what companies often require in their terms of service. It's going to be costly for one of the parties.


I really wish more of us used that system. The laws involved aren’t perfect but they’re better than trying to get a response from their non-existent customer support systems.

I believe that in many states the company that demanded arbitration has to bear the costs also.


Terms of service probably have a clause which says that if you violate their policies you forfeit your expenditures.


It is, and they don't. FB ads are charged in arrears, no money would have been kept for ads not displayed. The GP is misinformed (at best).


What was the full sentence? I've marketed a lot of games on Facebook with much worse words (kill etc) and never had a problem. Also, kept money how? Facebook charges you in arrears.


The details matter big time. There must be tons of ads with phrases like “Beat addiction”, “Beat the crowds”, or even anything having to do with music.


Unfortunately, on the internet the onus is on the defendant to prove innocence. I hardly buy the top comment as true, but everyone here eats it up since it fits their narrative.


Serious? It is entirely true, why would I share that otherwise? I've been on HN for a long time, see my background :)

Your account is totally blank and joined in 2021. You also submitted an article about rethinking app development at FB. In a comment you also post you work for a large company.

Do you work for Facebook?


I don’t work at facebook but based on your response I am more likely to believe you now. Sorry if it caused any angst, wasn’t intended.

However, it doesn't change the fact that a lot of things get accepted at face value if they align with someone's view of things.


No worries, I just was kinda stunned you would imply I had any motive to not share something accurate. It was a convo I had this morning with an author who is a friend.

(It is just one piece of data, and people are good at finding data that supports their viewpoint, of course. But given all the scandals FB is embroiled in and the problems described in this thread, I do think it supports a narrative of FB having massive problems around ethics and moral actions.)

Here is a fun story :)

I have an ad account at FB for a company I closed in early 2020 due to Covid.

I wanted to delete the account, but FB makes it impossible to do that. I message their support and they tell me they can remove it, but they need my ID and a handwritten letter. I am stunned. A handwritten letter??? How does that achieve anything :)

So I write out a short note with a bit of snark about a tech company needing a handwritten letter, take a picture and send it to their support chat/ticket along with my ID. They do not like that I was snarky and refuse to do anything, even after I remove some of my snark and resend it.

Thus, I still have the account and 14 support replies later they still refuse to help me.

(Note: they didn't want a letter sent to them via the post office; they literally wanted me to write one out and send them a picture of it. So weird...)


That's crazy to hear! It really feels like the Wild West with these large tech companies. Consider, in contrast, how much regulatory and compliance scrutiny a bank faces; a bank would never dream of behaving like petulant little kids. I guess the more fines big tech gets and the more regulation enters the space, the faster they will clean up their act.


Beats by Dre must have a difficult time.


They may well have meant that "beat", given Apple's pushback on their tracking.


I don't know anything at all about FB advertising, but is there any way someone could use canary accounts to roll out an ad to prevent this? I.e. start with a small ad buy in a low-follower/whatever account, wait 24 hours for it to be flagged, and if it doesn't trigger any reviews, roll it out to the main/real account?

Not that that's practical for a mom and pop shop.
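Purely as a sketch of that flow (Python, against an entirely invented ads client rather than Facebook's real Marketing API, so every name here is a placeholder), it might look something like:

  import time

  def staged_rollout(ads_client, creative, review_window_hours=24):
      # `ads_client`, `create_ad`, and `get_review_status` are invented
      # placeholders for illustration only; real API calls will differ.
      canary = ads_client.create_ad(account="canary-page", creative=creative,
                                    daily_budget=10)
      time.sleep(review_window_hours * 3600)    # wait out the review window
      if ads_client.get_review_status(canary) in ("flagged", "rejected"):
          return None                           # don't risk the main account
      return ads_client.create_ad(account="main-page", creative=creative,
                                  daily_budget=1000)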


Sounds like a good way for canary and main to get banned for some policy violation.


By the same twisted logic, the Beat Generation incites people to violence.

https://en.wikipedia.org/wiki/Beat_Generation


What about the Beatles?!


Beat less? Seems to promote peace.


But less is more


Les might not agree.


Would you share the link to the book? Genuinely interested in reading


Yep, it is 5 Stars by Louise Blackwick: https://www.amazon.com/5-Stars-Louise-Blackwick-ebook/dp/B09...

Really cool author out of the Netherlands, I highly recommend the book. It is dark sci fi, and she calls it "Neon Science Fiction". Which sums the story up nicely (dark and gritty with a flippant attitude)

I believe the line she got hit with was "Can You Beat The Neon God's Algorithm?"

She also did a list on my site about books that inspired her creation of neon science fiction (and how she defines that): https://shepherd.com/best-books/inspired-neon-science-fictio...

Let me know what you think if you read it! There was one scene in the book I can't get out of my head... ever...


Thanks, I'll give it a try. Interesting site, btw.


thanks, it's a really fun project! I launched it on HN in April 2021 :)

All the topics are hooked into Wikidata on the ML side, eventually I want to build knowledge graphs using the Wikidata info and play with historical timelines etc and see what I can do...

I am working on adding individual book pages and then genres and age group data is next.


If they report that money as advertising revenue, wouldn't FB be committing securities fraud?


> Funny enough, her book is about the dangers of an algorithm-based AI supersystem...

Skynet is here.


[flagged]


No, this is the result of the laziness of social networks, looking for automated solutions to solve, at scale (read: at the lowest cost), problems of their own making.

More generally, this is the result of giving free rein over global discourse to unregulated companies, who ultimately took full advantage and profiteered off of it.


[flagged]


Removing incitements to violence seems like a good thing, and I would guess their customers (the advertisers) fully support it. The fact that they do it so badly is entirely on them.


Is removing incitements to violence always a good thing? Apparently Facebook makes an exception when it comes to violence against the Russians invading Ukraine.

https://www.reuters.com/world/europe/exclusive-facebook-inst...

I do support the right of Ukrainians to defend their country. But I'm not comfortable with giving social media corporate employees the power to decide which violence is good and which is bad.


FB has literally unbanned groups that were considered nearly terrorists several years ago (Azov), the second they became anti-Putin.

Goes to show how grotesque those making the rules on Good and Bad are - I am sure we can find similar stuff regarding Saudi Arabia, which is pretty much a Daesh that succeeded at gaining power and keeping it.


[flagged]


Censorious ideology is a minor point compared to the overall picture. Haphazard content removal wouldn't be an issue if the space wasn't controlled by a handful of private companies that routinely abuse their power.


You needn't read or analyze everything, just the stuff that is reported; and since 90% of the problem comes from 1% of individuals, if you can keep the same old folks from coming back you will ultimately have less to process. The logical approach is that the kind of human moderation they actually need costs money, and you acquire said funds by charging monthly. This has the side effect of making it easy to permanently ban problem children via their address and/or method of payment. One need not accept prepaid cards either.
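As a minimal sketch of the "ban repeat offenders by their payment method" part (Python, with invented names and an in-memory set standing in for real storage, so purely illustrative):

  import hashlib

  banned_fingerprints = set()

  def payment_fingerprint(card_number, billing_address):
      # Hash rather than store raw payment details.
      raw = f"{card_number}|{billing_address}".encode()
      return hashlib.sha256(raw).hexdigest()

  def allow_signup(card_number, billing_address, is_prepaid):
      if is_prepaid:
          return False   # per the comment above: don't accept prepaid cards
      return payment_fingerprint(card_number, billing_address) not in banned_fingerprints

  def permaban(card_number, billing_address):
      banned_fingerprints.add(payment_fingerprint(card_number, billing_address))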


> the left, with its eternal desire to monitor what everyone says or even thinks.

Would you say that the NSA is a left wing organization then?


No, but the opposition that the left once mounted against the NSA's surveillance has withered in the past ten years or so, and (some) Republicans have picked up the slack. Not too long ago the "war on terror" was considered to be the road to authoritarianism; now Democrats are openly championing making it a new domestic war on terror. To those who opposed the war on terror from the beginning, it's scary how quickly this opposition was abandoned by some once they could be the ones to wield it.


[flagged]


Democracy isn't a king-of-the-hill match; if some loonies managed to take over the Capitol, they'd just get sieged out.

How do you honestly think this would go? They take the Capitol, Trump shows up and says "I'm the president for another term", and then everyone goes home and ignores the corpse of Mike Pence?


> How do you honestly think this would go?

They destroy the certifications of the electoral votes from the states (as almost happened[0]), causing various states to (disingenuously) disagree about how to replace the lost documents.

Enough FUD (and lawsuits, and delays) would be generated during this period of public disorientation that the Republican party could exploit the ambiguity of the Constitutional phrase "a majority of the whole number of Electors appointed"[1] and trigger the contingent election procedure described in the Twelfth Amendment.[2]

Since a majority of states at the time had Republican representation, they would have elected Trump and the Democrats would have not been able to stop them, even if Trump did eventually let them back into the Capitol.

[0] https://www.businessinsider.com/senate-aides-rescued-elector...

[1] https://en.wikipedia.org/wiki/Electoral_Count_Act#Majority_o...

[2] https://en.wikipedia.org/wiki/Contingent_election


So your thinking is that the Republican party, which by and large already disagreed with Trump (to the extent that a Republican was the alleged "target" of the riot), would side with him after his supporters murdered Mike Pence, said Republican?

You can dislike republicans all you want but this is just another level.


I wasn't supposing that the insurrectionists would succeed in murdering Pence, or that the entire Republican party would unanimously back Trump, just that enough Republican-run states would raise questions about the legitimacy of the (replacement) electoral certifications (which they might refuse to produce) in order to trigger the contingent election.

In such a circumstance, it's hard to imagine the state delegations of Republican states deciding to throw away a perfectly legal and constitutional opportunity to elect their guy to the presidency. If it makes you feel any better, I would expect Democrat delegations to vote for the Democrat candidate if there were ever a contingent election, even if that candidate didn't have a plurality of electoral votes (perhaps at least justifying their decision by pointing to the popular vote, which their candidate might nevertheless have won).


If the US is so fragile that a box of paper is somehow key to the stability of an entire empire, we're pretty much doomed.

Also your statement that "Since a majority of states at the time had Republican representation, they would have elected Trump" is laughable. It's well known that Trump asked multiple Republican governors to "find some votes" and they obviously did not do this. Even if someone is a demagogue that doesn't make them willing to commit a career ending felony. After all, being a demagogue got them a long way. Not by falsifying election documents.


None of them did this because it has an absolutely awful cost-benefit ratio. In the present scenario they are likely to have a bright political future, where the maximum downside is a political fizzle followed by a lucrative retirement, whereas the maximum downside of attempting to overthrow the government is death or prison.

Comparatively, playing for time, doing nothing, and then voting for Trump are all low-risk activities, which is why it is entirely believable that they could come to pass.


>Also your statement that "Since a majority of states at the time had Republican representation, they would have elected Trump" is laughable.

Actually, it's not. The way this works (I really hope you're not an American, because if you are you really should know this) is that if the counting of electoral votes is sufficiently disputed (in this counterfactual case, the vote certificates were destroyed), deciding the outcome of the election rests with the US House of Representatives.

If that were to happen (and it has, several times in US history), the members of the House would vote (on a state-by-state basis, not each representative voting individually) on who was to be the President.

Since a majority (26 or 27 out of 50, IIRC) of states have Republican majorities in the number of House members, a House vote would likely have gone Trump's way[0].

This is something of a peculiarity of the US Presidential electoral system that probably should be reformed[1], but it currently is the law of the land.

[0] Which is why it was so important (at least for the Trumpists) for the proceedings to be disrupted. It was the final opportunity for them to overturn the clearly expressed will of the people.

[1] Because regardless of which party's state cohorts have a majority, that's a supremely undemocratic way to choose the winner and is a relic of the state of the states (as essentially separate nations banding together for defense) in the late 18th century.

Edit: Fixed typo.


Under that logic, the US was overthrown back in 2000 when the Supreme Court decided the outcome of an election.


>Under that logic, the US was over thrown back in 2000 when the supreme court decided the outcome of an election.

That's a really reductive (and, in my view, incorrect) take on the issues in Florida in 2000. It was a hot mess, and I think (IANAL, so I may well be talking out of my ass) the Supreme Court should have kicked this back to the Florida courts, but all such activity was within the bounds of the US Constitution.

The absence of the Supreme Court decision in Bush v. Gore, at least AFAIK, wouldn't have changed the outcome of the election.

I'd be interested to know what logical steps lead from what I wrote to "Bush v. Gore overthrew the US government."

Would you mind sharing that line of reasoning? Because it's not at all clear to me how you got from point A to point B. Thanks!

N.B.: I did not vote for George W. Bush in 2000 (or 2004, for that matter)


The government follows the processes of the government. That's how it works. Keep in mind, there is literally 1 mention of the Supreme court in the constitution. It says Congress can "ordain and establish it". It doesn't say anything about Congress authorizing the Supreme Court to decide the outcome of elections, which is what happened. So if we strictly stick to this idea that somehow one election is this critical, then the US government ended back in 2000. The electoral college results were basically just discarded by SCOTUS and Bush was appointed president.

Your argument is that "Republican governors" could follow the processes of the government to somehow influence an election. I don't even think what you're describing is possible. But if it is, well, that is the government just following its own processes. If that makes you upset, you apparently feel that our government is not legitimate. The Republican vs Democrat vs whatever is irrelevant. If the government decides it doesn't like the outcome of an election and throws it out, that's the government's choice.

You're in the same boat that somehow champions the idea that Roe v. Wade is the "correct" decision of the Supreme Court on abortion. Guess what: the Supreme Court basically just goes with whatever is popular at present. You love the idea of the fairy-tale high school civics course where the US is some bastion of righteousness & freedom. How many times did the Supreme Court rule that slavery was all fine & dandy? Apparently enough that we fought a war over it and had to get Congress to authorize amendments explaining that slavery was in fact not OK.


Neither democracy, nor decency, nor legitimacy, nor corruption is binary. One can say that the 2000 election was decided in a corrupt way while understanding that even such a decision left the electoral process largely as functional as it was in 1996, and that the relative merits of different courses of action are left to be judged on their own. All your arguments are games and non-arguments that refuse to engage with any topic at all in a meaningful way.


Your comment doesn't reflect anything I wrote, nor does it even reflect the ideas underlying what I wrote.

So I'm still not clear what chain of reasoning you're using here.

I'll recap my statements to make sure I'm being clear.

You said:

>Your argument is that "Republican governors" could follow the processes of the government to somehow influence an election. I don't even think what you're describing is possible.

That's not my argument at all. Nor did I mention state governors (Republican or otherwise). Rather, I was referring to the 12th Amendment which states, in part:

"The person having the greatest number of votes for President, shall be the President, if such number be a majority of the whole number of Electors appointed; and if no person have such majority, then from the persons having the highest numbers not exceeding three on the list of those voted for as President, the House of Representatives shall choose immediately, by ballot, the President. But in choosing the President, the votes shall be taken by states, the representation from each state having one vote; a quorum for this purpose shall consist of a member or members from two-thirds of the states, and a majority of all the states shall be necessary to a choice." [emphasis added]

As I stated in my initial comment, 26 states[2] had Republican majority delegations. Had the election gone to the House for resolution, it's likely that Trump would have been elected (Or maybe not, given the 10 Republican Representatives who voted to impeach Trump) by a vote of 26-23(or 24). Which, of course, would be in direct contravention of the November, 2020 election results.

That's to what I was referring in the comment[1] to which you originally replied. It's not clear to me how following the process set out in the 12th Amendment is akin (or even related, except that it concerned a Presidential election) to the events leading up to and including the 2000 Bush v. Gore decision.

As for the 2000 Presidential election, I'm not clear what you're getting at WRT an "overthrow" of the government.

Since Marbury v. Madison[3] in 1803, it's been the case in the US that the US Supreme Court is the "supreme" court.

While there have been numerous (some of which you mention) "supremely" bad decisions by that court, in fact, as I stated in the second comment[4] to which you replied, I thought Bush v. Gore shouldn't have been decided by the US Supreme Court.

Whether you approve or disapprove specific Supreme Court decisions (I disagree with many myself, not least of which is the Dobbs decision BTW), that doesn't invalidate the decisions.

What's more, the results[5] of the Florida election, while having been reviewed repeatedly, show a really close election which could have gone either way, dependent on a multitude of factors.

All that said, for both elections, the circumstances and outcomes, while hotly debated, were certainly within constitutional bounds.

I still don't get the logical line of reasoning that takes you from the 12th amendment to the decision in Bush v. Gore. Primarily because you haven't provided one.

What's more, you seem to be ascribing a whole bunch of beliefs and attitudes to me which I do not hold.

[0] https://constitution.congress.gov/constitution/amendment-12/

[1] https://news.ycombinator.com/item?id=32872345

[2] https://www.270towin.com/2020-house-election/state-by-state/...

[3] https://en.wikipedia.org/wiki/Marbury_v._Madison

[4] https://news.ycombinator.com/item?id=32873337

[5] https://en.wikipedia.org/wiki/2000_United_States_presidentia...

Edit: Cleaned up my prose. Added the missing link, reordered references.


Your complete original statement was "causing various states to (disingenuously) disagree about how to replace the lost documents"

If "various states" happen to "disagree" then it has to be a statement issued by the governors and legislature of those states. The members of Congress are most definitely not "the states". They are popularly elected nowadays and are not selected by the Governor or by the legislature of their home state. They have actually zero authority in their own state.

If a bunch of Congressional representatives happen to disagree, that's just Congress disagreeing. Congress can make whatever rules it wants with regards to a presidential election (see Bush v. Gore as I already stated) including delegating the results of that election to the Supreme Court. I suppose in fact they could actually explicitly pass Federal law determining the outcome of a local school board election if they wanted. In fact, I wouldn't even be surprised if such a law exists already in some form.

If you can't understand that "the states" is not Congress, you have no understanding of the United States.


>Your complete original statement was "causing various states to (disingenuously) disagree about how to replace the lost documents"

I said no such thing. It was this comment:

https://news.ycombinator.com/item?id=32865748

that contains that information.

Note that I (https://news.ycombinator.com/user?id=nobody9999) am not the author of that comment.

It was, in fact, a user called dane-pgp (https://news.ycombinator.com/user?id=dane-pgp) who wrote that.

But you (apparently) ignored the rest of his comment which describes the 12th amendment process for the House to elect the President.

So. Let's sum up here:

1. You disagree with someone (dane-pgp) and yell incoherently at someone else (Nobody9999);

2. When that someone else explains, in great detail, to make sure you understand their point, you produce further incoherent ramblings;

3. You continue to rant incoherently with no apparent regard for the law or the constitution.

Good show. Catch you on the flip side, although I hope not.


Coups need weapons, you know. Preferably tons of them, and generally involve fun activities like killing or summarily imprisoning your political opponents.

Somehow a huge swarm of a generally armed demographic left their guns at home when attempting a 'coup'.

Or maybe they just went to protest what they thought to be a wrongful state of affairs, and the situation devolved into a riot. Who knows. Maybe you can pull off a coup against the United States with selfie sticks instead of F15s.


You're arguing against a strawman if you think the claim is "Every Trump supporter who turned up to the Capitol was part of an armed militia attempting to violently take over the government". The claim is merely that among the Trump supporters were some armed individuals who were prepared to do whatever it took to stop the process of counting the electoral votes, which would have potentially been enough to change the outcome of the election.

The fact that his supporters almost succeeded despite most of them not being armed should be a cause for more concern rather than less, since it shows what a soft target they were attacking (and presumably some were at least subconsciously aware of that, which might have influenced their decision to not bring their weapons).

Anyway, as for your specific claim that they "left their guns at home", I refer you to this[0] article from last year:

"At least three people arrested in connection with the insurrection are facing charges for carrying firearms on Capitol grounds. At least eight others carried knives or tasers at the Capitol, including two defendants who allegedly committed assaults with tasers, according to FBI and court documents. Multiple others arrested downtown and in the vicinity of the Capitol had rifles, pistols, explosive materials, and large supplies of ammunition. And communications among numerous January 6 suspects detailed in court documents indicate that many of their fellow insurrectionists were armed with guns."

[0] https://www.motherjones.com/crime-justice/2021/09/trump-extr...


https://www.dictionary.com/browse/coup-d-etat

> a sudden and decisive action in politics, especially one resulting in a change of government illegally or by force.

It is an attempted coup to ask election officials, courts, and then lawmakers to toss out legitimate ballots, or to wholesale discard or modify results, in order to change the outcome of an election, and then to attempt to intimidate lawmakers with a show of force and violence intended to convince them, in the president's own words, to "show some strength", by which was meant to illegally discard democracy in favor of fascism.

This statement by you reads like an apologist for a rapist disregarding allegations by virtue of re-defining rape to mean a stranger dragging a poor misdirected maid into the bushes.

It also reeks of a peculiar strategy that suggests we dismiss what actually happened by virtue of how little sense the strategy makes in the cold light of day. To this day I refer to this as the Connie defense, in honor of an erstwhile house guest who, while living in my home, stole from me and then, when caught, explained that she wouldn't possibly have done that because it would be such an extreme disadvantage for such a small gain. Indeed it was, and I don't logically need to explicate her stupidity in order to allow the evidence of my own eyes to trivially convict her and put her out of my house like Fred Flintstone's cat.

The same woman got a vehicle towed because she was speeding down the highway while smoking a joint and driving without a license. I doubt the Connie defense availed her then either.

We all saw the attempted coup on TV. Personally I watched it in real time on multiple monitors, showing the news cameras' view on one side and the insurrectionists' view on the other. I saw months of blatant lies, followed by a plea issued directly by the president to intimidate lawmakers, and an organized (if badly organized) attempt to do so. It happened. There was no stolen election, and indeed only one side was working double time to steal it for Donald Trump.


If you think a plot to kill a governor is somehow destabilizing to the United States, you should probably read about Rod Blagojevich. The guy tried to sell a US senate seat. Corruption has a much farther reaching effect than any single assassination.


I'm really bothered by the way you casually conflate an actual war in the Middle East (the longest war in American history, by the way) with a metaphorical war on domestic violent extremism and neo-Nazism. These are two very different things.


I used quotes to indicate I'm not talking about the war itself but about the surveillance state established in the aftermath of 9/11 (and shenanigans such as the pushing of mentally ill people into acts of terrorism they would never have committed without interference), which is the clear parallel politicians themselves want to draw when they say "war on domestic terror" (unless you believe they want to start another civil war by deploying troops in the states). Same as one would with "war on drugs".


There's another way for Facebook and other social media companies to make sure this doesn't happen without employing stupid algorithms that censor people. Drop the bots, hire a bunch of people to pretend to be ISIS or Stormers, and ban[0] everyone they come into contact with. Leave everyone else alone.

They won't do this. Why?

The way that social media is structured - and the way Facebook et al. make their money - is to get as many people as possible on the platform, get them addicted to the platform, and then show ads. One of the easiest ways to do this is to provoke and generate outrage - as you can see on Twitter, which basically exists to turn people into public figures and then into "villains of the day". This is also why their systems reject context; because stripping speech of its context is the easiest way to construct a villain of the day.

The reality is that these outrage groups actually tend to be really, really small and close-knit. There's a small handful of people who actually feed the algorithm new extremist content, and everyone else parrots them without thinking much of it because they're angry. Extremism only looks large and prevalent because social media is designed to create echo chambers and manufacture consent. And extremists just so happen to be Facebook's best customers - people who already have outrage to play to, who will spend hours on platform, and so on. Of course they aren't going to ban their whales!

However, an ineffective bot that just randomly bans things that sound vaguely extremist-like? That gives you the appearance of Doing Something, without actually doing it, and it falls in line with the usual Silicon Valley protocol of "if it's not worth doing at scale, it's not worth doing at all". Treating extremism like a weirdly-shaped spam problem gives social media companies cover and is how we get stupid bots that think a Holocaust movie is Nazi propaganda.

[0] I do not consider taking down the content of jihadis or neo-nazis to be censorship. Jihadis and neo-nazis are groups with explicit, stated goals to do violence to groups of people for what they say or believe. An ISIS beheading video is not an artistic statement or a political diatribe, it is a threat to other Muslims. "Get in line or you'll be next." Allowing this to be spread around as if it were speech accomplishes the goal of censoring non-extremists.

If you want to argue that tankies or ANTIFA do this too, fine, but the poster I was replying to was specifically pointing out right-wing extremism.


Personally I think Zuckerberg might be a sociopath.


This is like the modern day version of not being able to search for Moby Dick on my high school's library computer.


Except you probably didn't get permanently banned from using the library when you tried to search for that.


Reminds me of how ignorant people used to be about computers 25 years ago. That could totally still happen in some areas.


The real issue here isn't that an algorithm flagged it, but that a "human" reviewed it and upheld the ban. Either a human didn't actually review the film or there's a serious lack of training.


I think something has happened behind the scenes at Facebook where there are actually not really humans doing the secondary "review" or appeals process.

See my other comment on this article for another (less important) example. I'm guessing they're simply passing at least some % of them through a secondary automatic classification system.

Why would you let fairness get in the way of revenues?


This is because Facebook is a Publisher. They will never admit it, but FB/Meta can decide what users can see. This is what a Publisher does.

The issue we have is that if you let the users decide what is shown on any platform it would be quite a mess.



Devil's advocate here (I won't comment on if I agree with the argument or not), but this article seems to miss the point of the section 230 debate. All of the stuff here is about what the law is now. The objection most people have is that it shouldn't be like this and we need to change or remove section 230. It specifically allows sites to be biased while also not holding them legally accountable for anything they choose not to remove. Once a social media site hits a certain scale, they can completely control any narrative they want. Should this be allowed is the real question (imo).


I tend to agree with you - getting rid of Section 230, or limiting it to certain types of content providers based on some metric, would be ideal.

I've been doing some thinking and it seems to me like Section 230 being repealed would be disastrous for smaller websites/startups/content creators.

If I create a small-scale forum - I am now legally responsible for what is posted on that platform and can be sued repeatedly into the ground until I'm not able to continue running my business/forum/whatever. If someone has the capital to do that and a mission to remove me from the internet, then they're able to.

With section 230, it has that "monkey's paw" style penalty. Sure, it would be nice to go after social media companies that continuously abuse their power and act on behalf of the government to control what you can say, when you can say it, but then you'd also potentially open up a bunch of legal trouble for smaller outlets.

If there was a way to enact something where above a certain threshold, section 230 no longer applies to your company and you need to be responsible for the content on your platform - maybe that would be ideal, but I don't know what that threshold would be - profits, incorporated vs LLC vs sole proprietorship, I'm not sure.


> to miss the point of the section 230 debate. All of the stuff here is about what the law is now

This is something that kind of irritates me about Mike Masnick (who I generally enjoy reading). He always seems so focused on the current state of the law that he seems blind to the debate.


Supreme Court Justice Clarence Thomas has suggested extending common carrier laws to cover social media. This would essentially prevent them from censoring any legal content, much like legacy telephone companies can't block users from discussing certain topics. Such a change would require an Act of Congress to amend or replace the Communications Decency Act. And there are potential First Amendment concerns in terms of forced speech.

https://www.npr.org/2021/04/05/984440891/justice-clarence-th...

So it's an interesting policy idea but I'm not sure whether it would be better or worse than the current state.


[flagged]


> Also, you often have no choice in what telco provider services you as an end-user due to monopoly grants. You never don't have a choice of an alternative to Facebook or Twitter.

Unless the Apple-Google duopoly bans the nascent competition.


This suggestion seems like it's made by people who just want to nuke all social media sites (by making it impossible to run sites that aren't filled with garbage) because they're salty that a site once moderated them, fairly or unfairly.


I think you are responding to something the GP never said or implied.

Meta wants you to think you are in charge of what you see but it isn't true and hasn't been for a while now. The algorithm decides. It acts as an editor pulling together a site tailored to whatever it thinks will maximize Meta revenue.


So what? "Publisher" isn't a defined legal category in this context. Even if Facebook is a publisher that doesn't create any legal obligations.


Publishers are liable for any libel they publish while platforms are not; that's the legal distinction. When Facebook chooses to editorialize their content, like they did in this instance, they forfeit the legal protections of the Communications Decency Act. The First Amendment limits the power of the federal government to provide liability protection for publishers.


Bullshit. There is no such liability provision in the Communications Decency Act. You should read the actual text of the law instead of making things up.


"No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider" (47 U.S.C. § 230)


Yes that is part of the law, but it doesn't mean what you claimed. You're completely misinterpreting it. There is extensive case law in this area.


So what would it mean if Facebook were treated as the "publisher or speaker" of information they... well, publish?

This passage does something by preventing them from being so classified. Right?


That's a meaningless question because it's tangential to the Communications Decency Act. Censoring content or changing a social media feed algorithm isn't classified that way. You might not like the law but that's how it works based on the plain language of the statute and confirmed through extensive case law.


Asking about the effect of a passage in the Communications Decency Act is a "meaningless question" and is tangential to the Communications Decency Act?


>Asking about the effect of a passage in the Communications Decency Act is a "meaningless question" and is tangential to the Communications Decency Act?

Given the extensive case law[0] generated since the passage of the CDA, yes it is pretty meaningless.

Because that case law clearly defines what those terms mean and they don't mean aggregators like Facebook.

It's reasonable to question, given the moderation choices made by entities like Facebook, how much impact they may have on public discourse.

However, the meaning of the text of the CDA, and especially Section C(1) has been clarified many, many times and doesn't mean what you think it means.

Whether that's right or wrong/good or bad is a different question. But the question you asked[1], given the law and its application over the past 25 years or so, is pretty meaningless in the sense that it has been repeatedly answered (and that answer is 'no') over that quarter century.

[0] https://en.wikipedia.org/wiki/Section_230

[1] https://news.ycombinator.com/item?id=32869002


You're saying the answer to:

> This passage does something by preventing them from being so classified. Right?

Is no, the passage doesn't do anything (anymore)?

[EDIT] I think you think I think some stuff I don't. I don't even know what you're talking about when you claim it doesn't mean "what I think it means". You claimed the passage doesn't mean what another poster thinks it means, I asked what it does in fact mean, i.e. what would happen if the passage were absent, and then you told me that question was irrelevant (why?), and then this post, which also seems to be addressing some other person or something... but maybe is addressing what I actually asked? I can't tell.

[EDIT AGAIN] Hell, the wikipedia article you cited even seems to back up the (other poster's) interpretation you were claiming was wrong. I am so confused.


Your question:

>This passage does something by preventing them from being so classified. Right?

The relevant section (c1) of the CDA states:

   No provider or user of an interactive computer 
   service shall be treated as the publisher or speaker 
   of any information provided by another information
   content provider.
Is answered 'yes.' I was, however, responding to your other question:

>So what would it mean if Facebook were treated as the "publisher or speaker" of information they... well, publish?

And (thank you for calling me out on this, I should have been more precise. My apologies) the answer to that is, as decades of case law clearly shows, they wouldn't be. Which is why the question is somewhat meaningless since, as the law exists currently, it's irrelevant.

A better question might have been (and upon reflection, is probably the question you were asking), "should the law be changed to address the impacts of moderation by large players like Facebook?"

The law as it currently is, doesn't address those issues. Should the law (in general, not necessarily the CDA) do so?

That's a much more interesting (and relevant) question. And if that was the question you intended to ask, my apologies for misunderstanding.

Personally, I'm of two minds about that. Limiting the ability of actors to sue platforms over their hosting of other people's speech (and that's the important bit, to me at least) is, in general, a good thing.

That said, the big players' engagement-focused strategies and the algorithms that support them are certainly (to understate the issue) problematic, in that they promote outrage, groupthink and the demonization of a variety of folks in an effort to boost ad revenues. That's a bad thing.

However, If we didn't have such limitations, it would have several outcomes for most platforms (be they mailing lists, message boards/sections of websites, product review sites, etc., etc., etc.):

1. No moderation at all. Quickly turning any place for third party commentary (HN included) into a cesspit of spam, porn and other stuff superfluous to the goals of both the sites and their visitors;

2. Widespread removal of comment sections altogether and the shutdown of huge numbers of sites (likely HN as well);

3. Those with deep pockets bankrupting anyone who hosts content they don't like with lawsuit after lawsuit (The Better Business Bureau[0] comes immediately to mind).

Ironically, sites like Facebook/Twitter, etc. have deep enough pockets to fight such lawsuits, likely leaving them relatively unscathed by such a change in the law.

>[EDIT] I think you think I think some stuff I don't. I don't even know what you're talking about when you claim it doesn't mean "what I think it means".

That may well be so. And if I misunderstood (and it seems I did, I hope I addressed that above), my apologies.

>You claimed the passage doesn't mean what another poster thinks it means, I asked what it does in fact mean, i.e. what would happen if the passage were absent, and then you told me that question was irrelevant (why?), and then this post, which also seems to be addressing some other person or something... but maybe is addressing what I actually asked? I can't tell.

Nope. I was responding specifically to your comment[1], which I clearly misunderstood. I've attempted in this comment to correct myself and to respond to (at least as I understand it -- which may be just as wrong but I hope not) the question you did ask.

[0] https://www.bbb.org/

[1] https://news.ycombinator.com/item?id=32869002

Edit: Fixed formatting issue.


Awesome, thanks so much, this cleared it up. Sorry this ended up requiring you to write a novel.


>Awesome, thanks so much, this cleared it up. Sorry this ended up requiring you to write a novel.

No apologies necessary. I misunderstood you and was unclear in my response.

I'll try to do better in the future.

That said, I'm glad I was able to clarify. Although I am curious as to your (and everyone else too) take on how we might tweak the legal environment to address the issues under discussion.


There are UX solutions to the problem of a messy feed. Let me go and choose my exclusion filters if I so choose.


I've been working with Alex for just over a year now. He’s never political, super nice guy, focused on creating beautiful art.

This is a major blow to the team. If you're not backed by Sony or Paramount, ads (especially in the first week) can be the deciding factor in whether you make it or not.

When he told me what happened, I refused to believe him: "There is no way they banned you over blue eyes". Unlike Alex, I'm extremely cynical about censorship & social justice politics, but even I couldn't accept that they would do something this asinine.


> If you're not backed by Sony or Paramount, ads (especially in the first week) can be the deciding factor in whether you make it or not.

Do you seriously believe it was Facebook that caused the film to fail?


Not fail. The jury is still out.

Independent films largely rely on word of mouth to gain momentum. But for word of mouth to work, you need critical mass.

This is where ads can be the difference maker. We wanted to reach Roy Scheider fans. Those people are not exactly the type of people who use TikTok and reply with words like "No Cap".

When you're an Independent filmmaker, you have to maximise the value out of every dollar and you have a short window of time (weeks) to make this work.

So when Facebook bans you from advertising on FB & Insta right before the release, it's a major blow.


Do you seriously challenge the idea that social media advertising can be critical to a small film's success?


At least one of my friends and I have been harassed by an Instagram account whose name is "white_soupremacy". It seemingly has no content, and its sole purpose is to harass/troll other accounts. I reported the account to IG, but since the spelling itself is cheeky, it wasn't flagged as hate speech. Furthermore, IG didn't even give me an option to request a manual review of the account. I DMed someone I know who works at Meta and gave them all of the relevant information, but the account is STILL up, even after a PM at Meta submitted an internal report. Something is deeply broken in that system.
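For what it's worth, catching that kind of cheeky spelling isn't hard. Even a crude normalization pass (a Python sketch with an invented blocklist; nothing like whatever pipeline IG actually runs) already matches "white_soupremacy":

  import re
  from difflib import SequenceMatcher

  BLOCKLIST = {"white supremacy"}   # illustrative only

  def normalize(name):
      name = name.lower().replace("_", " ")
      name = re.sub(r"[^a-z ]", "", name)    # strip digits and punctuation
      return re.sub(r"(.)\1+", r"\1", name)  # collapse repeated letters

  def looks_like_slur(name, threshold=0.8):
      n = normalize(name)
      return any(SequenceMatcher(None, n, bad).ratio() >= threshold
                 for bad in BLOCKLIST)

  print(looks_like_slur("white_soupremacy"))  # True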


My ad account got randomly shut down a few days ago too. It was reapproved after review, but I was never given a reason.

I hate that big companies can get away with this.


I sympathize with your frustration.

I do wonder, though... a lot of these companies' ability to even exist at this scale comes from keeping only a small number of people in the loop.

So was it worth it to let these behemoths prosper so we can get these services for a low price, but suffer these sort of consequences?


It's not worth it and there needs to be a real alternative. Unfortunately at the scale things have gotten to, it might take regulatory action. Because the market is not going to be able to compete here.


I wonder if marketers could start exploiting this. Get something obviously innocent banned in a stupid way by the algorithm, then use the outrage effect for promotion. They already kinda did this with people destroying products from companies that made certain statements; this seems like the next step.


Ghostbusters (2016)'s supposed "anti-feminism" campaign is an example of this. It's very common and has been forever, going back at least to newspapers.


Is there evidence that the misogyny was faked by the movie industry?

Or are we just assuming that all such reactions (e.g. the anti-black Ariel folks) are faked to generate support?


The reactions aren't necessarily fake; it's much more that they hunt out and push those (very small in number) for the sake of publicity, and to offset and undermine the credibility of critics.

No one really hears any attacks against GB 2016's critics now, but before release, they were all painted as racists and sexists.


> are faked to generate support?

No need to fake it. Just skew your presentation of reality to fit your narrative.

https://youtu.be/UWROBiX1eSc?t=193


There still seem to be so many basics that FB and YouTube get wrong with their "automatic" video moderation.

Use people's "report" button clicks as an early indicator that something should be looked at - a big flag.

Look at certain words or phrases that are either unambiguously bad or strongly suggestive, like "kill" or "why I hate ..." - a medium flag.

Phrases that are less certain - a small flag.

Block some things proactively, moderate others, and react to the rest as they are reported. I don't think people are bothered by bad stuff getting posted as much as by these companies doing nothing about it when it is reported. If 1000 people flag a video, it goes to the top of the list. If the flagging is malicious, downgrade the reputation of the accounts that flagged it. If new accounts upload stuff, rate-limit it in some way, etc.

I know I am making it sound really easy, but with however many thousands of developers they have, it shouldn't be impossible for someone like FB to do this much better.
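To illustrate the idea, here is a toy version of that flag-weighting scheme (Python, with invented phrases and thresholds; not anything FB actually runs):

  MEDIUM_FLAG_PHRASES = {"kill", "why i hate"}   # unambiguous or strongly suggestive
  SMALL_FLAG_PHRASES = {"beat", "destroy"}       # less certain, context-dependent

  def flag_score(text, report_count, reporter_reputation=1.0):
      text = text.lower()
      score = report_count * 5.0 * reporter_reputation   # user reports are the big flag
      if any(p in text for p in MEDIUM_FLAG_PHRASES):
          score += 5.0                                    # medium flag
      elif any(p in text for p in SMALL_FLAG_PHRASES):
          score += 1.0                                    # small flag
      return score

  def route(text, report_count, reporter_reputation=1.0, is_new_account=False):
      score = flag_score(text, report_count, reporter_reputation)
      if is_new_account:
          score *= 2                        # rate-limit / distrust fresh uploaders
      if score >= 50:
          return "block proactively, pending human review"
      if score >= 10:
          return "queue for human review"
      return "leave up"

A reporter whose flags keep turning out to be malicious would have their reporter_reputation driven toward zero, so their future reports would count for little.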


"Why I hate killing spiders & how to ethically rid your house of pests" -- Not flag-worthy at all.


Browse Urban Dictionary and you'll find a lot of benign nouns that double as insults or slurs. Prediction: there's a definition of 'spider' up there that is offensive... be right back.

Let's see... almost? There's one referring to an unattractive and disproportionate person: https://www.urbandictionary.com/define.php?term=Spider

I know urbandictionary has a lot of made up stuff, but the point is that slang is sometimes coded intentionally. Especially in unpopular niche subcultures.


This is not an exception. This is the norm. All major social media platforms operate under the assumption that it's better to ban 100 innocent people than to let one "bad" person publish something. The scale of censorship is mind-boggling. The scale of denial and ignorance about censorship on HN is even more astounding.

The big lie of online censorship is that controversial cases where most people think the person "deserves" to be banned are unrelated and totally separate from cases like this one, where it's obvious the ban is preposterous to anyone possessing common sense. They are directly related. They are created by the same systems built under the same assumptions with the same mentality.

This is not going to be fixed by "better" algorithms, because it's not an issue with the quality of the algorithms in the first place. The algorithms seem low-quality to you because you're judging them by a standard the company running them didn't use.


The platforms do it because of armies of miserable people online looking for an outlet to their outrage.

So the platforms are stuck trying to appease the angry mob, while spending as little resources on it as possible (hence, shitty algorithms).

And I think I even know where the mob came from. Humans apparently have an intrinsic need to have some goals to be passionate about. And since the "economies of scale" have optimized away individual decision-making as outlets for passion, we see a surge of the good-old tribal instincts.

Cancel culture is certainly progress compared to stabbing and eating the members of the competing tribe, but it is still a manifestation of the same kind of instinct, and it will bring nothing good until people find (or, more likely, create) a bigger problem to worry about.


To me, what you describe with "economies of scale" is similar to the deprivation of access to the power process that people experience in industrial society, which Ted Kaczynski described in Industrial Society and Its Future. The work our species did to stay alive, such as finding food, is already handled by society, and the ability to influence the direction of our lives has been exported to corporate boardrooms and legislative assemblies. As a result, people now engage in surrogate activities to satisfy their need to engage in the power process. This includes things like entertainment media, subcultures, and sports. Social media is another instantiation of surrogate activities. I think it's no wonder that moderation is not benefiting the general public.


> The platforms do it because of armies of miserable people online looking for an outlet to their outrage.

I find this funny because I initially assumed you were talking about the seemingly infinite number of people posting the "straddle the line like it's a Hitachi wand" flamebait that typically gets axed. There's no way to make everyone happy, but I think there's a chance we can broadly agree that there's very little harm done by being overzealous when it comes to using the banhammer on outrage porn. I mean HN basically does the same thing but on a smaller scale.


> trying to appease the angry mob

I think this is half correct. They do it to prevent the angry mob from reaching out to advertisers. I don't think they care about angry mobs, themselves.


>So the platforms are stuck trying to appease the angry mob, while spending as little resources on it as possible (hence, shitty algorithms).

Actually, I'd posit that the platforms are drooling with anticipation and glee at being able to monetize that angry mob. Anger, fear and outrage boosts engagement after all.

And that's what platforms want, because increased engagement means increased ad revenue. And that's not exactly breaking news either.


The people also utilize political ideologies whose primary tenets are things like "destabilize" and "deconstruct" as a surrogate religion instead of the passé ones that have proven themselves capable of underpinning functional society for thousands of years.


I suppose your local warlord ordering you to murder people in the crusades was "functional" society and writing mean things on Twitter is "nonfunctional" society in your mind.


A society that doesn't loathe itself and goes to war to murder other people is healthier than a society that loathes itself and still goes to war to murder other people.

Neither is ideal, but one results in one place being shitty and one being okay, the other makes two places shitty to live in.

As far as I'm concerned, international politics / geopolitics is appallingly Melian and I wish it were some other way but I think it's just the sociopathic shitshow we're stuck with.


The crusades were just an example, Deuteronomy 21:18-21 says stubborn and drunken children are to be stoned to death by all the men of the city. No foreign adventurism is involved. Very "functional".

Many heroes of the bible also had slaves, and the bible does not suggest there is anything wrong with that. Although some religious people campaigned against slavery in America, the fact the bible does not state slavery was wrong, and depicts it as something God's favorite humans engaged in, was used as a counter argument to defend slave-owning by religious people.


> armies of miserable people online looking for an outlet to their outrage

I just call them "lawyers."


There is more to it than that.

Look up the use of Facebook in the genocide of Rohingya in Myanmar. I see multiple comments in this thread about snowflakes and cancel culture but they miss that social media is a profoundly powerful instigator of race hate and violence.

This is no defence of Facebook's actions here, but any suggestion of a hands-off approach without policing racial language needs to be conscious of the harm such an approach has already led to.

https://www.nytimes.com/2018/10/15/technology/myanmar-facebo...


Paywalled, so I can’t directly address the points in the article.

Have humans not had thousands of years of genociding each other, long before Facebook? It's hard for me to believe that if only Facebook did not exist, everything would be fine in Myanmar. No, almost certainly they would have found a way to kill the other tribe.


It is the inverse Blackstone Ratio!

But I think we must also look at why they end up this way. My thought is that a small portion of the public reacts so strongly and loudly to any minor mistake that it turns into a national conversation, ironically through the very platforms and algorithms that optimize for engagement (fighting). So we then treat these small populations as representative and make mountains out of molehills.

I actually do believe a better algorithm would alleviate some of the issues. But I do agree that it is not a cure-all. The problem appears to be quite complex and many aspects are driven from or coupled with factors outside the control of social media platforms.


A small portion of the public will react to almost anything in any way. The censorship regime here must surely have been created by more complex factors, like maybe:

- Dependence on advertising, and exposure to advertisers who feel that their brands appearing next to anything controversial or upsetting will cast a negative halo on their brand.

- Ideological uniformity amongst journalists, who highlight certain kinds of outrage and sink others.

- A need for moral validation amongst tech company employees.

And we could think of many others. Trying to distill a root cause is hard but it looks like the everything-is-connected-to-everything mentality appears frequently. Is it really the case that an advert appearing next to something objectionable makes people think less of the brand? Probably not but it seems to be a common belief. Is it really the case that Facebook is to blame for any video posted on its platform? Probably not but it's a common belief. In a thread just a few days ago there was a former Twitter employee arguing that Facebook was somehow complicit or at fault in the Rohingya genocide, which is a good example of this mentality.

You could go even deeper and ask: is this everyone-is-culpable-for-everything mentality a genuine belief, or is it a possibly sub-conscious cover for some other agenda? That is, this argument seems to work on people sometimes, so it is deployed as a useful tool to advance one ideology or another? Who can really say.


The advertising point is a red herring. Adtech companies have gotten very good at targeting exactly what content their ads will and will not appear on. When they censor, it is because they do not want you to see it.


> The scale of denial and ignorance about censorship on HN is even more astounding. After reading some of the comments here you're absolutely right :(


Use two line breaks to put your content on a new line. If you use only one, it will all end up on the same line, as happened to your comment here.


> The scale of denial and ignorance about censorship on HN

Even here, where you would think people would know better, you still see people insist that it's not censorship because it's not the government doing it.


Facebook is the government. PRISM [1] makes it explicit in Facebook's case, but any corporation beholden to a government for its continued operation is a policy arm of that government.

[1]: https://en.wikipedia.org/wiki/PRISM


Just curious, can you name some large companies for which this doesn’t hold? Or are you saying that all corporations are arms of the government?


It's a continuum, not a binary distinction. People tend to say, "this is government" or "this is not government". I disagree with that model, but it's a defensible position. What's not defensible is saying "this is not government therefore the first amendment doesn't apply" to a corporation that was secretly deputized by a government agency.

To answer your question, here's an example continuum from "more government" to "less government":

1. Elected officials

2. Federal employees / appointees

3. The Federal Reserve (private but heavily controlled)

4. Facebook (private but deputized)

5. Your local bank (whose existence is a government granted monopoly)

6. Utilities (some regional monopolies / limited competition)

7. General Motors / General Electric (heavily subsidized via negative taxes / bailouts)

8. General Mills (regulated but not necessarily beholden)

9. 3M (held to general standards like OSHA)

10. You (for definitions of you that do not fit into an example above)

People act like the line is somewhere between #2 and #3, but the reality is likely past #7.


> All major social media platforms operate under the assumption that it's better to ban 100 innocent people than to let one "bad" person publish something.

Unless they pay for their publications (aka ads), in which case FB has a history of letting the most horrible shit slip "regrettably through their screening team". Bollocks.


Did you even read TFA? This is about paid advertising for the film on FB.


The reason is that the movie is titled "Beautiful Blue Eyes"; we live in a time of pure insanity.



[flagged]


German here. Not entirely sure what your comment is trying to convey. Germany doesn’t have that much of a free-speech history, at least compared to the US. So the fact that publicly denying or trivializing the holocaust has been illegal (since 1949, I figure) is not seen as dystopian at all. On the contrary, the ban is even widely accepted, and perceived as a good thing.

Mind clarifying a bit as to what exactly you’re criticizing?


[flagged]


I suppose if your country has just perpetrated a genocide, one might have unusual moral responsibilities.


[flagged]


The part that's always missing from this kind of argument is: then you're wrong. You are incorrect. It's mind-boggling to me that the actual fact of what actually happened never seems to matter in this argument. Look, I understand the practical problems with a law like "It's illegal to lie on the news" -- of course that's problematic: who decides what a lie is? But if you could guarantee unambiguous, 100% accurate, oracle-level determination of lies, then that law would be fantastic for society. That's of course not possible in general, but that doesn't mean there aren't very special cases where we can get close to that. There are some things that we know for 100% certain definitely happened, and also that certain awful people have certain horrifying motivations to lie about. I'm totally fine with those being illegal to lie about. If you disagree with me about it, it simply means you are an awful piece of shit. Again, that's not true in general, about any opinion or controversial issue: of course it's not! But it's true about the Holocaust, and that matters.


You were going fine until:

> There are some things that we know for 100% certain definitely happened

If anything, this is one of those things you should 100% question. Such a politicized event, almost 80 years later, with most of the proof being unreliable testimonies and confessions obtained by physical force. As the saying goes, history is always written by the winners. Not saying that it didn't happen, or that it was only exaggerated, or anything like that.

I'm only saying that making it ILLEGAL to negate or downplay it sets an extremely dangerous precedent.

> If you disagree with me about it, it simply means you are an awful piece of shit.

I wouldn't care about this comment at all, if it weren't for the fact that my question was flagged and this is not. As always, a heavily enforced voting/moderation system results in a perverted, deranged view of reality that's only perpetuated by such a system.

For anyone reading this, if you want to be somewhat in touch with reality just leave this garbage forum.


The community which committed that genocide has the prerogative to decide its own moral responsibilities. If you're part of that community, and violate them, then the consequences will be as they wish.

It's deeply implausible to say that Germans have no collective right against the individual here. What you wish to say really isn't all that important, and doubly so when many around you were wrapped up in a system of mass torture, genocide and violence.


You can defend it all you want with "historical precedent"; it's still quite dystopian, whether or not conditioning for 70 years has any impact. My point is that any law against free speech probably has more sinister intentions than are presented.

"Do you have any proof?"

Read 1984 and try to understand how close laws against holocaust denial are to thought crimes.

You really don't understand what part of the timeline we're in. Time is running out.


>Germany doesn’t have that much of a free-speech history

"We've always suppressed freedom of thought and expression, so why should we start now,?"


"What are rights? Haha silly American"


[flagged]


Ironic? Per the article: "the film’s title, which refers to the eye color of a child who perished at the hands of the Nazis and invokes a key scene in the movie"


That doesn't mean it's not ironic. It strains belief to suggest that the filmmakers weren't aware of Hitler's belief that being blond and blue-eyed was a mark of the "superior" Aryan race.


ITT, both bigots and anti-racists pretending that they know someone's obvious intentions based on no information and with no attempt at research.


Seems to me that the author was just propagating "the blue eyes are so beautiful" sort of white standard of beauty, probably not intentionally, and it would be a minor thing, except in this case it just does not fit, imho, given the topic.


Yes, and? Still not a reason to prevent advertising the movie on Facebook.


This type of editorial / advertising approval decision happened all the time before the internet took over media. It's just that FB/Twitter get crucified when they do it, because they pretend to be not publishers but utilities.

Time to admit all these hosting sites are just publishers that use ML models as editors and their users as contributors, and that Section 230 needs to be rewritten to account for it.


Orwell himself wrote of it in his preface to Animal Farm:

https://www.marxists.org/archive/orwell/1945/preface.htm

> Any fair-minded person with journalistic experience will admit that during this war official censorship has not been particularly irksome. We have not been subjected to the kind of totalitarian ʻco-ordinationʼ that it might have been reasonable to expect. The press has some justified grievances, but on the whole the Government has behaved well and has been surprisingly tolerant of minority opinions. The sinister fact about literary censorship in England is that it is largely voluntary.

> Unpopular ideas can be silenced, and inconvenient facts kept dark, without the need for any official ban. Anyone who has lived long in a foreign country will know of instances of sensational items of news – things which on their own merits would get the big headlines – being kept right out of the British press, not because the Government intervened but because of a general tacit agreement that ʻit wouldn't doʼ to mention that particular fact. So far as the daily newspapers go, this is easy to understand. The British press is extremely centralised, and most of it is owned by wealthy men who have every motive to be dishonest on certain important topics. But the same kind of veiled censorship also operates in books and periodicals, as well as in plays, films and radio. At any given moment there is an orthodoxy, a body of ideas which it is assumed that all right-thinking people will accept without question. It is not exactly forbidden to say this, that or the other, but it is ʻnot doneʼ to say it, just as in mid-Victorian times it was ʻnot doneʼ to mention trousers in the presence of a lady. Anyone who challenges the prevailing orthodoxy finds himself silenced with surprising effectiveness. A genuinely unfashionable opinion is almost never given a fair hearing, either in the popular press or in the highbrow periodicals.

> At this moment what is demanded by the prevailing orthodoxy is an uncritical admiration of Soviet Russia. Everyone knows this, nearly everyone acts on it. Any serious criticism of the Soviet régime, any disclosure of facts which the Soviet government would prefer to keep hidden, is next door to unprintable. And this nation-wide conspiracy to flatter our ally takes place, curiously enough, against a background of genuine intellectual tolerance. For though you are not allowed to criticise the Soviet government, at least you are reasonably free to criticise our own. Hardly anyone will print an attack on Stalin, but it is quite safe to attack Churchill, at any rate in books and periodicals. And throughout five years of war, during two or three of which we were fighting for national survival, countless books, pamphlets and articles advocating a compromise peace have been published without interference. More, they have been published without exciting much disapproval. So long as the prestige of the USSR is not involved, the principle of free speech has been reasonably well upheld. There are other forbidden topics, and I shall mention some of them presently, but the prevailing attitude towards the USSR is much the most serious symptom. It is, as it were, spontaneous, and is not due to the action of any pressure group.

...

> It is important to distinguish between the kind of censorship that the English literary intelligentsia voluntarily impose upon themselves, and the censorship that can sometimes be enforced by pressure groups. Notoriously, certain topics cannot be discussed because of ʻvested interestsʼ. The best-known case is the patent medicine racket. Again, the Catholic Church has considerable influence in the press and can silence criticism of itself to some extent. A scandal involving a Catholic priest is almost never given publicity, whereas an Anglican priest who gets into trouble (e.g. the Rector of Stiffkey) is headline news. It is very rare for anything of an anti-Catholic tendency to appear on the stage or in a film. Any actor can tell you that a play or film which attacks or makes fun of the Catholic Church is liable to be boycotted in the press and will probably be a failure.

> But this kind of thing is harmless, or at least it is understandable. Any large organisation will look after its own interests as best it can, and overt propaganda is not a thing to object to. One would no more expect the Daily Worker to publicise unfavourable facts about the USSR than one would expect the Catholic Herald to denounce the Pope. But then every thinking person knows the Daily Worker and the Catholic Herald for what they are. What is disquieting is that where the USSR and its policies are concerned one cannot expect intelligent criticism or even, in many cases, plain honesty from Liberal writers and journalists who are under no direct pressure to falsify their opinions. Stalin is sacrosanct and certain aspects of his policy must not be seriously discussed.

> This rule has been almost universally observed since 1941, but it had operated, to a greater extent than is sometimes realised, for ten years earlier than that. Throughout that time, criticism of the Soviet régime from the left could only obtain a hearing with difficulty. There was a huge output of anti-Russian literature, but nearly all of it was from the Conservative angle and manifestly dishonest, out of date and actuated by sordid motives. On the other side there was an equally huge and almost equally dishonest stream of pro-Russian propaganda, and what amounted to a boycott on anyone who tried to discuss all-important questions in a grown-up manner. You could, indeed, publish anti-Russian books, but to do so was to make sure of being ignored or misrepresented by nearly the whole of the highbrow press.

> Both publicly and privately you were warned that it was ʻnot doneʼ. What you said might possibly be true, but it was ʻinopportuneʼ and played into the hands of this or that reactionary interest. This attitude was usually defended on the ground that the international situation, and the urgent need for an Anglo-Russian alliance, demanded it; but it was clear that this was a rationalisation. The English intelligentsia, or a great part of it, had developed a nationalistic loyalty towards the USSR, and in their hearts they felt that to cast any doubt on the wisdom of Stalin was a kind of blasphemy. Events in Russia and events elsewhere were to be judged by different standards. The endless executions in the purges of 1936-8 were applauded by life-long opponents of capital punishment, and it was considered equally proper to publicise famines when they happened in India and to conceal them when they happened in the Ukraine. And if this was true before the war, the intellectual atmosphere is certainly no better now.


I can only hope that Facebook follows this policy to the letter now and into the future. In fact, it would be a gift to humanity to widen the scope to any and all content deemed offensive to anyone. Ban it all.

Nothing will hasten the downfall of the monstrosity that Facebook has become faster than the strictest possible adherence to and advancement of this policy.


This will get overturned shortly due to press attention.

Which, sadly, is the only way large scale network moderation can work.

Get it "mostly" right, but often wrong.

In most of the cases you get it wrong, only a few people will notice. You'll never hear about it.

Occasionally, you'll get it so wrong a lot of people notice and you will hear about it. Then you can fix it.

Rinse and repeat.


Why not put the power in the hands of the users? If a person does not want to see a film that deals with race (generally), let them go and flip a switch in their settings to hide these from their view (and similar for whatever other subject may be of potential concern).


The issue has (almost) nothing to do with the film content. The issue is with the verbiage in the title, used in the ad. And it's not about who wants to see the content, it's about censoring potential race warriors.


Bizarre that they stuck with the ban after an appeal. Seems like a pretty obvious thing to fix.


So what skin color can a person with blue eyes have?

Every possible one.


We really need to stop thinking about social networking sites as private companies, and think about them as the public spaces they actually are. It isn't good for society for a few companies to have a stranglehold on what can be said online.

I do kind of feel for them in some ways, because you have different nations with different norms and laws, but the answer isn't some lowest-common-denominator approach, ridding the web of anything anyone may find objectionable.


Well, you shut down the idea yourself: it would be completely non-viable due to jurisdiction issues, unless you'd want to follow the US position, and even then you'd run into issues.

The biggest problem I'd say isn't necessarily the websites themselves but stuff like the Apple App Store and Google Play Store, and things like infrastructure providers. Having your social media app removed from those stores basically kills it in the mass market, and we all know how incredibly selective they are, especially when it comes to forbidding platforms that may contain porn (like Tumblr, for example, and initially Gab) while still allowing Twitter, which is completely inundated with it. And that's before mentioning the downright mental idea that payment processors can just decide to not work with you anymore.

The social media sites themselves, I feel, are the least of the things that would benefit from being treated closer to a public "utility", for lack of a better word.

We're way past the point where people can just build their own stuff and be independent when targeting a significant audience (and sometimes even a smaller niche one): you need the support of payment providers, you need the support of app stores, you need the support of DDoS mitigation companies/CDNs (especially in a post-IoT world where your toaster is part of a botnet), and the list goes on.


I didn't shut down the idea. I just raised the counterpoint.

If Europe decided to say that you can't take down things arbitrarily, what would Facebook do about that? At this point in time a geographically splintered internet with freedom of speech seems better than what we've got at the moment.


I do believe the counterpoint kills the idea, is what I meant. A geographically splintered internet (or rather, geographically splintered "social medias") would just lead to a service showing up that is international and grows a majority again, and the whole cycle begins anew.

The only way I can see this happening would be different instances for each country, as in things would get moderated based on the laws of the country in question, which I suppose could work in theory, but at that point having enough support staff to actively moderate on a country-by-country basis is doubtful, given how it is currently.


Well, presumably moderators need to speak the various local languages. So I don't see why you can't extend that to not banning people in territory X for mentioning blue eyes.

I understand they don't put enough effort into moderating, because obviously mentioning blue eyes isn't anything. But they should be held accountable for that, not get away with it, as is the situation we have now.


Why should we consider them public spaces when they are private companies?

And social media companies only control what is said on their platforms, not "online" (aka the entire internet) as you said.


> Why should we consider them public spaces when they are private companies?

As the other commenter is suggesting a change to the status quo, what legal status they have now isn't important; what matters is the role they serve in society.

Personally I think the US idea of freedom of speech is too anarchic to be sustainable, but even with that, the de-facto power that Big Tech has over communication (and commerce) means I think Big Tech should be held to a similar standard in this regard as any government.


What power do they have?


Big Tech?

The power of their algorithms that decide who sees what content and when. The power to remove people from platforms that are competing to become de facto standards for communication, and to do so for reasons not limited to their legal obligations.


Who is they? Are you claiming that some number of tech companies, a grouping you haven't defined, are in collusion? Do you have proof of this?


> Who is they?

Do you mean “Who is “Big Tech”?” Because I don’t want another ambiguous response in this chain; there are already more than enough, and we might be on completely different pages.

https://en.wikipedia.org/wiki/Big_Tech

> Are you claiming that some number of tech companies , a grouping you haven't defined, are in collusion?

Competition, not collusion.

> Do you have proof of this?

The terms and conditions for the various platforms. You know, the stuff you have to click to confirm you’ve read and agreed to but virtually nobody bothers.


Just because terms and conditions are similar doesn't mean there is collusion. Your claim is that this grouping of companies, which isn't even well defined per the wiki article, is taking actions as a unit.


I'm not saying any such thing.

I've explicitly and repeatedly said competition not collusion.

When you asked for proof, I thought you were asking for proof that they could remove people from their platforms for reasons other than legal obligations; that is what I am saying is evidenced by the T&Cs.

You asked me "What power do they have?", and this is that, and https://news.ycombinator.com/item?id=32876080 was about that too, and was not whatever you're trying to make this about now.


We shouldn't accept this dystopia with our heads down. We should all call for regulation to stop these enormous companies and their algorithms from treating everyone else as insignificant rounding errors.


Recently Farcebook has started recommending incredibly toxic anti-SJW shit to me for no apparent reason. Either they turned the "let's incite toxicity" knob to 11 or somebody is seriously abusing them.


Possibly, but that's not the only explanation: my feed is now more than 50% suggested/promoted content, including Fox News Tampa, even though I'm living in a different city in a different state in a different country on a different continent, and almost all of my friends are further to the left than even the left-most US Democrat.

If I had to guess, the bottom has fallen out of the advertising market and every business reliant on selling adverts is getting desperate.


The FB feed has become utterly drenched in suggested content, 99% of which is trash.

I'm sure they will see some short term bump but I've got to wonder if this is finally the end for them. I'm certainly not bothering to check it any more.


Facebook still has billions of users. It's just that its hope lies outside of the West.


The money isn't outside the West...yet.


It reminds me of stores: you can get a feel for how well a store is doing by how aggressively they try to shove the loyalty card on you.

See also: magazines, watch the signal to noise ratio plummet as they try to prop up revenue with more and lower quality ads.

I get why they do it: they are in a death spiral, desperately trying to find that one thing that will save them. Mostly I think it just hastens the end, as the bad experience turns people away.


I always wondered why physical shops offered a much worse experience in many ways than online ones, even taking into account the obviously insurmountable logistical advantage of the latter. I mean you'd expect them to at least try to have better service?

The answer of course is that their only choice is to focus on their captive audience (people who can't or won't buy online) and extract as much as possible until the music stops.

A similar thing is, ironically, happening to Netflix.


Just yesterday, Facebook started including in-stream ads for Creation Research. I'm a scientist whose work is on evolution. I also see several other ridiculous ads that have no relevance to me.

I think it's more likely the folks running the product's machine-learning recommendation engines are asleep at the wheel; after all, Mark lost interest in his core product to promote AR, so why would the folks running the core product care?


Eh, that seems pretty relevant: the ads are about evolution, you work on evolution related stuff. It's just the ad targeting engine doesn't understand that you're going to fundamentally disagree with the concept being advertised. That's probably an edge case though. 99% of the time irrelevance for advertising means you just have no need for the product being advertised, not that it's the polar opposite of your worldview. You probably don't remember the ones that are merely useless, though.


I do expect that recommendation engines should pick up on, and not show me, advertising content that is fundamentally nonscientific (and show those ads to other folks whose profiles are more consistent). It demonstrates that the recommendation algorithm can't differentiate between two clusters that use similar words, but different concept vectors.
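
A toy illustration of what I mean by "similar words, different concept vectors" (pure Python, nothing resembling a production recommender; the example sentences are made up): a plain bag-of-words cosine similarity scores two opposing statements as close, because it only sees the shared vocabulary.

    # Toy illustration only: bag-of-words similarity cannot separate
    # "pro-evolution" from "anti-evolution" text, because both draw on
    # the same vocabulary. Telling them apart requires concept-level
    # embeddings, not word counts.
    from collections import Counter
    import math

    def cosine(a, b):
        dot = sum(a[w] * b[w] for w in a)
        norm = math.sqrt(sum(v * v for v in a.values())) * \
               math.sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    pro = Counter("evolution explains the origin of species through natural selection".split())
    anti = Counter("evolution cannot explain the origin of species without a designer".split())

    print(f"bag-of-words cosine similarity: {cosine(pro, anti):.2f}")  # high, despite opposite claims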


Well, you don't know how it was targeted. Maybe they don't want to preach to the choir, so such ads might be targeted at people who are interested in evolution. Still, it seems unlikely that the scientific-ness of something can be determined by an AI model at all, let alone by word vectors alone. What is and isn't science can be hard to rigorously pin down; that's why pseudoscience is problematic in the first place.


These are all reasonable points, but since I have lots of experience building recommendation systems at places like Google, I have a pretty good understanding of what the embeddings are capable of learning (even so, Google News still does the same thing occasionally).


For me it's mostly Elon Musk-endorsed, government-backed $250k/yr guaranteed return investments in bitcoin.


Indeed. By US standards I'm somewhere left of Bernie but I keep getting recommended Jordan Peterson reels on Instagram. Not sure what's going on.


[flagged]


Ah yes, the toxicity of "let people be themselves in peace".


While I have never directly encountered toxic social justice activism (not even an ex's mother whose activism made me realise "Champagne socialist" was more than just a right wing straw-man), there is no cause so pure it cannot attract numpties.

There is a video (don't know if it's real, staged, missing context, a one-off, whatever) of some activists going to a restaurant and apparently cajoling diners to agree with them by using the slogan "silence is violence".

(Trouble is, with stuff like that, in any cause, it gets amplified by both toxic opponents and socially inept supporters.)


Depending on what you mean by 'directly', it's been fun to see people say people who look like me are "born to not being human" and another (a professor of education, to boot) assigning reading (not as a warning) saying "white people are born human but abused into whiteness".

There's a TON of double standards. Asian people want to cater to their own interests? Understandable. Whites would want to do the same? The first question is "who do you hate?" as if that was the only possible motivation.

If I went to America they'd call me a colonizer when only 150 years ago my ancestors couldn't even conduct their official business in their native language and some time later huge chunks of them were killed or taken into slavery by Russians - slavic people whose very fucking name means slave because they got oppressed that badly back in the day.


By "directly" I mean nobody has ever made, or tried to make, me feel bad about being a naïve [0] rich white dude.

Plenty of other things about me (sexuality, politics, my facial hair, formerly my interest in paganism) lead to people trying to get at me, but not any of the issues normally associated with SJW.

[0] I self describe as naïve here because of all the times I've had to update my world view because British history lessons lacked important details like "why did the Irish and the Indians not like British rule in the first place?" and "Kenyans and Cypriots literally took up arms against the British" and "Malta was under British rule and asked for a seat in Westminster".


“The surest way to work up a crusade in favor of some good cause is to promise people they will have a chance of maltreating someone. To be able to destroy with good conscience, to be able to behave badly and call your bad behavior 'righteous indignation' — this is the height of psychological luxury, the most delicious of moral treats.”

― Aldous Huxley


Rather much more than that, sadly. Would be much more tolerable if it was just that.


Title (and headline of TFA) is misleading: Facebook didn't ban a film; it barred a user from its ads program based on the film's title. Will be interesting to see if the lawsuit goes anywhere.


It banned a film from its ad program. And the user who tried to advertise it. And the composer of the title track.

I mean, they're not banned from Facebook, but the title didn't say that either. It said that the company Facebook banned a film. Which it did.


They can presumably advertise the film under a different title, with a clickthrough to info with the proper title.

Facebook should have better credibility vetting, for things like movies distributed by recognized distributors with a good reputation.


Interestingly, according to IMDb, the "blue eyes" title is already the second one used for the film. Why they'd choose something so dicey by Facebook's standards, when Facebook ads were supposedly such a crucial part of their business plan, is unclear.


Not the composer, who had his entire account suspended from the ads platform for this. I assume the ad account for the film would also have been similarly suspended.


Does anyone know if these decisions are made by U.S.-based checkers, or is it outsourced? When FB does something as asinine as this, I must consider that it's a horrible cubicle farm in S.E. Asia.


I bet you my now-defunct and cobwebbed Facebook account that they're using AI algorithms for large-scale content moderation, with individual "suspect" cases being forwarded to an Amazon Mechanical Turk-style operation where people make pennies on the dollar following strict lists of instructions, and where any deviation or any leeway in free thought is punished by immediate dismissal.


Book burning used to be a thing. Now it's content moderation.


I assume they ban pretty much everything about the period 1925 to 1945, as race, national origin and probably gender were really prominent. And talk about the violence…


I follow an FB group whose topic is the International Brigades; many of the discussions and posts are about the Spanish Civil War, though not exclusively. Race, national origin, sex and violence are freely discussed, and the group doesn't seem to attract any untoward attention from FB's algos.


I would think that inconsistency in how things are banned would increase FB’s legal risk, but maybe they (like all rich things) are exempt from legal risk?


Censorship morons on parade. Obviously we need to return to a First Amendment ethos. Obviously we never should have abandoned it in the first place.


Honest question,

What would happen if we assumed that those with low self-esteem self-select to match the toxicity in their heads, in their choices of social media consumption and social media platforms?

Somewhat the kissing cousin to the observation that the 1-percenters on social media who create content, such as GaryVee, do not use SM platforms the way the bottom 99 percent do.

I.e., SM platforms are a mirror, showing that we still have a somewhat broken societal structure.


The biggest problem with this AI approach is that actual bad ads (scams, hate speech, etc.) are getting through.


Facebook and censorship are synonymous.


They should think about banning themselves for their dystopian mechanisms.


First we demanded they censor things. Then we got upset they censored things.

Turns out, there isn't one global human standard on what is worthy of being censored. And this is one reason people only think they support censorship, when they really don't...


This is basically the ad for the movie now. I'd never heard of it until now, but now I'm interested.

“Come see the anti-Nazi movie Facebook doesn’t want you to know about!”


'Bigteched'


Facebook has an explicit policy regarding Holocaust denial content: https://about.fb.com/news/2020/10/removing-holocaust-denial-... Given their human-reviewed decision to "permanently restrict" the advertising of "Beautiful Blue Eyes", this policy appears to be a disingenuous public relations stance. As director Joshua Newton contends, Facebook's tunnel vision adherence to their keyword flagging algorithm acts as a significant agent for the Holocaust denial movement.
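
It's easy to see how a context-free keyword flag produces exactly this outcome. A hypothetical sketch (the phrase list and logic below are invented for illustration and do not reflect Facebook's actual ad-screening system): a naive blocklist match on the ad title blocks a Holocaust memorial film while letting genuinely hateful phrasing through.

    # Hypothetical illustration only; the phrase list and matching logic are
    # invented and do not reflect Facebook's actual screening system.
    RACE_TRAIT_PHRASES = {"blue eyes", "blond hair", "skin color"}

    def flags_ad_title(title):
        lowered = title.lower()
        return any(phrase in lowered for phrase in RACE_TRAIT_PHRASES)

    print(flags_ad_title("Beautiful Blue Eyes"))              # True: memorial film blocked
    print(flags_ad_title("Master race, blond and superior"))  # False: actual hate slips through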


"This is the action of haters – and there are sadly many in our society – who seek to damage the film in order to trivialize the Holocaust" Newton told Rolling Stone at the Toronto Film Festival.

There are a lot of stupid things Facebook is doing, but why, of all things, would you accuse them of trivializing the Holocaust? If anything, this kind of accusation (using the Holocaust as a defense for everything) trivializes the Holocaust.


Why? To force them to respond.


I think this might be implicitly accusing holocaust deniers of mass reporting and spreading misinformation about the content of the film so it gets pushed up the reporting queue and a Google search returns at least some results that appear to verify the false claims. This happens often enough to be recognized in marginalized communities.


That's the current zeitgeist: making extreme accusations, devaluing them, and training people to not believe them by default.

The chaotic evil in me loves this because it opens up many ways to get away with bad takes.


> Mark Zuckerberg has created a monster that has no oversight.

You don’t say? I’m shocked I tell you.

Hopefully this will at least generate more buzz for the movie than Facebook could have.


They tell you that there's no oversight so that they aren't held responsible.


With the recent court decision against social media company censorship, this may become a major legal problem for Facebook.

https://www.dailywire.com/news/fifth-circuit-deals-huge-lega...

We can only hope!!


[flagged]


There's a sector of society that unironically uses phrases such as "anti-White hate-porn".

Do you really want to be part of that sector?


I'm sorry, appealing to in-group/out-group bias is a horrible way to address a point, regardless of what the point is.


Debate is literally appealing to in-group/out-group bias. The purpose of debate is to try to influence that bias.

The weird thing about debate is that the most ridiculous claims have the most evidence against them, and are therefore the most work to refute with sources. Claiming that the earth is flat takes a lot more evidence to refute than claiming that the earth has slightly incorrect parameters of oblate spheroid-ness.

This is played upon by certain people. Trying to explain why a concept such as "anti-white hate-porn" is just not a thing is like trying to explain how we're pretty sure we actually landed on the moon.

This comes back to the fallacy of "equal time for both sides of the debate".


https://pubmed.ncbi.nlm.nih.gov/34039063/

What would you call this, then? Sober, grounded analysis?

I'm sorry, but people getting away with shit like "white people are born to not being human" or "white people are born human, but abused by their parents into whiteness" is rather tiresome. People can just write 1930s Germany grade horseshit because the target's socially acceptable to the set in charge of a lot of social institutions.


>>This is fundamentally anti-White hate-porn that plays into tired and often ridiculously racist "Nazi" stereotypes.

I literally cannot believe anyone would write this in earnest. Please tell me it's some kind of (very poor) joke.


Whether or not OP is serious, people holding opinions like this (e.g., it’s possible to be racist against Nazis, who incidentally are not a race but rather a political affiliation) certainly exist. It’s a sad state of affairs.


[flagged]


I think the main reason for the discrepancy is that Nazis were active in Europe, which was culturally much closer to the US than East Asia.

For example, we have a lot more literature about the Holocaust than about Japanese atrocities in the Far East. One of the reasons is that the European population of the 1940s had much higher literacy levels and so there were enough people to actually write their experience down.

But it is also way easier to translate from, say, German or Dutch into English, than to do the same for Tagalog or Burmese.


[flagged]


Not the best place to play with semantics. There is a reason there is deep animosity between Japan/China and Japan/Korea.

Over just 1937-45, estimated war crimes resulted in 3 to 30 million deaths ...


[flagged]


? You do not end up killing millions of people without systemic targeting and industrialized organisation. The fact that it might be called genocide or democide or something else is semantics ...

Anyway, my point (albeit unclear) was more that you are arguing against the premise of the parent post: "Germany and Japan committed roughly equivalent atrocities", which I believe is quite difficult, and in a way that reinforces the parent, since he appears to believe there is a systematic demonization of the Nazis as a way to target white people by proxy (anyway, that's how I interpret "great punching bags by proxy for what the author really wants to say").

Instead of arguing against his association, "you'd expect that there would be a roughly equivalent denunciation in our literature", which I believe is way easier to refute; and in the specific case of this movie, the choice of the Nazis might be easier to explain by the director's family history than by a hidden agenda.


> ? You do not end up killing millions of people without systemic targeting and industrialized organisation.

No, they do. That's how any war runs. Countries don't start wars with plans for wiping out the citizens in the most efficient way. If a massacre happens, then it's something more spontaneous, happening on the spot.

> The fact that it might be called genocide or democide or something else is semantic ...

No, it's a significant difference in intention and execution. Germany had elaborate logistics for transporting their victims through the country and the continent. They had death camps with full planning on how to kill people. They even started the war with the side-intention of "cleaning the world".

Japan, like every other invader, did nothing like this. They started wars and accepted that people would die, but this was not their goal. And neither were the massacres and other crimes their goal; they were just something that happened along the way.

> Anyway, my point (albeit unclear) was more that you are arguing against the premise of the parent post: "Germany and Japan committed roughly equivalent atrocities", which I believe is quite difficult, and in a way that reinforces the parent, since he appears to believe there is a systematic demonization of the Nazis as a way to target white people by proxy (anyway, that's how I interpret "great punching bags by proxy for what the author really wants to say").

Sure, but your reasoning is wrong. Japan did bad things, but they are just one of many evil empires in history. They are not exceptionally unique in what they did. But the Nazis were unique and exceptional. That's where your argument fails. Japan is just one of many, and people denounce those many more or less equally. But there was just one event on the scale of the Nazi crimes.



[flagged]


> with atrocities which are not well known to western audiance

You say that as if this is a position the west found itself in rather than put itself in. Westerners generally aren't interested in Chinese corpses except as a justification for making more of them.


[flagged]


It could easily have a liberal bias and still ban you unjustly. STFU is hardly anything compared to people wishing their opponents got raped.


What's "Terf"? A name?


https://en.wikipedia.org/wiki/TERF

Trans exclusionary radical feminist

Although sometimes directed at people who are just transphobes, it's aimed at some feminists who don't consider trans women to be women or trans men to be men, and who hold very negative anti-trans views. See the JK Rowling controversy.


I mean, that is also the first thought I had when I read the title, even before delving into the article. If instead of "beautiful blue eyes" it was "silky smooth pale white skin", would it then be different?

For once I think Facebook is right. Poor choice of a film title, that, especially considering the theme of the film.


I assume the title is making a deliberate point.


There were lots of people with beautiful blue eyes and silky smooth pale white skin that perished in the Holocaust...

Eyeballing the difference between an Ashkenazi Jew and a regular person from Central or Eastern Europe is mission impossible.


[flagged]


This is literally the core point of the damn movie. I assure you that the director was aware.



